10.5446/20797 (DOI)
Thank you for the introduction. Thank you for inviting me here to re:publica. It's actually the first time in a very long time that I'm in Berlin. So, I'm a politician. That took me a very long time to learn to say, but hey, I'm a politician. Today's topic is going to be how we apply open source collaboration to changing policy in the world. We've worked with open source and free software for at least 30 years by now, right? But these principles, it turns out, can be applied to much, much larger things. And we've learned how to do that. That's what I'm here to share today. My Twitter handle is Falkvinge. I love seeing my name on Twitter; regardless of whether it's good or bad, any mention is better than no mention. For convenience, it's also on every slide, should you forget it.

So, a quick introduction. How many in here have heard of the Swedish Pirate Party before? Let's see a show of hands. Okay, when I do this talk around the world, that's usually between one half and two thirds, but here it was practically everybody, which was kind of expected, actually. Just for kicks and laughs, how many have heard of any other Swedish party? One, two, a few. So, yeah, scattered hands like the rest of the world. And I think that's kind of fun, because it shows just how much of a lifestyle movement this is, and people hear of it. Even though we're a fairly minor party in Sweden, people have heard of us when I speak in San Francisco, when I speak halfway across the planet.

So, a very quick introduction is that we love the net. We love copying and sharing. We love civil liberties. People call us pirates for that. But rather than being ashamed, which I think is their intention, we decided to stand tall about it. And we've now been rewarded with two seats in the European Parliament, 19 seats in German regional parliaments, almost 200 seats on local councils across Europe, and we exist in 56 countries at last count.

So, today's speech is going to be a little bit about what experience we drew on building activism, working swarm-wise. Compressing six years of pioneering organization theory into 40 minutes means that we are going to cover a lot of topics and only cover the most important parts of those topics as we go. But I can almost guarantee that there's going to be something interesting here for everybody, because we've learned so much about how you just change the world.

But first, what is a swarm? I see it as a new form of organization that's been enabled by affordable mass communication, where today somebody can organize hundreds of thousands of people in their spare time from their kitchen. That was a physical impossibility just 10 years ago. It was not anywhere near reasonably possible. So, seeing that ordinary people can get these kinds of movements going, what do we learn from that? Some swarms are leaderless, like Anonymous. While they certainly can have an impact, I find that a hybrid, where you mix in just a few percent of people who take formal responsibility with a huge swarm of activists, can make the biggest difference. It also allows you to interface to the old kinds of organizations, as they look for something that resembles them, and if you have just an interface layer that does, then you can work with them. So, that's how I see a swarm. And the most important thing about it, the most important contrast to old-style organizations, is that in a swarm the focus is on what everybody can do all the time.
In contrast, if you're working in a corporation or in an NGO, the focus is usually on what people must do or what they cannot do. Here, the focus is on what everybody can do. So, it's a huge enabler.

Starting from the top: how do you bootstrap a movement? Say you've got something interesting you want to do. How do you do that? How do you go about planting the seed that becomes something that spreads to over 50 countries and, hey, takes seats in parliaments? Turns out, it's not enough to be interesting. You need to do three things. You need to be tangible, credible, and inclusive. And, of course, interesting.

You need to be tangible. You need to be very clear: this is what we're going to accomplish. We are going to do this, and it has to be so clearly expressed that people will realize that if they spend their spare time helping this goal succeed, it coincides with their personal goals. That is the key to having the swarm succeed. People must realize that an hour spent in the swarm helps their personal goals more than if they had spent it on their own.

You need to be credible. You need to show in the plan that this can be done. And you also need to reinforce that every single day. We can do this. We can do this. We can do this. That's part of the psychology, because usually a swarm forms around something utterly impossible. If I had told you guys seven years ago, before the party was started, hey, let's form a new party that spreads to over 50 countries and changes the world, what kind of idiot thinks they can do that, right? But it turns out you can. This is where I have my famous line: don't shoot for the moon. That's been done already. Shoot for Mars and show that you can do it.

And you need to be inclusive. Everybody needs to see immediately that they can contribute to this project and how they can do it. It's not enough to say "we are going to do this, period." You need to say, "we are going to do this, and this is how you can help make it happen."

Once you've done that, you just need to publish this, publish your ideas, and it will find its way to social news sites. You don't need to worry about it finding its way if it's interesting enough. When I founded the Swedish Pirate Party, I put up a really ugly website with this kind of plan. And I mentioned it once in a chat channel in the file-sharing lobby. Just once. I wrote two lines: hey, look, the Pirate Party has its website up now after New Year's, and here's the address. That was all the advertising I ever did. The next day it was in newspapers. And by the evening, there were 300 activists more or less holding out their hands to me saying, give me something to do. I want to be a part of this.

So speaking of activists, you also need the focal point, as in: if you want to be a part of this, here's how you can. So looking at a couple of examples of how you do and don't do this, I've seen many initiatives start out like this: essentially a huge game of bullshit bingo which draws no interest whatsoever. And afterwards, people ask themselves, what did we do wrong? We didn't draw any interest. No. Maybe that's because you're trying to raise quarterly profits by up to 2%. You're not going to Mars. You're not changing the world here. Another example I see much too often is this kind, where people just want to do something but they don't really know what, they don't know how, and in particular, they're just thinking that something is fun.
Of course, things should be fun, and I'll be returning to that. But you have to do the homework. When I started the Swedish Pirate Party, I knew that the math was there. There were 1.2 million people sharing culture in Sweden and they were being demonized. If one fifth of those were angry enough, we'd have a new party in parliament. You need to do the math. So a better example would be something like this: we're going to create one million new Tor nodes, and we're going to get a Tor client installed in 25% of the installed browsers by user count. This is doable. This would change the world forever. This would totally drop-kick every politician's aspiration of surveillance forever, and it's completely doable. Anybody's welcome to actually do this, by the way. I just used it as an example.

So how do you survive the onrush of hundreds of activists? Like I said, 300 activists holding out their hands to me on the first day. How do you deal with that? Obviously, you can't talk to them one by one. So the only way to survive that is to have them organize themselves into subgroups. I suggest 30 subgroups because of a magic number that I'll be returning to: 7, 30, and 150 are magic numbers when it comes to the human psyche and how we are social in groups, in group sizes. I suggest 30, but the important thing is that you divide them by geography, and I'll be returning to why later. So for what you'd do in Germany, you'd probably go by the 16 states, so I should have said up to 30 subgroups. You use something that's natural for the country where you start. And that's how you just kick-start it. You let them self-organize; you just tell them: go to the forum for your geography, elect a leader, I don't care how, just elect a leader among yourselves and get that leader to come to me. That's the kick-start. That's the easiest part.

So how do you move on? You need to build a scaffolding for this platform. You need to build a hierarchy of leaders. And many people sort of pull back at the sound of leaders. It's important here that these are not people who get to tell the swarm what to do. These are people who are responsible for keeping the swarm functioning. So it's typically 1 to 5% of a swarm that forms a traditional hierarchy. And you need leaders for every geography. So you'd start out by having a leader for Germany, one for Berlin, one for Mitte, for example, and you organize them in a geography tree. What that gives you is the ability to partition the swarm. Because once you hit these size-number ceilings, the swarm can no longer grow. Many, many of these initiatives start out, then hit a magic number of 30 people in size, or 150 people in size, and can no longer grow. What this gives you is the ability to work locally and still coordinate across the entire organization. And the most important thing: their role is not managerial. They don't get to tell people what to do. Think of them as janitors. Think of them as responsible for maintaining the swarm. Think of them as responsible for making sure that there are flyers, there are folders, there are things happening in the city that the actual activists can come and join. I suggest that, in building the geography tree, you have five sub-geographies per geography. If you had Berlin, you'd have five geographies under that in the tree. The reason for that is that that gives you a group of seven people leading that particular geography: yourself, your deputy, and those five leaders under you.
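As an aside for readers who think in code: here is a minimal sketch, assuming nothing about the Pirate Party's actual software, of the geography tree just described. A branching factor of five keeps each local working group at the magic size of seven (leader, deputy, five sub-leaders), and a ceiling of 150 flags when a geography should be split.

```python
# Hypothetical sketch of the geography tree described in the talk; not the
# party's real tooling. Each geography has up to five sub-geographies, so its
# leadership group is at most seven people, and a geography whose local group
# hits the 150-person "tribe" ceiling should be split.

MAX_SUBGEOGRAPHIES = 5   # keeps the working group at the magic size of 7
TRIBE_CEILING = 150      # size at which a geography needs to be broken up

class Geography:
    def __init__(self, name):
        self.name = name
        self.leader = None       # a responsible "janitor", not a manager
        self.deputy = None
        self.children = []       # sub-geographies
        self.activists = []      # people active directly at this level

    def add_subgeography(self, child):
        if len(self.children) >= MAX_SUBGEOGRAPHIES:
            raise ValueError(f"{self.name}: split this geography instead of adding a sixth branch")
        self.children.append(child)

    def working_group(self):
        """Leader, deputy and the sub-geography leaders: at most seven people."""
        return [p for p in [self.leader, self.deputy] + [c.leader for c in self.children] if p]

    def needs_split(self):
        """Flag a geography whose local activist group has hit the tribe ceiling."""
        return len(self.activists) >= TRIBE_CEILING

# Example: a national swarm divided along natural boundaries
germany = Geography("Germany")
berlin = Geography("Berlin")
germany.add_subgeography(berlin)
berlin.add_subgeography(Geography("Mitte"))
```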
The reason I'm going for seven here, and this is really a magic size, is that if you have a tight working group larger than seven, it starts breaking down. A group of eight will almost invariably fragment into four plus four. Seven is the largest size we can manage. The next size is 30, which is a typical classroom. Or, for that matter, a platoon. And the next size is 150, which is a tribe size. That is how many people we're able to know by first name. I'll be returning to this, but many, many informal organizations hit these ceilings and then are incapable of growing more. You need to actively break up groups that hit a ceiling, or growth will be prevented. Let this leader tree grow dynamically. Just create the empty boxes across the entire country. You might start appointing the very first leaders, and they will start appointing leaders in turn for cities, for city parts. And the first time you see somebody you've never heard of becoming a leader in your organization, as in becoming responsible for a part of a city, it feels magical, because you know the system is working. You realize that something is going on here and you're building something that goes beyond just yourself.

Right, I mentioned those numbers already. Keep those in mind. In particular, there will likely be an initial IRC channel or a chat of some kind with the first starters. Once that channel hits 150 people, you hit the first ceiling. That's the first time you need to break things up. Regular meetings in cities: people meeting each other is what builds an organization. And I'm not talking about minutes here. I'm not talking about formal meetings. I'm talking about pizza and beer. Don't over-bureaucratize it. You don't need minutes. You need to put people's faces to names. And you need a smile with a handshake, which is again why you benefit from dividing the swarm geographically from the get-go.

When I had meetings at the party leader level, I was very adamant that meetings are not work. Meetings are where you report on the work that you've done between the meetings. So don't go thinking that as long as we just have meetings, we're working. That's not the case. Also, I made a habit of saying that we are meeting for exactly one hour. We're starting at eight. We're cutting off hard at nine. That meant two things. A, people could always plan on getting off at nine, so they could do whatever they wanted with the rest of the evening. And B, whatever didn't get done in one hour wasn't important enough to get done at all. That creates a very hard prioritization and it stops people wasting other people's time. So I suggest having a hard stop in every meeting.

So once you've set up this boring traditional hierarchy, the important stuff comes, as you let loose the actual swarm, the tens of thousands of activists; the Swedish party still has 18,000 of them. And the focus is on what you can do, always on what you can do. We have something called the three-pirate rule in Sweden. And that means that if three pirates are in agreement that something is beneficial for the party, then it is. That means they have a green light from the highest authority in the party to act in the party's name. Traditional organizations would be absolutely scared out of their wits to give anonymous people, just heads in the background, this kind of empowerment. But guess what? I was the party leader for five years. How many times do you think this was abused?
In five full years and across three elections, it was not abused once. Not once. And we peaked at 50,000 members who all had this power. So that's a lesson: if you let people step up to the plate, they will accept the responsibility that comes with it.

Oh, and diversity here is key to success. The important thing is that everybody in this swarm is different. And that is one of the best assets the swarm has. We let people look at what other people are doing, not just inside the organization but outside it as well. We copy. We look and say, hey, that was good. That was good. Let's change it a little bit and then use it in our city. If you see a good poster being used in one city, you can see it sort of organically flowing through more cities without you at the top needing to do anything at all. It's part of the swarm culture. It's part of letting people observe, copy, remix, and reuse. And perhaps one very important aspect here is that you provide the vision: this is what we're going to accomplish. And you repeat that almost every single day. We can do this. We can do this. We can do this. No matter how impossible it is, we are going to Mars, God damn it. We are going to get into Parliament, God damn it. Yeah, we did.

And the swarm does the talking. You talk to the swarm, the swarm talks to its friends. And why is that? That is because different social contexts use different languages. And this goes against every piece of marketing you can ever read in an MBA. But the key here is understanding that most people just try to create a one-size-fits-all message and then broadcast that to the entire country. Well, frankly, that sucks. If I'm standing in front of a libertarian crowd, I could say that I think it's very beneficial for long-term economic growth that a link in the value chain can be cut out of the distribution logistics, and therefore we can connect consumers directly to producers of culture, creating new jobs in the cultural sector and creating opportunities for long-term growth as we get rid of this dead weight in the distribution chain. If I'm talking before a Marxist crowd, and I've done that too, I would say that I think it's absolutely amazing that the cultural workers have finally assumed control of the means of production for their own sweat and labor and are able to cut out these parasitic profiteering middlemen who have been profiting unjustly off of their sweat. And I'm saying the exact same thing. Hence, language is a very powerful social marker of inclusion and of exclusion. If you're using the wrong language in a crowd, they will disagree with you no matter what you're saying. Hello, train.

So this is why you let the swarm do the talking, and you trust them to do the talking, because they do it better than you to their friends. So everybody has the right and the duty to talk in the name of the swarm in their own language. That's key here. Without asking anybody's permission. Just like I said from the get-go, you don't ask permission. You know that you're empowered to act. After all, you joined the swarm because it coincides with your own personal goals.

Project management. How many here have worked as project managers? Let's see a show of hands. Okay, that's a couple. So for the rest of you: the first time you let a project self-organize, it feels like bloody magic. You just tell a crowd, and you're full of passion: hey, we're going to do this. We're really going to accomplish this. But you don't tell them how.
And the magic is, you don't need to, because if you get the crowd electrified over the magic of reaching the goal, they will self-organize to make it happen. What you need to do is to be very clear on what the goal is, and communicate how far we've come towards the goal. And you need to do that every single day, or at least every single week. You need to reiterate: we can do this. We can do this. We can do this. Remember, you're in the center of the swarm. You need to broadcast the message to the entire swarm, in all directions, as to where we are. You need to help people explain to their friends why they should be joining. You need to tell people what the problems are and what the indicators would be that we've overcome them. So again, self-organization: it works.

Democracy sucks in a swarm. And why is that? We've touched on that a bit already. The reason is that democracy is one mechanism for conflict resolution. Conflicts are when Alice and Bob disagree on what should be done. Bob thinks that Alice should do something and Alice thinks that Bob should do something else. Two people are in disagreement on what the other one should be doing. But we've already said that one of the keys to a swarm is that people act individually according to how they think the swarm is best served. And that's why it's so powerful. You don't need that kind of conflict resolution. And it's worse than that: it's actually harmful. There are four ways to solve such a conflict. You either require consensus among everybody before you do something. You hold a vote in which 51% wins over 49%. You have a dictum from the top that determines the outcome of the conflict, which is bad, very bad, in a swarm. Or you don't let conflicts arise in the first place, because everybody is empowered to act in the name of the swarm, and nobody gets to tell anybody else what to do. And this is how a swarm works. Nobody gets to tell anybody else what to do. If you have a vote, that means that 51% of the people get to say what 49% cannot do. And you create losers. Voting creates losers. That's part of how the process works. But we've already seen that diversity is key to the swarm's success here. Those 2% that, if called to a vote, would just be shot down in flames might very well be crucial to the success of the swarm, because only they can explain the swarm's vision to a group that is key to the long-term goal. If we had a majority vote on what to do, those 2% would never be able to explain that. Everybody is empowered. Don't hold a vote and shoot them down.

Maintain a power base. Again, this sounds very traditional. But what I'm talking about is that as soon as you have started getting some sort of success in your swarm, you will inevitably get a number of organizational astronauts who have only seen traditional organizations and know how they work. And they'll come, and there's this saying, right: if all you have is a hammer, everything looks like a nail. And they'll come with their hammer and start banging at your swarm, at your organization. But that's not how the swarm works. You'll have many, many people insisting that this is a great initiative, this is a great swarm, we are on a great route to success, but you must make these and these and these changes, because that's how I know an organization works. And if you do that, you're going to lose the key values. You're going to lose the key goals. And some things need to not be up for discussion.
Remember now, people join the swarm because they feel that the goals of the swarm coincide with their own. There's an implicit assumption there that the goals of the swarm are clear. If those become up for discussion, if the goals of the organization and the methods of the organization come up for discussion, then nobody will know what the swarm is for. So you could just as well be pulling the emergency brake on new recruitment. It needs to be absolutely clear what this swarm is about. And that preferably needs to not change at all, or if it changes, that needs to happen very, very slowly and in a controlled manner. Again, you'll have no shortage of people trying to hijack the swarm for their own favorite purpose. We should sell mustard instead. We shouldn't go to Mars. Well, it's nice that you want to sell mustard. I like mustard, but this swarm is about going to Mars.

Social connections. That's our forte, right? We know a lot about those. There are two key insights with social connections. First, the swarm only grows at its edges. You sit in the center; everybody around you has already heard of your project. As you broadcast status, as we spoke about in project management, as you broadcast all the time that we can do this, we're here, we've come a long way, we can do that, we're almost there, we're halfway, et cetera, you're broadcasting that to the edges of the swarm. And that's where it can grow. That's where there are people who have recently joined and who have friends who might be interested. As a corollary to that, it is absolutely vital to have fun. That's more important than just having fun in your life. It's actually crucial to the swarm's success, because people go to where other people seem to be having fun. The more fun you seem to have, the more activists you'll recruit. It's ridiculously simple, but it's actually as simple as that. Having fun is crucial to the end-game success.

And as we spoke about, there's this activation ladder. I'll be mentioning that more shortly, but as you activate towards the swarm, you initially hear of it somewhere, then you may be contacted by, or contact, somebody who self-identifies with this movement, you go to your first meeting, and you gradually climb up what we call the activation ladder. And it's important to understand that at the edge of the swarm are those who have just recently joined. They are not aware of what happens around you. They are not aware of the entire history of the swarm. You need to constantly reinforce this. You need to help them climb this activation ladder. You need to repeat and repeat and repeat with every new wave towards the edges of the swarm. Then, once you start a project, examine the trajectory that new activists take, and see if there's a particular step on the activation ladder that seems hard to climb. Like, is it hard to become a member? Is it hard to become an activist? Is it hard to find the first meeting? Is it hard to get in touch with somebody? Each of these is a question that needs to be asked in order to have a successful swarm.

And finally, own the media. And what I mean by this is that you need to own your issue in the media. If you want to go to Mars, every single time Mars is mentioned in a newspaper or in a story, the reporter should think of you. What this does is that it helps them write the story. They'll call you asking for quotes. That means you get to be a part of the media. That means you're building the brand of your swarm. And the deal here is that you need to think like a reporter.
Think in terms of how they're writing the story. As they sit down to write a story, what do they need? Give them that. Usually, it's quotes. These people say this, those people say that. If you can help them get that, you're helping them do their job. If you're helping them do their job, you'll get quotes in the media. Newspapers, radio, television. Hey, it's old media, yeah, but it's still got a lot of influence. So the key here is that you need to watch the same news sources as the reporters do. You need to catch the news at the same time they do. And you need to realize that at the same time they see the news, they start writing an article about it. When that article has hit the papers, it's too late, because the article is finished. Same thing when it hits the online websites. You have at most 60 minutes, probably 40. Because once the reporters have written the story, they're moving on to writing about something else. You need to get your quotes into their hands, into their email inbox, or whatever means you have, while they are writing the story, while they are thinking: who could I possibly quote on this? That's the magic moment when your quote needs to appear. Oh, just what I needed.

You need to train on this, because this is hard: from seeing an event, deciding that this is press release material, to getting the press release out there. That's really, really hard to cut down, here down to 25 minutes. One key is using an Etherpad, or an Etherpad clone, like piratenpad.de. For those of you who haven't seen it, it's a multiplayer notepad. You train on writing press releases: you just jump in there, five to seven people, and you write it all at once and then send it off. Doing this on your own becomes very hard. And if you require somebody to verify it, there is absolutely no way you're going to get down to the 40-minute limit. You need to trust the people awake at the time to do this.

And media loves conflicts. This is actually the last slide, but it's very relevant, very, very relevant. If you portray an event as, well, we might see a small problem with the implementation of this directive, nobody gives a shit. If you say that these politicians are behaving like drunken, blindfolded elephants trumpeting about in a porcelain factory, then your quote ends up in the newspapers. If you're colorful, if you're provocative, if you're creating conflict, then you are going to be in the media. Because the conflict narrative is what rules old media right now. Newspapers, radio, television. You literally need to pick fights.

So, wrapping up, this has been a short version of a book that's due later this year, Swarmwise. I've been picking the most important topics as a very concrete how-to in how to change the world: dealing with media, building the organization, trusting people, doing project management. It's due later this year. And in the Swedish Pirate Party, we do have software to do all this. We've built it ourselves; unfortunately a lot is hard-coded at this point. It's public domain, of course, and we are generalizing it so that other organizations can use it. And if you would be interested in having the abilities I just described, we are looking at pilot applications right now from organizations that rhyme well with our values in terms of net liberties and so on.
It does member, officer, activist, and volunteer management at every geographical level, so that a city leader can manage his or her area completely independently from every other part of the organization. It's entirely decentralized. It is empowering at every single level. And if you have a member sign up, that member writes his or her address, and he or she is directly placed at the right level in the geography, and the people responsible are now defined. So they get a mail saying, hey, there's a new member here. Why don't you give him or her a call and just welcome them? That would be utterly impossible if you didn't have that kind of decentralized, automatically sorting organization. And it also does press releases, which I just described. We're just typing them into a WordPress blog, and once it gets published, it's sent to the categories of reporters we described: civil liberties reporters, technical reporters, political reporters, or local reporters for that matter, or all of them. And it does quite a bit of other things too. So if you would be interested in taking part in the pilot starting late this summer, please contact me afterwards. And that's it. Questions?

Hi. My question is a little bit of a meta question. Would this, let's say, let's call it technology, or, how can I say, approach... does it have an intrinsic democratic... Excuse me for interrupting. To those leaving right now, I just want to say before you leave: thank you for your attention. Okay. Sorry. Well, my name is Philip Stanopska. I come from Macedonia. And I'm interested in whether you see this approach, which is obviously done in a democratic fashion, as having some intrinsic value which would influence the democratization and liberties in society as a whole. Or could this, for instance, software, which has a lot of staff management capabilities, be used by people or organizations which are not democratic, and then increase their power to become more controlling, to control more people instead of empowering them?

So I think that's an excellent question. First of all, when I was a bit provocative here and said democracy sucks, that was referring to internally within the swarm, because everybody's empowered. What this also does, of course, is that it flattens the organization tremendously. And being a political organization, that means that there's no distance anymore between the elected and the activists, or for that matter the voters. So I think it has a tremendous democratizing effect to trust everybody, to let everybody act, because it removes distances. So you could certainly use this as part of a democratic organization. I mean, after all, we are a political party. We are upholding the democratic system. So yes, you could use it as part of the democratic system. But could you use it to support a dictatorship? I guess you could do that. After all, it's managing people in an organization, but it's not built for that. So it's not going to be very supportive. It's built to let everybody have a voice. And dictatorships don't usually like that. Exactly. So it's not going to help them very much, even though they could try to use it. I think it would be a bit like a square block in a round hole.

Hi, Daniel Schwerd from the Pirate Party Nordrhein-Westfalen. Do you know about how many pirate parties worldwide have the three-pirate rule implemented? Because I think sometimes in Germany, especially, we have a problem with legitimation.
What about, do you know, is this often used, the three-pirate rule, worldwide? I don't know how many pirate parties use the three-pirate rule. I do know that the Swedish and Finnish pirate parties use this software we developed. I know that Pirate Party UK was interested in using it. But I think that in general we share the same kind of philosophy, that trusting activists is a good thing, although we might go about it in different ways. I use that as an example here in terms of just how powerful the swarm can be if you just let it.

Well, we have the problem that usually you have to search for legitimation if you want to do something. And usually in Germany especially, you have to go through the whole meeting of all the party members to find something out and to develop a strategy or something. And the three-pirate rule would be something which would be much more effective, or much faster, in coming to a solution. So do you think it would be a good idea to start again with the three-pirate rule in other countries? I think you could pick it up at any time you want. And what you say here was one thing I forgot, which is actually key. And that's the realization that no matter how much you prepare for a specific event or a campaign, it can always go wrong. It can always go wrong. You can have a huge budget and an enormous advertising agency, and you can still come up with the most ludicrous mistakes. And once you realize that the percentage of things that go horribly wrong is fairly constant regardless of how much you prepare, then you can instead optimize for speed and trust. Once you know that a small amount of things will go horribly wrong, then you can go into this Zen mode and realize that, yeah, things will go wrong. We'll deal with that when it happens. Let's optimize instead, as you say, for speed. Thanks.

First of all, thanks for your talk. I find it really interesting how flat hierarchies are being explored in the political process. I know them from business, actually. I know some businesses in the US which try to do that, where actually everyone is fully empowered and even the owners of the company have to pitch to their employees to get a certain idea done. And if they don't find a way to do it, then it doesn't get done, which is quite interesting. But my actual question is something else. Could you show a bit of the software? I'm sorry? Could you show a bit of the software? I can't, because there's no Wi-Fi. But you could, no, you don't have a login. You could go to PirateWeb.net, where it's currently at, and you'll just see a login screen and not get any further. You'll see the new generalized interface if you go to pirate.activizer.com, and you'll also just see a login screen. But I can demonstrate it to you later if you like, once we find Wi-Fi somewhere. Thank you. Just grab me if you have a phone with Wi-Fi or something like that.

I've got a quick question about your numbers, 7, 30 and 150. Can you explain the jump between 7 and 30? Because I can kind of understand logically the difference between a group of 7 and 150, but where does this 30 number come from? What do you use that for? So, 150 is the maximum size of a tribe. That's the number of people you can know by first name. The 7 is the optimum working group that you're working with daily. And 30 is the in-between stage, which is much fuzzier, but that you can still get a feel for. It's the amount of people that you know something about.
It's the amount of people you can successfully work with when you're working in several teams in parallel. It would be your extended family. It would be a class in school. It would be your group of project teams rather than your single team. And you'll easily see meetings in a city start hitting 30. That would be the typical example. Once you see physical meetings start hitting 30, you've hit the ceiling. You need to break up that geographical area into two parts, like the north and south parts of the city, or similar. Okay, thank you.

I've got a question about the way of information input for the whole swarm, in your idea or opinion. Could you speak closer to the mic? Yes, of course. I've got an idea, or maybe a question, about the information input for the whole swarm you described to us. How can you, or the swarm, manage the information input when one part, maybe a group of three, gets an idea and creates something, and all the other parts get this information? Because in my opinion, it's a really big problem that we also often have here in Germany in the German pirate party: you get an idea and it's really, really hard to search every part of the information networks. Has it already been done? Is anybody else working on it? Or did I do something totally wrong because it's not the opinion of the other parts, and am I working against them, or doing the work a second or third time? How can we manage that better?

I think you're onto a very crucial issue here that I didn't mention, which is: first, if something good happens, how can you publish it for others to see? Second, if you do something, how can you know if it's appreciated or not? And third, if you have a proposal for the whole party, how can it get visibility in order to get support? Was that correctly summarized? And there are a few ways to accomplish this. One is to have some sort of centralized information bank; the Swedish pirate party started out with a forum, the German pirate party has a wiki. But at the end of the day, this is up to every swarm. You need some sort of centralized information repository where people can at least post things that they want to become part of the swarm. Getting visibility for them is as hard as anything, drinking from the constant information firehose. But as for the middle question there, how do I get appreciation when I did something good? This, I think, is absolutely key: that the leaders of the swarm, the geography leaders, the people who take care of the swarm, see, recognize, and reward, just with their attention. That's their job, and that's crucial to getting this reward culture that you need in order to have fun. So I don't have one specific answer for what kind of information infrastructure you should build. What I do say is that you need one, and it needs to be official. I'm not sure that was a good answer, but that's the best I can give.

Yeah, you said that within the swarm, democracy sucks. And you mentioned conflict resolution. And you said, let Alice and Bob do what they want, and then give them the trust. But how do you do it, for example, with political content? Let's say, in Germany, fare-free public transport: are you in favor or against? How do you solve those content conflicts? This is a very good question, and my specific example here was: if you're going to Mars, well, you're not going to change your mind midway and go to Jupiter instead. So you don't need to vote on that. But if you have a political party, that means that by nature your goals are changing.
Your platform is gradually evolving. In the book, I describe this in greater detail, and I portray the German Piratenpartei as a great example, because what the activist swarm cannot afford is to have half its activists branded as losers. That kills creativity. That quickly kills engagement. So what the German Piratenpartei did was make sure that you have a longer cycle of engagement in the LiquidFeedback, liquid democracy cycle, which makes people feel part of the decision even though it might ultimately not go their way. And yes, in this case you probably do need some sort of vote, because I don't think you'll come to a consensus. Consensus is obviously the optimal route, and if you can discuss it for long enough so that 90% agree, then that would be the best. But many times you don't get there.

Well, maybe one more personal impression, your observation of the development of the German Piratenpartei. Just a few words, maybe, about the differences and the development from when it started until now, what you have observed. Okay, so as a final observation of differences between the Swedish and German Piratenpartei and the development of the German Piratenpartei: there were two things that stood out that Germany did differently from other pirate parties. The first was that at the first Bundesparteitag, the German Piratenpartei realized that they were going to be around for a very long time. So they took time laying a foundation that enabled that. That's paying off now. The other pirate parties, and I could take my own, the Swedish Pirate Party, as an example, said: oh, there's an election in eight months. Let's do this. Let's get into parliament. That will be fun. I mean, we're working on internet time, right? Eight months away. Hey, we were used to changing the world in a weekend. The second thing the Piratenpartei did: there was a debate in 2010 between supporters of sticking to the core platform versus broadening the scope of the policy. It was called Kernis versus Vollis, I think, as in full program. And every movement before us has gone this way. You go from protesting an issue, to having a narrow platform that addresses what you're protesting, to having an ideology. The workers' movement went from protesting exploitation, to forming labor unions, to having solidarity as an ideology that radiated across society. The Greens went from protesting pollution, to wanting to regulate industries, to having sustainability as an ideology. And while we were busy in Sweden having an election campaign, the German Piratenpartei went ahead with this last step and started understanding where we really come from. That we are not really just protesting that our civil liberties are being sold to the highest corporate bidder, but that we are something deeper. We are something akin to a lifestyle party for the connected lifestyle, with all the implications that brings. So those are the two key things I observed that I'd use to explain why the German Piratenpartei is a little bit ahead of the curve and is enjoying tremendous successes right now. Ahead of the curve in terms of maturity.

No more questions? Okay, then I just got the signal that time's up. So again, thank you all for your attention. If you want to grab me for more questions, drinks or bribes or just endless praise, I'll be just outside. Thank you.
Rick Falkvinge, Swedish IT entrepreneur and founder of the Swedish Pirate Party, talks about how to apply open source collaboration in order to change policy in the world.
10.5446/21022 (DOI)
This work is, as you said, a collaboration between individuals at seven different companies and institutions. Our part at Triple Take was that we built the initial scale-model holograms and the display hardware for the proof of concept, and invented some of the key optical systems and mathematical analyses which will be used for a third-generation hologram that will actually be installed later this year. Our principal author, Stephen Hart from Holorad, is actually in New York as we speak, installing the second generation of the hologram. We're talking today about a full-aperture transmission hologram made from multiple slices of data. It shows the distribution within local space of several thousand planets discovered by NASA over the last year. A few of these might even be habitable. The first-generation version of the hologram was installed last November at the American Museum of Natural History in New York, so we have some press reviews on that, which is good for us all. And we had a lot of help on the project, which I will credit at the end.

First we're just going to review your basic classical transmission holography. We start with a laser. We split the beam in two. Expand one beam as an off-axis reference and shine it onto the recording medium. Meanwhile we expand the other beam and shine it on the object of our hologram. So each point of the object scatters light towards our recording medium, and the light from the different object points all takes different paths. And then, one page of math later, we have recorded our hologram onto the film, with the relative brightness and the absolute distance of every object point. Now we just take that same reference beam. We shine it onto our recorded interference pattern, which diffracts the light as though it came from the original object points, so that we see a reconstruction of the wavefront of the light scattered from our original object. Critically, because this is a classical hologram rather than a stereogram, when we look at the holographic image it triggers all of our psychological depth cues: accommodation of the lens of each eye, motion parallax across the retina of each eye, convergence of the eyes swiveling toward something to look at, and the stereoscopic disparity between the images on each retina. And because all of these depth cues are working together, a hologram can look stunningly realistic.

But what if the object is too large, or it won't stay still, or it is too far away or too dark, and by that we mean that it doesn't scatter our laser light? Or maybe it's just data that's representative of an idea or a concept that is not embodied in physically accessible form. And in our case, what we have as our object is a big chunk of interstellar space, so it definitely is not going to fit in our lab. In our case the answer is that we are going to record what we call a box gram. So in comparison to the normal transmission hologram that we showed you before, we start with the same laser beam, reference, and film. Sorry, sticky paper. But instead of the object, we are going to use a computer-driven LCD to laser-project the image data onto a diffusing screen, and from every point on the screen the light scatters to our film, forming a hologram of whatever you choose to project on the screen. Okay, it seemed to be a little out of focus here. Oh no, I'm not. Sorry. So now we have a hologram of a slice of data, with the screen being the object.
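For readers who want the "one page of math" made explicit, the standard textbook relations for recording and replaying a transmission hologram look roughly like this (a generic sketch, not the speaker's specific derivation):

```latex
% Recording: the film stores the interference of reference wave R and object wave O.
\[
  I(x,y) \;=\; |R + O|^{2} \;=\; |R|^{2} + |O|^{2} + R^{*}O + R\,O^{*}
\]
% Replay: illuminating the developed plate (transmittance t proportional to I)
% with the same reference beam R reconstructs the object wavefront.
\[
  R\,t \;\propto\; R\,|R|^{2} \;+\; R\,|O|^{2} \;+\; |R|^{2}\,O \;+\; R^{2}\,O^{*}
\]
% The third term is the original object wave O (up to a constant), which is why the
% replayed image carries the full depth cues; the last term is the conjugate image.
```

The same recording relation holds when the "object" is the diffusing screen carrying a projected data slice; multiple slices simply superpose their gratings on the one plate.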
And this screen is at a real, physically measurable distance from the film. But we can move that screen. So the screen is moved back a little, as I already showed you, moving it back a few times there. It projects different image data, and then we record the second hologram at the slightly greater distance. Then we move it forward again for the third hologram, and again and again. And in fact we do this tens or hundreds of times, each hologram being recorded across the entire face of the film. Now, when we replay the film, we see the light from every data point in every slice within the volume through which our screen was swept. And this is what it looks like in reality on the table. You can see there's your reference beam coming in here, and the screen here is on a slide so that we can move it and project each of the individual data slices towards the film, which is hiding back there.

Our data usually looks something like this, what a lot of you are familiar with, Voxel, and one of the things that made this very famous was medical imaging: we took the CAT scan data and we saved a lot of lives, and we were very big in the media when a hologram saved the lives of conjoined twins, because the hologram showed a very delicate vein just where the surgeons were about to cut, so they changed what they were planning on doing and the twins survived. That was a very big moment for holography. When we're done with the hologram, it gets viewed on a dispersion-compensated white-light display, which was originally invented by Kaveh Bazargan. We're still using that today.

So last year an astronomer at the American Museum of Natural History in New York City was looking for a way to show a very different kind of data. NASA had launched a satellite called Kepler to look for habitable planets in our region of the galaxy. Now, Kepler stares endlessly at one small piece of the sky in the direction of Cygnus, and it sees in that area about 145,000 stars, most of which never do anything particularly interesting. But once in a while it sees a very slight dimming when a planet passes in front of a star. This is just like last month's transit of Venus that you all were probably watching, where another planet passes between the star, in this case the sun, and us here on Earth, so we saw Venus go across the sun. If this happens regularly every few months, then it's easy to conclude that the planet orbits the star with that period. Kepler has already found about 2,000 candidate planets, including hundreds which are Earth-sized and about 50 which are in the so-called Goldilocks zone, where it's not so cold that water would freeze and not so hot that it would boil off; it's just right, assuming of course that life needs water. NASA has now estimated that about 5.4% of all stars have Earth-sized planets, and in fact there should be about 30,000 habitable planets within 3,000 light years of Earth, and about 2 billion in total in our galaxy, which is of course only one of many billions of galaxies. And probably within the next year, Kepler will find a nice solid Earth-sized habitable planet within a few thousand light years of ours. So we undertook to make a 2 meter square hologram showing where these planets are. It turns out that's not so easy. The Kepler project published a huge amount of data, but we wanted a simple hologram that just showed those star fields in XYZ space.
Now, Kepler's focal plane was tiled with 42 large CCDs with huge and varying gaps between them. So first we wrote a special program to remove the gaps. The distribution of stars in our hologram therefore isn't really scientifically accurate, because we had to kind of push them in to remove the gaps, but the museum wanted visitors to concentrate on the sheer density of stars without getting bogged down in technical details like that. And it also turns out that Kepler doesn't measure the distances of its stars. So we had to estimate that using the well-known visual distance modulus equation, assuming certain color temperatures. Fortunately our team includes an astronomer, but even then all of this is still very approximate. So again, the hologram accurately illustrates the idea, the sense of all the planets and the stars in their space, without being entirely scientifically accurate. And the distance math isn't really all that complicated. The formula depends on the brightness, or magnitude, of each star, which Kepler does measure very accurately, because the transiting planet only blocks a small fraction of the star's light; it depends on the star's radius, which is derived very accurately from Kepler's timing software; and it depends also on the star's temperature, which to be honest is kind of a guess based on the color, which Kepler sort of was measuring.
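As a rough illustration of that estimate (an illustrative calculation under standard assumptions, not the team's actual pipeline): the star's luminosity follows from its radius and temperature via the Stefan-Boltzmann law, that gives an absolute magnitude, and the distance modulus applied to the apparent Kepler magnitude gives the distance. Bolometric corrections and extinction are ignored here, consistent with the speaker's caveat that the result is approximate.

```python
# Illustrative sketch of the distance estimate described above (not the team's code).
# Luminosity L = 4*pi*R^2*sigma*T^4, absolute magnitude relative to the Sun,
# then the distance modulus m - M = 5*log10(d / 10 pc).
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
R_SUN = 6.957e8         # solar radius, m
M_BOL_SUN = 4.74        # solar bolometric absolute magnitude

def distance_parsecs(apparent_mag, radius_solar, temperature_k):
    radius_m = radius_solar * R_SUN
    luminosity = 4 * math.pi * radius_m**2 * SIGMA * temperature_k**4
    abs_mag = M_BOL_SUN - 2.5 * math.log10(luminosity / L_SUN)
    return 10 ** ((apparent_mag - abs_mag + 5) / 5)

# A Sun-like star at Kepler magnitude 14 comes out at roughly 700 parsecs,
# i.e. a couple of thousand light years, consistent with the scales in the talk.
print(distance_parsecs(apparent_mag=14.0, radius_solar=1.0, temperature_k=5772))
```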
So, the details of the hologram: it's over 40 square feet, which we think may make it the largest currently installed hologram, but it's actually made up of 38 tiles, or 40, 42 if you count the blank corners from Kepler's CCDs, where they didn't have all the data on the corners. The stars themselves extend through a sweep of 26 inches, or to put it another way, about 16,000 light years. So in that sense we really think we have the world's biggest hologram. The distance axis is about 25 light years per millimeter, so when you sort of steal towards the hologram, you're moving at a scale of about 25,000 times the speed of light. That's warp 9.9 when you do your little holography dance. And the front of the star field is actually projected about 42 inches towards the audience, so the museum visitors can actually step into the hologram and have that experience of being among the stars.

Having made the hologram, we also now need to provide for its replay. So we used a gutted laser TV for the replay light source, because as you can see by the power there, it provides quite a big bang for the buck. The green light is actually a custom Mitsubishi-built frequency-doubled diode-pumped YAG, but it didn't have a lot of coherence length, so it wasn't what we actually used to record the holograms; for that we used a Verdi 5G from Coherent, which is a much more reliable and beautiful scientific laser. The red is a modified DVD burner diode and the blue is a modified Blu-ray burner diode, both of which Mitsubishi are experts at making. So this was a fabulous display unit, all compact. The additional thing is that we managed to retain pretty much all of the TV's input and image-forming parts, so that we could generate and select the color of multiple beams in our display just by feeding a pick-off mirror with a DVD of pre-generated color fields. So there are no moving parts. We just had the DVD of all the colors, and when it went through the TV engine it automatically gave us the colors that reference the hologram.

This is kind of a funky slide there, with many generations of our equipment that we put together for it, and that in itself was a fun project. But the first generation, which was installed last November, was only about half as deep as our final one is going to be. It only had half as many stars, it left the CCD gaps still in place, and it is a single-channel green hologram with a static image of the star field; that's what's up there until this week. The second generation is to be in place by Independence Day, and that's the one that I described two minutes ago. In this generation, in addition to the star field, it can crossfade with little stories about half a dozen of the most interesting planets that were found there, so that there's a little storytelling going on in an interactive display. The following generation, which will come later this year, should have up to three channels for full-color capability, and a fourth so that we can highlight extra content.

The initial press reviews were actually pretty good. In practice, the museum's exhibits are evaluated almost entirely on the New York Times review, and they kind of liked the Star Trek reference. The Newark Star-Ledger must have had a rubber ruler or something, because they overestimated the size of our actual display, but that's okay. We rather liked the discussion from the hippies at the Huffington Post: trippy 3D. Finally, I'd like to thank everyone at the American Museum of Natural History and the following organizations, all of which contributed some technical and financial support. I'd be happy to take some questions, in as much as I can answer well the parts I was involved in, and I might have to defer to others for the other bits.

How long will it be on display, and where can we go see it again? It's the American Museum of Natural History in New York, and actually Steve Hart said that if you contact him he could try to get you a little discount on the tickets to get in there. It's only one exhibit at the end of an otherwise regular admission to the museum. It's supposed to be on display for at least a year, and then after that we're hoping it will even travel, but that's still undetermined. Joy, thank you for your nice discussion of Triple Take. Excuse me, I'm losing my voice by the end of the video. Early on, when you showed that you were recording the star field, you had multiple exposures and you moved the frosted sheet away; it sounds like you had quite a few exposures on that film. Could you give us any details on how you were able to do so many exposures on one piece of film? They are all done on one piece of film, with several exposures; in this case, as I said, the reduced one had fewer slices on it and then we're going to fill in more. But the Voxel technology that's been used for the CAT scan data has done up to 400 slices, and that's one of the things that Steve has really worked out in a lot of detail over the years: how to get that many recordings into one piece of film. Okay, thanks. I have another question.
Holorad has produced a very large (1.95×1.95m) transmission hologram for the American Museum of Natural History (AMNH) in New York City, illustrating the distribution of planets discovered by NASA’s Kepler mission. Since its launch in 2009, the Kepler satellite has detected 2,326 candidate planets, including the first “Tatooine” systems with planets orbiting double stars, and the first rocky planets within the “Goldilocks” zone where liquid water can exist. It is now estimated that at least 5.4% of all stars host Earth-size planets. The Museum asked Holorad to produce an immersive glasses-free holographic experience to illustrate Kepler’s findings as the finale of its special exhibition Beyond Planet Earth — The Future of Space Exploration. This hologram displays a real image with visual accommodation, so museum visitors can reach in to “touch” each star, and is full-parallax, so the starfield can be viewed by school groups including adults and children. We use proprietary techniques to produce holograms from sequential exposures of multi-slice data; this capability was originally developed for surgical planning using hundreds of CT and MR slices, and has now been extended to produce holograms from arbitrary three-dimensional data for advertising, entertainment, and education. For the Kepler data we developed software to map sky coordinates into X/Y, with the Z-axis mapped to the estimated distances of each star. For replay in the Museum we use an enclosed folded optical path, with the light-engine from a laser-television. The hologram is assembled from multiple abutting “tiles” laminated on to a large acrylic sheet, sandwiched with light control film for eye-safety and to conceal the illumination optics.
10.5446/21023 (DOI)
As a matter of fact, I don't have much to say, because Hans has already covered my subject this whole time; but maybe that gives me the opportunity to go through my slides a little bit faster. The presentation will actually be given jointly by Andreas and myself, but since Andreas cannot control when he starts talking, I will do most of the talking anyhow. Now let me whet your appetite with a little bit of Greek, a little bit of Mediterranean diet, because clearly you need some food for thought after yesterday's discussions. Of course, if you ask a Greek what's in a name, the answer will be: everything, as you know from that stereotype in the film. And it is not only that: all of you have been tasting this so-called Greek yogurt over here, which incidentally is not Greek at all; it is produced here in the United States by a very ingenious Turkish businessman who decided to call it Greek yogurt. So everything really is in the name. Coming back to yesterday's discussion about technology and technique: I was explaining to Sandra that for Greeks the word techne is the word for art, so that problem was solved for us a few thousand years ago. We do not approach holography in the standard way, as you will see from our presentation; we want to move science, in this case holographic science, into the fine arts, and that is why the subject of our talk today is applications in the documentation of Greek cultural heritage. Most of it will be presented by myself, as I explained, but also by the man whose creations you witnessed yesterday at the museum. Andreas is a physicist and a holographer; occasionally he thinks he is an artist, but definitely everything happens with his hands and his ingenuity. He is also a software man; he is behind most of our developments in Greece. I will not bore you with the background. The only thing I want to say is that when Kavek, for instance, asked me the other day, hey guys, you seem to be quite active, how come I haven't heard of you, I answered: you haven't heard from us or of us, first because you are not on the commercial side of the business, and our commercial applications run under a different name, and second because we had nothing interesting to say. Now that we think we have something to say, we are here. We date back to 1988; you can see some of our past activities, but I will focus on some of our European programs, and on the fact that Andreas had the honor to put his signature next to the late Yuri Denisyuk, the late Pierre Boone, and of course Vladimir Markov in a program back in 1993-94. Here are some of those people; I don't think you can recognize us, but that's me up there, that's Andreas here, and the third person is not present with us. And this is when, back in 1992, we were involved in the first digital holograms with AT PCs, at the time. That is old news. Now to modern news: if in 2009 you had asked us whether we would put our money into holography in Greece, I would have said maybe. The crisis had already started, but still, Greece, or Hellas as it is officially called, the land of light, still has a primary surplus of nothing else but art and culture. So the answer was maybe yes, and this is why we started our program called Holo Cultura. Interestingly enough, Greek culture has two components.
One component is ancient Greece, of course, which is a history in images; this is from a poster at the Cycladic Museum, because ancient Greece is depicted in pictures, it is a pictorial society. The same is true of the second component of our culture, Orthodox Christianity, with its icons: eikon is the Greek word for image. So we could not do without this, and that is why we set out in 2009 with this brief. If we want to go out into the museums we need a little bit of everything: we need to develop or apply digital holography, we need analog holography, and of course we need proper, corrective illumination for both. Three slides about the digital efforts. We decided to set up our own studio with travelling cameras, everything mobile so that we could take it out. We built our own system with a linear travelling camera rail, three times 1.5 metres, because sometimes you need 1.5 metres of width and sometimes you need 4.5, and all of it must be mobile. We developed the other setups for different geometries, as you will see, and a workstation with software written by Andreas. So now we have this studio; it is mobile, but we still need a place to work, and we have that in the city centre of Athens. The target is multi-perspective capture for stereoscopic uses, 3D modelling, lenticulars, 3D projection, and of course holographic printing with Geola or whoever else wants to print our holograms. Here you can see some of our setups: this is the rail, this is the radial one, and our pipeline in action; all geometries are possible. Now let us come to the crux of our presentation, and that is analog holograms. I do not need to say much about this, because you know what has happened over the last two or three decades. These slides, courtesy of Hans, list the prerequisites, and that is where we started. Hans proved the principle, not only that full-colour recording on these emulsions works, but also that a museum of virtual artifacts, a mobile museum of virtual artifacts, is feasible. That was point number one. Obviously you have seen this slide before; not only did he prove that his Ultimate holographic plates work, he also proved the principle of a portable full-colour laser camera. And of course Colour Holographic, of whom you have heard before, proved that this can be commercialised. So that was our brief. We decided to go into the garage, a basement somewhere near Athens, and there, together with our friends, we decided to build the mobile camera; we call it Z3 RGB. The original concept is that museums will never, or hardly ever, give their artifacts out of their collections, so we need to take the camera into the museum hall, and there the conditions are adverse. If you visit our garage lab you will find really adverse conditions, especially if you want to visit the toilet. We focused on Denisyuk holograms because we think this is the easiest way to make holograms in situ. Now, we want to make holograms, but what do we compare them with? Do we compare them with clowns and butterflies, beautiful holograms? Our culture and our museums do not necessarily involve those. So we commissioned a few images from Yves Gentet, things like this, which are closer to our culture.
We also commissioned some images from Colour Holographic, which are likewise closer to our culture. Colour is very important; for a clown or for butterflies it may not matter so much, but when you come to these kinds of items you really need to be able to reproduce white and black. So now we know what can be achieved. This is a hologram by Yves Gentet: this is the object, this is the hologram. And this is a hologram we did with Colour Holographic: the object under diffuse light, the object under white laser light, and a photograph of the hologram. So now we know where we have to go, and then Andreas decided how to specify our camera. Our main idea was to select wavelengths that would match LEDs for later illumination of the hologram. So we got a blue laser at 457 nm, a green one at 532 nm, and a red one at 638 nm; the first two are DPSS lasers, the red one is a diode laser. We also made a small investigation of the wavelengths available from LEDs, because we wanted from the beginning to match the lasers to the LEDs. These are the spectra of our lasers, and these are preliminary results from about a year ago: this is, I think, with the Russian material, that is also the Russian material, and this is with Yves Gentet's material, which we wanted to test. These preliminary results moved us on to the difficult part. This is a test hologram by Yves Gentet; I should say a test hologram, because it was not released to us, because that white is not white. We had to do a little better than that, since reproducing whitish objects is a prerequisite for museums, and that is what you will see here. Unfortunately the angles really matter, and my robotic arm does not work correctly, but I will try to illuminate this; you can see this and more at the back at the end of this presentation. So we know we can get to white, and if we can get to white with these results, then we need to prove that wood looks like wood, silver looks like silver, and white is white, even though the Rolex is not necessarily a Rolex in itself. And then you see icons; you will see more icons, and I will explain why we do icons. These are results from back in October. We wanted to present something at HoloExpo 2011 in Minsk, so we originated these sample holograms and Andreas presented them there. That is a very interesting story, because it is a project we are entering together with other Eastern European Orthodox Church partners. As you can see in this church in Minsk, this is a hologram, and to the best of my knowledge it is the first time a hologram has been used as a relic, as a pious object: it is not just taken around for exhibitions, people go and pray to it. Then we move on to November, a few more icons because of that project, and then we decided to submit our application for the ISDH 2012 exhibition. We found this object and originated what you saw yesterday; this is the test hologram, which we also presented at HoloPack in Las Vegas last year. Now to the camera. This is how it could have looked, but we do not think a one-of-a-kind mobile camera should look like that, so we involved an industrial designer, who in fact was awarded for designing the Olympic torch in Athens, and that is how the camera is going to look. And we moved on to the production of it.
Here you can see the model; it is a handmade wooden model of the camera, about one by one metre, well, not exactly one by one, there is a typo there, it is a little less. And this is how the camera looks as of the 22nd of June. What goes inside is this. We have a schematic of the camera here: there is a breadboard on which we have mounted the three lasers, as usual, and we simply combine the beams. What is novel about this setup is that we use a lot of electronics to control the lasers, because, as you may know, some of these lasers become unstable; we had a problem especially with the red one, which is just a diode laser. So we had to build special electronics and feedback mechanisms to keep it stable. We also fitted a Fabry-Perot scanning interferometer so that we can always monitor the beams, to see whether they are clean and whether we have mode hops. With all this paraphernalia and electronics we now have a very stable camera, tested under really adverse conditions, humidity variations, temperature variations. We can make holograms; no mode hops, everything is under control. We developed control software: this is the monitoring of the beams, and we have real-time temperature and humidity readings. We always record the environmental data and the circumstances under which we write a hologram, humidity, temperature inside the camera, temperature outside the camera, so we have a full record of what is happening. This is the mobile lab, and this is the usual setup; we usually use a mirror so as to adjust the angles easily. We also have a mobile, self-contained darkroom. The camera itself is adjustable; the legs allow full tilt and yaw and things like that. This is the camera. We could not bring it here because, first, it is not finished yet and, second, it would be too heavy, so instead we made a visual reproduction of it in a hologram; there is a Geola-printed hologram at the back for you to see at the end, showing how the setup is going to work. Then we got into the nitty-gritty of the real business. We have the object, but we noticed that under different lighting conditions it naturally replays differently, simply because this material is very reflective at different angles, let alone at different wavelengths. So we decided to start measuring. We got a new spectroradiometer with which we can make spectroradiometric measurements. This is a common spot lamp and this is one of our RGB LED systems, so you can see the difference between the spectra emitted by the halogen and by the RGB LEDs. This is the spectrum of a common halogen lamp, and this is the CIE diagram with the white point here. This is what we get from a hologram under a halogen spotlight: under this light, this is the replay of the hologram, and you can see that we have some spurious components, which means more noise in the hologram. Now, this is the spectrum of our RGB system and this is the spectrum we get from the hologram under it, which is much cleaner. And this is a real hologram under halogen illumination, where you can see the spurious components, and this is the same hologram under our RGB LEDs, where you can see how much cleaner the replayed spectrum is.
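As a side note, the comparison being described here (the replay spectrum under a halogen spot versus under narrow-band RGB LEDs) reduces to computing chromaticity coordinates from the measured spectra. A minimal sketch follows; it assumes the CIE 1931 colour-matching functions are supplied as a table sampled at the same wavelengths as the measurement, and it is not the speakers' own analysis code.

```python
import numpy as np

def xy_chromaticity(wavelengths_nm, spectrum, cmf):
    """CIE 1931 (x, y) chromaticity of a measured spectral power distribution.

    cmf is an (N, 3) array of the colour-matching functions x-bar, y-bar,
    z-bar sampled at the same wavelengths as `spectrum`; the tables are
    assumed to come from a standard source and are not hard-coded here.
    """
    dl = np.gradient(wavelengths_nm)        # wavelength step(s), nm
    X = np.sum(spectrum * cmf[:, 0] * dl)
    Y = np.sum(spectrum * cmf[:, 1] * dl)
    Z = np.sum(spectrum * cmf[:, 2] * dl)
    total = X + Y + Z
    return X / total, Y / total

# Usage idea: plug in the two measured replay spectra and compare the two
# (x, y) points and their distance from the intended white point.
# x_hal, y_hal = xy_chromaticity(wl, replay_under_halogen, cmf_1931)
# x_led, y_led = xy_chromaticity(wl, replay_under_rgb_led, cmf_1931)
```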
You can also see here that we photographed the same object from various angles to check for dispersion blur, and the result is the same; the RGB LED gives the same result on the icon as well. And that is when we were invited to come and exhibit here, so we had to do a little better than that. We decided to present it in a box, so that we can show the original artifact and the hologram next to each other, or back to back: this is the object side, this is the front hologram side. Now, how do we illuminate it? This is our HoloPhos. I have to hurry up a bit, but this is an early announcement at this conference; we have a rapid communication for a scientific journal about it. It is an intelligent illuminator with Wi-Fi communication and the DMX protocol, with IP addresses. Traditionally you have the profile of three angularly displaced beams, and as you can see this footprint is not optimal; Andreas, I have to hurry up, you can read it in the proceedings. So we moved to the first generation, an RGB white source of the kind Hans has spoken about; this is what we get from it, good, but not as good as we would have liked. So we moved to the second incarnation, using a trichroic prism with the LEDs, the method used in projectors and in white-light engines; again better, but very expensive. So we decided to move to a slightly more commercial idea, and that is the HoloPhos, using trichroic filters. That is the layout; you can see it at the back and in the exhibition. Now we get much better results; this is almost correct, this is the way we want it, enough to illuminate a 30 by 40 cm hologram, and the mixing is tunable. Now we think we have an optical virtual artifact; you can confirm it or not by visiting the museum. These are the conclusions, very quickly, because I want to get to the epilogue. We think we have succeeded in making a very reliable mobile camera to bring into the museums, and we believe that with very careful spectroradiometric measurements of the emulsion and of the spectra coming from the object to be holographed, we can make very good quality, true-colour RGB holograms, and with only three laser lines. That is what you want to say? That is what I say: we believe that even with three lines you can get very good results, although it appears from investigations like Hans's that we may need more, and that is what we are going to investigate further soon. We believe, or rather it is true, that the HoloPhos provides a quasi-point, narrow-bandwidth illumination for the holograms, but we think we have to enhance its power with much more powerful LEDs, which is our next step. We also need a little more quantitative investigation and research with the Ultimate 08 or other emulsions. And as an epilogue: ours is a different approach to holography for museums. We do not think holography for museums is just about making nice visual artifacts; we think it should come as a holistic approach, because museums are used to that approach. We want holography to be one of the main media, if not the main medium, for recording, for what we call hyper-documentation of cultural heritage artworks.
And this is where it gets interesting, because we partner with the very well known Institute of Electronic Structure and Laser down in Crete. You may remember this photograph; maybe some of you will see it in three years' time, if the consensus is that you are all invited down there. These are holograms from us that you will see here; a camera is part of, is integrated into, the equipment you will see in the next slides, for the holographic recording. These people have developed multi-spectral imaging systems and laser-induced breakdown spectroscopy, all dedicated to the conservation of art; holographic and digital speckle interferometry for structural diagnosis inside the material; and of course laser cleaning. That is an interesting project: if you visit the Guggenheim web page you will see this black painting by Reinhardt, which was taken down to Crete for simultaneous spectral imaging during laser cleaning. Black on black on black, and varnishes and overpaintings had to be cleaned; that is the only way it could have happened. Have a look on the internet at this project. And today, of course, these people are cleaning the Caryatids at the Acropolis, live, with the only combined laser machine in the world, one that combines UV and infrared lasers. Thank you very much. Do we have any short questions? We can take more questions later today if there is time in between. In any case I want to thank both of you very much for this interesting presentation. I also have a question for you: I think people here would like to know whether they will be able to buy your LED lights in the future; maybe you want to mention that. Sorry, I have to come up here, as I have no microphone on me. Yes, we have produced about a couple of dozen of these HoloPhos lights, primarily for our own purposes. Once we are comfortable that everything is working, and we are already involving the same industrial designer, it will be made available. So the answer is yes, but in the medium term.
In this paper we will present the Z-Lab transportable color holography system, the HoloPhos illuminator and results of actual in situ recording of color Denisyuk holograms of artifacts on panchromatic silver halide emulsions. Z-lab and HoloPhos were developed to meet identified prerequisites of holographic recording of artifacts: a) in situ recording b) a high degree of detail and color reproduction c) a low degree of image distortions. The Z-Lab consists of the Z-3 camera, its accessories and a mobile darkroom. The Z-3 camera is a computer controlled opto mechanical device capable of exposing selected, commercially available, panchromatic silver halide emulsions to the combined red, green and blue laser beams at sufficient energy levels. Z-3 accessories include a vibration isolation platform and custom plate holders in the object’s space. The mobile darkroom is autonomous and environmentally friendly with closed circuits for chemical waste management. HoloPhos is an RGB LED based lighting device for the display of color holograms. The device is capable of digitally controlled intensity mixing and provides a beam of uniform color cross section. The small footprint and emission characteristics of the device LEDs result in a quasi point, narrow bandwidth, source at selected wavelengths. A case study in recording and displaying Greek cultural heritage artifacts with the aforementioned systems will also be presented.
10.5446/21024 (DOI)
I would like to talk about aberrations in holography. I will start with aberrations of lenses to give an introduction to aberrations in general, then go on to a holographic model of aberrations, derive some equations that show what the aberrations are, draw some implications from those equations, and finish with an experimental demonstration of the aberrations. In a perfect lens system you have an object plane here and an image plane here, and every ray from a point on the object goes to its corresponding point in the image. In general this is not true: the paraxial rays, very close to the axis, will do this, while the extremal, marginal rays will generally be aberrated. So an aberration in an imaging system is defined as any degradation of an image due to imperfections that are built into the imaging system itself. In standard optics these occur for non-axial object points, points slightly above the axis of the lens, and for the extreme rays coming from them. Note that if you have a lens, an axis, and a point on the axis, and the lens is tilted, the point has effectively gone off axis, so a tilted lens gives the same set of aberrations as a non-axial object. The five standard aberrations are known as the Seidel aberrations: spherical aberration, coma, astigmatism, Petzval or field curvature, and distortion. Spherical aberration occurs when the paraxial rays focus at what is called the Gaussian focal point while the marginal rays focus at a different point; they may focus closer or further away. The transverse distance between the Gaussian focus and the extremal rays is the lateral spherical aberration, and the distance along the axis between the marginal focus and the Gaussian focus is the longitudinal spherical aberration. Coma occurs because different zones of the lens give rise to circles: an outer zone of the lens takes an off-axis object point and creates a circle slightly higher than the Gaussian focus, and as you move nearer the center of the lens the corresponding circles become smaller and smaller, until at the center the circle has radius zero and becomes a focal point at the Gaussian focus. So for an off-axis point the marginal rays produce an image blurred into the shape of a comet, hence coma. With astigmatism, as you go further off axis you have two different foci. The light travelling in the plane that contains the object point and the optic axis, the tangential plane, creates a line image there, and the plane across the lens creates a sagittal focus, so a point gives rise to two line images at right angles to each other, at different positions along the principal ray. In field curvature, a planar object gives rise to a curved image. This is because there is an imaginary surface inside the lens, nominally normal to the axis, at which the refraction is considered to occur; as the lens gets larger this is no longer a plane but a slight curve, and so the image itself becomes curved. In the paraxial region a small straight line images to a small straight line, but as the object becomes larger the curvature becomes larger.
Distortion arises because the magnification along the diagonals differs from the magnification along and perpendicular to the axes, so you get pincushion distortion here and barrel distortion here, where the magnification is less. In the wave picture, the way we describe these is that if you have a Gaussian focus here, you have spherical wavefronts converging to that point, and that would be an ideal focus. A real wavefront, however, only follows the ideal spherical wavefront in the paraxial region; the further you go from the axis, the greater the difference, and that difference is denoted by W, which is a measure of the aberration. W can then be given by the Seidel aberration equation, in which each term represents a particular aberration: the first three terms are spherical aberration, coma and astigmatism, which blur the image, and the next two, field curvature and distortion, distort planar objects. For the holographic model, in order to show a correlation between holography and conventional lenses, it is necessary to determine an equivalent focal length of a hologram. This can be done if the reconstructed hologram gives rise to a spherical wavefront and images to a single point. In holography we have a point object which, when reconstructed, gives rise to a point image: the point object gives rise to spherical wavefronts, which the hologram converts into spherical image wavefronts. Once again, the difference between the actual wavefront produced by the hologram and the ideal wavefront it should produce is the aberration. The program to carry out this task was first initiated by Meier and then followed through by Champagne. We set up a coordinate system in which the hologram lies in the xy plane at z = 0. We have three points: O for the object point, R for the reference point which makes the hologram, and C, the point from which we reconstruct, which is not the same as R. These three points have coordinates (x_o, y_o, z_o), (x_r, y_r, z_r) and (x_c, y_c, z_c). We also assume that the reconstruction wavelength is not the same as the recording wavelength, and we introduce the number mu to reflect this: mu is the ratio of the reconstruction wavelength to the recording wavelength. We then derive the wavefront on reconstruction and determine the difference between the reconstructed wavefront and the ideal wavefront; that difference gives the Seidel aberrations. The derivation starts by calculating the phase of the actual wavefront coming off the hologram, which is a combination of the reconstruction wave, the reference wave and the original object wave. The idea is that a spherical wave emanating from a point creates a specific phase at a general point G with coordinates (x, y) on the plate, and we substitute that general spherical phase into each of the three terms to build up the actual wave that comes off the hologram.
When we carry out this analysis, we determine where the image point of the actual wave coming off the hologram lies, and we get these terms: the image coordinate X_i, a Y_i of the same form with y substituted for x, and Z_i. These are the actual image coordinates produced by the hologram, which may or may not coincide with the original points. As a check: if the reconstruction point C is the same as the reference point R, and there is no change in wavelength, so mu equals one, then X_i should equal x_o, the image point should coincide with the original object point, and indeed it does. So there is a natural check that you recover the object point when the reconstruction is the same as the reference. To create the lens equivalence, you take the object point here and the image point here and define an equivalent focal length f, which from the previous equations comes out to this, and you compare it with the standard lens equation. Champagne uses the same formulation and gets this, and you can see that Champagne's formulation is the same as Meier's. Champagne was basically concerned with non-paraxial imaging, but he obtains a very similar expression for the distance of the image point from the hologram. Next, resolution. The resolution of a hologram is not the same as the resolution of the holographic image; the holographic image is a diffracted light field, and to find its resolution properly, in terms of the size of the source, you need the van Cittert-Zernike theorem to determine the spatial coherence of the source with respect to the plate. But we can do a simple back-of-the-envelope calculation with what we already have, because the rate of change of the image point position with respect to the source size gives us the longitudinal and lateral resolution; it is a quick and dirty way of estimating resolution. Having these equations, we can expand to first order and derive magnifications. There are three kinds: the transverse magnification, the height above the axis; the longitudinal magnification, the extension along the axis; and the angular magnification, the angle subtended by the object as it moves closer or further away. For the transverse magnification, three interesting facts emerge. First, if there is no change in wavelength between recording and reconstruction, mu equals one, and the reconstruction distance equals the reference distance, z_c equals z_r, then the magnification is unity; reconstructing with the same wavelength from the reference position gives no magnification, which is a quick check on the magnification equations. Second, if the reconstruction beam is collimated, z_c equals infinity, the only term containing mu disappears, so there is no magnification of the image by a change of wavelength when the reconstruction beam is collimated. A change in magnification with wavelength occurs only when the reconstruction beam is diverging or converging; a collimated reconstruction beam results in unit magnification in that respect.
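These statements can be checked with a small numeric sketch of the paraxial image-distance and magnification relations, written here in the commonly quoted Meier/Champagne form for the primary image; sign conventions differ between texts, so treat this as an illustration rather than the exact expressions on the slides.

```python
def hologram_image(z_o, z_r, z_c, mu):
    """Paraxial image distance and magnifications for the primary image.

    z_o, z_r, z_c : object, reference and reconstruction source distances
    mu            : reconstruction wavelength / recording wavelength
    Collimated beams are passed as float('inf').

    Uses 1/z_i = 1/z_c + mu*(1/z_o - 1/z_r); a sketch of the commonly
    quoted form, not the speaker's exact derivation.
    """
    inv = lambda z: 0.0 if z == float('inf') else 1.0 / z
    inv_zi = inv(z_c) + mu * (inv(z_o) - inv(z_r))
    z_i = float('inf') if inv_zi == 0.0 else 1.0 / inv_zi

    m_lat = mu * z_i / z_o          # transverse magnification
    m_long = m_lat ** 2 / mu        # longitudinal magnification
    m_ang = mu                      # angular magnification
    return z_i, m_lat, m_long, m_ang

# Same wavelength, reconstruction from the reference position: unit magnification.
assert abs(hologram_image(0.33, 1.0, 1.0, 1.0)[1] - 1.0) < 1e-12
# Collimated reconstruction beam: transverse magnification independent of mu.
m1 = hologram_image(0.33, 1.0, float('inf'), 1.0)[1]
m2 = hologram_image(0.33, 1.0, float('inf'), 1.2)[1]
assert abs(m1 - m2) < 1e-12
```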
Third, there is a factor of 1 over mu difference between conventional magnification and holographic magnification, so the lens metaphor is only valid insofar as the reconstruction wavelength is the same as the original recording wavelength; this shows the limitation of the lens metaphor for the hologram. The hologram is equivalent to a lens only insofar as the wavelengths are equal. For longitudinal magnification: in conventional imaging the longitudinal magnification is the square of the lateral magnification, whereas in holography there is an additional 1 over mu factor, which means that even if the transverse magnification is unity, a change of wavelength will always change the size longitudinally, so the object will always extend further out. The angular magnification is independent of all the recording and reconstruction parameters and depends only on the wavelength ratio. Expanding the phase difference to third order gives the third-order Seidel aberrations for holography, again as the difference between the ideal wavefront and the real wavefront. Taking the wavefront and these parameters, the radius of the wavefront, the distance from the axis, and the azimuthal angle, we get a set of equations for the holographic aberrations: spherical aberration, coma, astigmatism, field curvature, and distortion. For spherical aberration: if both z_c and z_r are infinite, in other words if both the reference and reconstruction beams are collimated, spherical aberration becomes zero for mu equal to one. Thus, if the hologram is reconstructed with the same wavelength as the original reference and both reference and reconstruction beams are collimated, spherical aberration disappears. For coma: if both reference and reconstruction beams are collimated, the ratios x_c over z_c and x_r over z_r are replaced by the tangents of the beam angles, because collimated beams have no source point. Coma can be made to disappear if both reference and reconstruction beams are collimated and the object is directly in front of the medium, which makes tan theta equal to zero, and if the tangent of the reconstruction angle is scaled by the ratio of the wavelengths. Thus, if the C and R beams are collimated, coma disappears when the tangent of the reconstruction angle is scaled by the wavelength ratio; if the two wavelengths are the same and the reconstruction angle is the conjugate of the recording angle, with both beams collimated, coma disappears. In addition, if the wavelengths are the same, the condition on the object position vanishes, so if you do not change the wavelength the object can be anywhere you want it to be; if you do change the wavelength, the object must be directly in front of the plate. The same sort of conditions occur for astigmatism and field curvature. For the Petzval field curvature, however, if the reconstruction beam is not collimated, or if the wavelength ratio is not one, you will get field curvature, and a planar object will be seen to bulge outward in a spherical manner.
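The vanishing conditions discussed above (for instance, spherical aberration disappearing when both beams are collimated and mu equals one) can also be checked numerically. The sketch below uses the third-order spherical-aberration coefficient in the form usually attributed to Meier, S = 1/z_c^3 + mu*(1/z_o^3 - 1/z_r^3) - 1/z_i^3; normalisation and sign conventions vary between references, so this is a rough consistency check rather than the exact expression used by the speaker.

```python
def spherical_aberration_coeff(z_o, z_r, z_c, mu):
    """Third-order spherical-aberration coefficient (Meier-style form).

    S = 1/z_c**3 + mu*(1/z_o**3 - 1/z_r**3) - 1/z_i**3, with
    1/z_i = 1/z_c + mu*(1/z_o - 1/z_r).  Collimated beams: pass float('inf').
    Conventions vary between references; use only as a rough check.
    """
    inv = lambda z: 0.0 if z == float('inf') else 1.0 / z
    inv_zi = inv(z_c) + mu * (inv(z_o) - inv(z_r))
    return inv(z_c) ** 3 + mu * (inv(z_o) ** 3 - inv(z_r) ** 3) - inv_zi ** 3

inf = float('inf')
# Collimated reference and reconstruction, same wavelength: S vanishes.
print(spherical_aberration_coeff(z_o=0.33, z_r=inf, z_c=inf, mu=1.0))        # ~0.0
# Same geometry, but a wavelength change reintroduces spherical aberration.
print(spherical_aberration_coeff(z_o=0.33, z_r=inf, z_c=inf, mu=633 / 514))  # != 0
```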
Now the experimental results. We used this image, and we recorded the hologram at 514 nm with the object directly in front of the plate, at a distance of about 13 inches, with the reference beam coming in at 30 degrees. We then reconstructed the hologram under a variety of conditions: we kept the reconstruction beam collimated while varying the angle on either side of the recording angle, we varied the beam divergence, and we also varied the angle of the diverging beam, diverging it along the line of the original reference and changing the angle on either side. When we changed the direction of the collimated reconstruction beam, the image moved and you can see there is coma; the coma is greater along the direction of motion. When we diverged the beam, the object shrank and moved in closer, and you can see the longitudinal change in magnification there. When we changed the direction of the divergent beam, the object moved along an arc and also distorted in a circular manner, so that the center here is very difficult to focus and the focusing gets worse as you go out to the sides, because there is a curvature to the object even though you do not see it directly; there is a whole series of photographs we took which shows just that. We did the same thing reconstructing at 633 nm and got the same results, except that the image was bowed, and so it was impossible to bring the image into focus at any point when reconstructed at 633. So the best practice is always to reconstruct a hologram with a source that matches the construction geometry, including the wavelength, to avoid aberrations in the reconstruction. When aberrations do occur, understanding these effects will help in troubleshooting the system. Thank you. Thank you.
The Seidel aberrations are described as they apply to holography. Methods of recognising an aberrated holographic reconstruction are described, as well as recognition of the type of aberration. Experimental and theoretical strategies to minimise aberrations are discussed, including geometric considerations in the recording and reconstruction of holograms. Aberrations due to recording a hologram at one wavelength and reconstructing it at another are also examined.
10.5446/21026 (DOI)
Hello, Sveta, can you wait for me now? Before I present this paper, Crossing Light Through Colour Two, I want to give a little introduction about my concerns and recall some sentences, not very new to anyone, but I think it is good to rethink them, about what is essential to understanding the contemporary relationship between art, science and technology: technological innovation is only important in art if it implies new relationships, new ideas, new uses, leading to a new consciousness. Of course, those of us working with artistic purposes only use a technology when we need it, or feel we need it, to change something in our process of working; not the technology in itself, of course, but as a medium, as a tool to go further with our ideas and materialise them. It does not mean that traditional ways of making art have been exhausted. Today we live in a time of synchronisation, where we have a mix of universal languages and transdisciplinary multimedia offering new ways of appreciation, trying to bring different cultures together. The characteristics of images produced by electronic means are different from earlier images, because they call for the presence of the observer, their physical and psychological perception, to carry out the sensory experience. This, in my personal view and in my personal work and research, is fundamental, because my background is in fine arts, and when I finished my degree I had a grant from the Calouste Gulbenkian Foundation to do research, to have the opportunity to use holography in my artistic work. I was very attracted to white-light transmission holograms. When I saw the first hologram of that kind I spent a lot of time looking at it, feeling I was going into that kind of image. The feeling was so strong that I thought there was something special in having the colour of light in its purity. In the beginning I was not attracted so much by the spatiality of the 3D image as by the strength of the light colours, because in my painting I was using acrylics, big paintings with acrylic inks, and suddenly, when I saw that, it was what I needed to develop my artistic work. But pursuing the work, producing and learning how to make some kinds of holograms, I found that when I showed them, the way people looked at them was very strange, like that German study with the children, I do not remember the name: people standing in front of the holograms doing things with their bodies. And I found it necessary to explore that way of using the body to see the world around us. I conceived that kind of work as a pilgrimage, a walk in the light: looking from outside, looking with part of the body going into the piece, and walking inside the piece. This one and this one have holograms inside. Another piece is a telescope from a physics department; I changed the inside, painted it blue, put a blue light inside, and changed the lenses and their positions. Usually when we look through a telescope one end is open and you see the stars through the lens, but I put the lens at that end, so you look into your own eye, and when the top part is opened there is a hologram of an eye, and a little mechanism with music inside, like childhood memories, things like that.
In that piece I use the same hologram, the same image, but with a different position of the lens in the reconstruction, and the small differences mean you get a different image in each little sculptural piece: they are exactly the same shape, but because of those small changes they seem different. In that piece, before I finished it, I made a movie of myself going into the piece and coming out, and I projected it in red onto that part, so when people go into the piece it is as if someone is coming to see who is inside. In another I put a mirror, and when people are there, with the blue environment, the piece has sounds, sounds from John Cage, about birth, about sounds of the body, to create a kind of environment and a special way of looking. And in this one there is the hologram box at the top, and otherwise only a laser projection, because the aim of this work is to make people think that light can be the object of art in itself: we need light to see things, but light can itself be the object of art, in this case a line, a laser line. When I moved from that kind of work to digital holograms, I had the opportunity, with a grant from Portugal, to develop work at De Montfort University with Martin Richardson. We made a series of digital holograms, filmed with a digital camera and printed by Geola in Lithuania, measuring 16 by 50. The name of this work is Changing Thoughts; the viewer sees certain things and then they change a little in the final appreciation. I use myself performing in each of the pieces, because I want to see how I am inside the work; in the end I am also the viewer of the work, and I try to understand, in my kind of research, what that produces in my mind. For example, perhaps I prefer to do this rather than that; I can do that, but I prefer to do it from one side and then the other side, and not only from the right. I chose to shoot the movie in black and white, but because of the chemical processing we end up with some green colours and some magenta colours. I chose a kind of nineteenth-century look, because I wanted the contrast between today's technology and a look that is not of today. In this piece it is very important to see the problem when the hologram is not lit very well: the digital hologram needs very strong lighting, and without it the shapes of the arms do not look 3D, they look flatter; with a good light the arms look properly three-dimensional. After that work, my grant was renewed and I felt able to continue, now researching colour, digital holograms printed in full colour, creating another series, Crossing Light Through Colour One, and at the same time comparing cameras: the camera from Geola, a Canon prepared by Geola, and a Sony camera not prepared for Geola, which when running does not capture in the same way but at four points, so in the end we get different kinds of pictures from each. Because the frames are not so uniform, it is more difficult to work on the sequence in Final Cut than with the Canon, and that version looks more red.
We use the red light, of course, but the movies from the Sony are very red, and in my opinion I prefer the kind of picture from the Canon. The same performance was shot with the different cameras, the previous one with the Canon, this one with the Sony, and during the process the colours change. In the beginning this is the real colour you have in the studio, in that hologram, not this one but the first one: you have the blue, like the environment you have in the studio; with the Sony camera the colours change, the first movie is blue and the second one goes towards green. Because of that, for the next series of holograms we chose to use a white support, and to use the camera in a different position, with a mirror at 45 degrees, creating another kind of environment, another kind of space. Using the white support, I have this hologram digitally printed from the Sony footage; I shot many movies with the Sony camera and with a Panasonic camera, but I did not print those, because printing is very expensive, and their colours keep changing, they are not as consistent as with the Canon camera from Geola. The point is not only that you have small cameras you can use on the rail; the point is that the camera with the modifications made by Geola gives a very, very consistent sequence. Because this is a work in progress, I am continuing with this kind of space, creating deeper space, more shadows, with objects placed in space and not only in the frame, crossing on the floor, other kinds of shapes, and having static things and moving things at the same time, and I am working in this session on how to improve those pictures. As conclusions, from my point of view: the technology is very interesting, and I like very much building things in the studio, and it is very nice to make works with different cameras, comparing the different environments, colours and shapes I can get; but the difficulty, in the end, is the digital printing. It is so expensive, and you need to keep experimenting; if you printed every experiment you would have no money left to pursue the work, and you need to keep that in mind all the time while you are creating. The movements you make need to be very slow; the colours in the studio environment need to be very strong and clear, because digital prints tend to come out very dark; and when you put these holograms in exhibitions they need very strong lighting, because with too little light you do not get the real image. That is it; I do not remember anything more at the moment. If you have suggestions or questions, I can say something more. Thank you. Any questions? Could you come up? Yes: there seem to be a lot of technical problems to solve, like with the camera. Would you agree, hearing your whole project, that making art in holography has something to do with how much budget you have, and is not only related to your creation? What would you say is the conclusion? Can you say the second part again? I said, would you agree, I mean we saw that Chris Levine made some wonderful work, that holography is a matter of having the right budget rather than just the artistic content? It seems to involve a lot of solving of technical problems, like having the right camera. A lot of the registration issues are known.
You can register it if you make a 3D movie like Avatar; if you have the budget, you have no problem with registration. So yes, everything has to do with money; the technical problems are solved. So this is the major restriction, isn't it? Yes, of course: I can do this work because I have a grant. If I did not have a grant it would be impossible for me to work, because we do not sell the holograms. The market for holography among artists is very small; I am speaking now from the point of view of the artist. The exhibitions about holography, I am happy that a lot of people sell there, but I think there is a confusion. I do not know if there are people here who were in Belarus at the HoloExpo conference, and whether they heard the paper from Paul John; she said a very interesting and very important thing, which is that there is a confusion between display holography and artistic holography, between the kind of work artists make and the work made for the commercial market. Display holography belongs to the market of merchandising, of products for sale, things for Nestle, for example, or for the pharmaceutical industry. The market of art is different: there are collectors, and the art market works with collectors and is related to museums, and with museums it depends, of course, on the curator, and not only on the curator, but on whether there is an open mind for new things. Every new kind of art appears with different codes and different rules, and it is not always accepted by the social system at the time it appears; it takes time to be accepted. In some next generation, in twenty years, holography will perhaps be very easy to accept and very easy to sell; it is not very old. Holographic art dates from around 1968; photography perhaps took even more time to be accepted as a kind of art. Excuse me, but this is a great conversation to follow up through the week, okay? I do not mean to interrupt. Thank you. Yeah.
This paper is intended as a reflection on the end product of the process of preparing movies for digital art holograms, comparing the kind of space and movement in those images with the ones in the previous paper. The paper also explores questions about the act of seeing through these images, and engages with the surrounding debate of ideas on new experimental methodologies applied to holographic images.
10.5446/21036 (DOI)
Good afternoon, everyone. I am Kyoji Matsushima, from Kansai University, Japan, and of course this is Sumio Nakahara. Professor Nakahara has already presented the fabrication technique for our holograms; that is his part, and this is my part. My part is the calculation and design of this kind of high-definition CGH. Now, the introduction. Our background is that the resolution of high-definition CGHs is becoming comparable with that of classical holography. What is a high-definition CGH? This is our definition: the pixel pitch is less than one micron, which means the diffraction angle is more than 18 degrees; the hologram size is at least 5 by 5 square centimetres, which means the number of pixels is more than 2.5 billion; and we need full parallax for digital art. My objective is to answer these questions: how to calculate such CGHs in practical computation time, how to produce images close to, or beyond, analog holography, and how to surprise and entertain everybody with our CGHs. This is the outline; the presentation is intended as a review of our works, so I will not go into the details of our techniques. This is our first CGH, named The Venus, created in 2009; it is composed of four gigapixels. Only The Venus attended the last meeting, that is, she was hung up as a poster, but I was a no-show; I am sorry. The computation time was approximately two days at that time, but the current computation time is less than two hours, approximately 24 times faster. After that we created a lot of computer holograms: Aqua 2 was created for testing a new hidden-surface removal technique, and The Moon for testing a texture-mapping technique. Metal Venus 1 is a metallic version of the old Venus, and Metal Venus 2 is also a metallic version, but with specular smooth shading. This one is for testing the image-hologram geometry and reconstruction with white light. The CGH named Bear 2 is maybe the most famous of our holograms, because the paper about it is now ranked in the top ten downloads of Applied Optics from the OSA. This slide shows the concept and the source materials of our computer holography. Multiple source materials are used to build the 3D scene: polygon-mesh 3D objects, of course, and 2D picture objects, and we are now also working on these kinds of source materials: multi-viewpoint images installed into the 3D scene, and captured fields of real existing objects, which can also be installed into the 3D scene. These captured fields are integrated into the virtual 3D scene using field-based digital editing. This is a conceptual illustration of a typical 3D scene in our computer holography: this is the multi-viewpoint image, this is a captured field in front of the background, and this is a modelled 3D object and a photograph or illustration. To begin with, I would like to talk about our method for polygon-mesh 3D objects. Usually, almost all researchers in this field use point-based methods to calculate the object field from a virtual object. The point-based method is very powerful and flexible, but it has a big disadvantage: it is too time-consuming. A polygon-based method is usually faster than a point-based method in rendering such objects, because the number of polygons composing the object is much smaller than the number of point sources. This slide shows the principle of our polygon-based method: each polygon has its own surface function, and this surface function is a complex function.
This is an example of the surface function of polygon 1. The amplitude distribution gives the shape of the polygon, and this distribution plays the role of a diffuser. After numerical calculation we get the wave field from each polygon and combine these into a single object field. One advantage of our method is its similarity to conventional computer graphics, so we can easily bring techniques from computer graphics into our method. This is an example of texture mapping: a photograph is texture-mapped onto the polygon-mesh surface. Using these techniques, I created Brothers, which is now exhibited in the MIT Museum. The subject of Brothers is live faces. The shapes of the faces were measured by a 3D laser scanner; here we used a Konica Minolta VIVID 910. This is the view, and this is the laser scanner. The scanner measures the shape by projecting a laser beam, but the beam is not so harmful to human eyes; after all, they are my kids. The 3D laser scanner outputs a polygon mesh, and I texture-mapped the photograph, taken simultaneously, onto the polygon mesh. The full object field was calculated using a high-performance computer, and, as Professor Nakahara mentioned, the printing machine was used to fabricate the CGH. This is the result; this computer hologram is now exhibited in the MIT Museum, but we have another version exhibited over there, so please check it. Oh, sorry: this hologram is composed of 35 billion pixels; it is a very big hologram. The next topic is digitized holography, which is computer holography for real existing objects. In this technique the object field is captured using digital holography, the captured field is digitally edited and mixed with a virtual 3D scene, and the mixed field is optically reconstructed by a computer hologram. We call this technique digitized holography because it replaces the full process of classical holography with digital counterparts. But we have two big problems in realizing digitized holography: the first is expansion of the captured area, which means increasing the number of samples, and the second is reduction of the sampling interval. This slide shows a comparison between a high-definition CGH and an image sensor. The yellow mark shows the size of the image sensor: the captured area of a current image sensor is not sufficient for high-definition computer holograms, and the sampling interval of the captured field is not sufficient either. So we used two techniques for capturing a large field at high sampling density: the first is a lensless Fourier setup to reduce the sampling interval, and the second is synthetic-aperture digital holography to extend the captured area. In this technique the image sensor is mounted on a moving stage, and as the sensor moves, parts of the field are captured sequentially. This is the 3D scene of a CGH named Penguin. In this 3D scene the captured field of a real object is installed, arranged and duplicated, and a virtual object is also arranged in the scene. This is the object, and this is the image sensor; the sensor, as I mentioned, is mounted on the moving stage and captures the field part by part. This is the result of the synthetic aperture, and this is the optical reconstruction. This CGH is also over there; you can check it, not today, tomorrow. The next topic is field-based digital editing of the 3D scene. The 3D scene in our computer holography includes many components.
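The editing steps described next (silhouette masking for occlusion, and later the numerical lens) all reduce to propagating a sampled complex field between parallel planes and applying masks or phase factors in between. Here is a minimal sketch of plain angular-spectrum propagation for a uniformly sampled field; the band-limited variant published by this group, which suppresses aliasing over long propagation distances, is deliberately left out to keep the example short.

```python
import numpy as np

def propagate_asm(field, wavelength, pitch, distance):
    """Angular-spectrum propagation of a sampled complex field.

    field      : 2-D complex array sampled on a regular grid
    wavelength : same length unit as `pitch` and `distance`
    pitch      : sampling interval of the field
    distance   : propagation distance (negative for back-propagation)

    Plain (non-band-limited) angular spectrum; for long distances the
    band-limited variant should be used to avoid aliasing.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)

    # Keep only propagating components; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)

    return np.fft.ifft2(np.fft.fft2(field) * H)
```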
The mutual occlusion among the components must be reproduced in the optical reconstruction. We use a silhouette method, mentioned in an earlier talk, to achieve this. This here shows the silhouette method, which emulates light shielding by opaque objects. Please imagine there is an opaque object: background light coming from behind the object is shielded by it. A real opaque object shields the light behind it; in computer holography, to prevent the object from appearing as a see-through phantom image, the field of the light behind it is masked by the object's silhouette. In fact, we calculate the background field at the plane of the object and then mask it with the object's silhouette.

This is an example of the silhouette-masking technique. In this case it is a two-step calculation: the background field is numerically propagated onto the object plane and is masked by the object's silhouette — in this case, the object is a statue. Then the object field itself is superimposed, and this field is again propagated to the hologram plane. In digitized holography we have no information about the object's shape, so a numerical reconstruction of the captured field is used to produce the silhouette mask. This is the captured field, and this is its numerical reconstruction; we can get the silhouette mask by binarizing and inverting this reconstructed image.

However, we have a big problem with self-occluded objects: masking by the object's outer silhouette does not work well for them. This is an example. Here, the background field is blocked by this silhouette, but the light from this surface passes through the front part of the object, so we see the object as partially see-through. To properly shield the light of a self-occluded object, the field must be masked by the silhouette of every polygon. This is an example: in this case, the light from this surface is blocked by the polygon's silhouette mask, so it is not see-through. However, masking by every polygon silhouette is very time-consuming, so we developed a new method to speed up this masking, called the switchback method. In this technique, the field of the object is numerically propagated back and forth and masked by the polygon silhouettes. Unfortunately, many formulas are necessary to explain the mechanism, and today I have no time to explain the details — sorry. But I can show the validity of the technique.

The CGH named Loading Ring is over there. The object of Loading Ring follows the well-known parametric curve called the Lissajous curve. This curve is usually a two-dimensional curve, but I extended it to a three-dimensional curve by adding this additional term. This is a visualization of the three-dimensional curve by computer graphics; you can see there is a lot of self-occlusion and mutual occlusion, so we need the switchback method. And this is the result. Loading Ring is also exhibited in this room over there — please check it.

And these are new topics we are now working on. The first is resizing objects in digitized holography. The fields captured by digitized holography include phase information, so we cannot easily resize the object, unlike a digital image. So we use a lens to resize the object — but this is not a real lens; it is a numerical lens. In fact, the object field at the object plane is propagated to the lens plane and multiplied by a lens function like this, and then this field is again propagated to the image plane.
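That resizing recipe — propagate the captured field to a lens plane, multiply by a numerical thin-lens phase factor, propagate on to the image plane — can be sketched as follows. This is only an illustrative outline, reusing the angular-spectrum routine from the earlier sketch; the distances and focal length are placeholders, not the values used for the actual holograms.

```python
import numpy as np
# assumes angular_spectrum_propagate() from the earlier sketch

def resize_with_numerical_lens(object_field, wavelength, pitch,
                               d_object_to_lens, focal_length, d_lens_to_image):
    """Magnify or reduce a captured object field with a numerical thin lens:
    propagate to the lens plane, apply the lens phase factor, propagate on."""
    ny, nx = object_field.shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2 * np.pi / wavelength
    lens_phase = np.exp(-1j * k * (X ** 2 + Y ** 2) / (2 * focal_length))

    at_lens = angular_spectrum_propagate(
        object_field, wavelength, pitch, d_object_to_lens)
    return angular_spectrum_propagate(
        at_lens * lens_phase, wavelength, pitch, d_lens_to_image)

# With object distance a and image distance b chosen so that 1/a + 1/b = 1/f,
# the image plane holds the object field magnified by roughly b / a (inverted).
```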
By this procedure, we can get the resized object field. This is a computer hologram for a resized object, named Hamsters. This one is the original object field, this is the magnified, enlarged object field, and the reduced object field is here. And this is the result. Hamsters is also exhibited over there — please check.

And this is the final topic. We are now working on using multi-viewpoint images as a background. Two-dimensional digital illustrations and pictures have been used for backgrounds so far, but a multi-viewpoint image is desirable for the background of the 3D scene because it shows disparity. A holographic stereogram is numerically produced and arranged in the 3D scene. This is an example: this is the multi-viewpoint image, and in front of it we arranged the CG-modeled object. To calculate the wave field from the multi-viewpoint image, I used the same method as for a holographic stereogram: the field is produced for this viewpoint in the view plane, and the next, and the next, and then we propagate this view-plane field backward to the object plane. This is the result of the simulation, and this hologram is also exhibited in this room — please check.

Oh, this is the final slide. I wrote a lot of software to create these computer holograms, and the software can be downloaded from our website — this is the address, or search for the keyword "wave field tools". However, I'm sorry, all the contents are written in Japanese. If you cannot understand Japanese, please ask a Japanese friend to translate it; I'm sorry for this inconvenience. The current software is provided as a source-code library. Windows software will be available in a few months — we are now working on this, but I'm not sure when it will be released. And the website contents will be translated into English someday, possibly. I'm sorry. Thank you very much.
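As a closing technical note on this talk: the silhouette-masking step described earlier (propagate the background to the object plane, block it inside the object's outline, add the object's own field, propagate to the hologram plane) can be sketched in the same style. Again this is only an illustrative outline, reusing the angular-spectrum routine from the first sketch; the switchback acceleration for per-polygon masking is not shown.

```python
import numpy as np
# assumes angular_spectrum_propagate() from the earlier sketch

def composite_with_silhouette(background_field, object_field, silhouette,
                              wavelength, pitch, d_bg_to_obj, d_obj_to_holo):
    """Silhouette masking: light arriving from behind an opaque object is
    blocked inside the object's outline before the object's own field is added."""
    # 1) bring the background field to the object plane
    bg_at_object = angular_spectrum_propagate(
        background_field, wavelength, pitch, d_bg_to_obj)
    # 2) block it inside the silhouette (silhouette == 1 where the object is)
    masked = bg_at_object * (1.0 - silhouette)
    # 3) superimpose the object's own field and continue to the hologram plane
    scene = masked + object_field
    return angular_spectrum_propagate(scene, wavelength, pitch, d_obj_to_holo)
```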
Recently, we presented some high-definition full-parallax CGHs calculated by our polygon-based method and fabricated by a laser lithography system. These holograms, composed of billions of pixels, produce very fine spatial 3D images of occluded virtual scenes and objects. The optically reconstructed images are comparable to those of classical holography. The strong sensation of depth caused by these high-definition CGHs has never been achieved by conventional 3D systems and pictures other than holography. In addition, we have also presented a new technique called "digitized holography." In this technique, fringes caused by interference between a real existing object wave and a reference wave are digitally recorded over a wide area and at high sampling density by using image sensors. The recorded object wave is incorporated in a virtual 3D scene constructed of CG-like 2D and 3D objects, and then the virtual scene, which keeps the proper occlusion relations, is optically reconstructed by the technique of CGHs. This technique makes it possible to digitally edit holograms after recording and will open the world of a novel digital art, referred to as Computer Holography. Various source materials can be input data in computer holography, for example digital photos, illustrations, polygon-mesh 3D objects, multi-viewpoint images and captured fields of real existing objects. The 3D scene including these materials is designed by employing a field-based digital-editing technique and optically reconstructed by the CGH technique as the designer intends. We will present details of the technique as well as the concept of computer holography. Furthermore, some of our works in computer holography will be demonstrated at the meeting.
10.5446/21037 (DOI)
Hello everybody. Works? Yeah? OK. My name is Jacques Desbiens. I'd like to bring you some observations on synthetic holography. This is, in fact, a follow-up to the presentation I made in Shenzhen in 2009. You were not at Shenzhen? Well, shame on you — you missed something.

I use the term synthetic holography because I want to differentiate it from digital holography. To me, digital holography means generating the fringe pattern itself, while synthetic holography is a term I started using with people from computer graphics. They already knew what I was talking about, because they use "synthetic images" for computer graphic images. So I thought it was a good way to name what we call holographic stereograms, holographic panoramagrams, computer-generated holograms. Basically, it's simple: it's a set of many images that you use to make a hologram.

Now, I should point out that I don't have a scientific background. I have a very thin experience with analog holography. I come from perspective; I'm an art historian. I draw a lot, but I've never had any courses in drawing — I read books, look at art. And then in the 80s I had doubts about many things in art, and I thought, OK, I have to test. So I started experimenting. So I'm an art historian who does experimentation, which is very rare in the art history field. I've always worked with optics and geometry, and I'm a specialist in perspective and spatial representation.

I ended up in 1998 working with a team to develop a large-format, full-colour holographic imager. The company was called XYZ Imaging, and we developed this machine in collaboration with Geola. You may know this technology under the name of the Geola printer or the RabbitHoles printer, which recently was sold to STM Holographics in Toronto. I left in 2004 and continued making my experiments. What I was doing there was image quality analysis: I would make a few images on my computer, print them, put them on a wall and, using a spectrometer, different tools, a CCD camera, make analyses of MTF, distortion, colour charts and all that. It gave me very hands-on experience of what the image was and what I could do with it. So I left, and then I continued making experiments. Since I like history, you'll notice in what I will show that I use my research in the history of art and science as content and as a way to bring a little more information into my holograms. I could do geometric or abstract works, but for me there's no real difference, so I use a lot of history.

So let's start with a few points from history. In the late 19th century, Étienne-Jules Marey used chronophotography: he simply took a lot of photographs, very fast, of something moving in front of him, and used this to analyze movement. Later in his life, an interviewer asked him how it felt to be a pioneer of cinema, and he didn't like the question. He said, I have nothing to do with cinema; I never wanted to reconstruct movement, I just want to decompose it and analyze it. For many of you, this is an important man, because he wrote a book called The Graphical Method. So when you use a histogram, when you use photography, when you use holography to analyze something, he's your father. A bit later, the Lumière brothers — so I was saying that a few years after, the Lumière brothers took Marey's idea and used the same kind of set of images, of divisions of time. In fact, they took the movement and made a lot of images of that movement.
And they invented the cinematograph, and they had the first movie. But notice something: in this image, the camera is dead centre. It doesn't move; everything moves inside the image. What the Lumière brothers are doing is using photography — for them, it's just photography with movement. It's still borrowing from another medium. So they sent their cameramen all over Europe and asked them to film cities. They sent Alexandre Promio to Venice, and he had this very nice idea: he put his camera on a gondola and started filming the canal. So here you have the first travelling shot in history. Usually we say that Lumière invented the cinematograph and Georges Méliès invented cinema, because Méliès invented special effects. But in fact, Promio made the first narrative tool that belongs specifically to cinema. Lumière borrows from photography, Méliès borrows from theatre, and Promio does something that no other technology could do at that moment. So I would open a little parenthesis and say: you have there the start of the cinematographic language. And then a few years after, another trick is adopted, and another one is invented, and another one, and another one, and it continues — after more than 100 years, new cinematographic language is still being developed. So frankly, how can we ask holography, after barely 50 years of development, to have a full set of holographic language and produce full-grown artworks? We are just beginning, in fact, to develop those things. What I think is interesting here is that they are using a concept called fragmentation — a set of many images to bring a new kind of image — and this concept was very interesting to me.

So I'd like to go even farther back in history and bring you a quotation from Francesco Maria Grimaldi. He wrote an optics treatise, published in 1665, the Physico-mathesis de lumine, coloribus, et iride — the physics and mathematics of light, colours, and the rainbow. For you, I would say this man is very important, because he is the first one in history to use the word diffraction. He says: a fourth mode of lighting is known here, and now I propose to name it diffraction. The first time you have this word. And his definition of diffraction is very interesting: at certain moments, light is divided, being partitioned, multiplied, sectioned, and separated. I would say that to our modern eyes this is a very simplistic definition — ours is maybe more complex — but if you think about it, it's still valid. It's maybe incomplete, but it's still true.

When I read this, I was working on the development of the holographic imager, and I was thinking about how many times we break up the images and the content. So I thought about fragmentation, and I thought there are many levels. In synthetic holography, you start by dividing your field of view: you're in your computer graphics program, you set up your virtual camera, and you take one image, and another, and another, and you have a thousand and more images of your whole field of view. And then you fragment a little bit more, because it was a direct-write system: we used an algorithm to break up groups of images and recombine them to make new images that fit the optical system of the holographic imager. You also have a fragmentation of the image into three channels — red, green, and blue. You have a fragmentation of the hologram itself into thousands of little holographic cells, or holographic pixels, or hogels, whatever you want to call them. I call them cells.
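Before the last level of fragmentation, a note on the "algorithm to break up groups of images and recombine them" just mentioned. In the simplest horizontal-parallax-only case this is a re-binning of the rendered camera views into per-hogel images: each hogel needs one pixel column from every view. The sketch below shows only that simplified re-binning; the actual XYZ Imaging/Geola pipeline, with its full-colour, full-parallax optics, is more involved and is not reproduced here.

```python
import numpy as np

def views_to_hogel_images(views):
    """Re-bin perspective views into per-hogel images (HPO, simplified case).

    `views` has shape (n_views, height, width): one camera image per
    horizontal camera position.  Hogel x across the plate needs, for each
    viewing direction, pixel column x of the corresponding camera view,
    so the re-binning is essentially a transpose of the view and column axes.
    """
    # result[x] has shape (height, n_views): rows keep vertical detail,
    # columns run over viewing directions (i.e. over the original cameras).
    return np.transpose(views, (2, 1, 0))

views = np.zeros((256, 512, 1024), dtype=np.uint8)   # 256 rendered views
hogel_images = views_to_hogel_images(views)           # 1024 hogel images
```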
And then, to make your hologram out of these cells, you use diffraction as it was defined by Grimaldi — so, fragmentation of light. So I thought, with that many levels of fragmentation, maybe you have fragmentation of content also, and so, fragmentation of narration: the way you bring your information into your hologram may be fragmented too. So I decided to do an experiment.

In 2005, I made the Tractatus hologram. For me, it's an experimental hologram: I wanted to check something, but I decided to use a subject related to history. So I made a fictitious treatise on holography as it might have been written in the 17th century — like a perspective treatise, in fact, but with 3D illustrations that move when the observer walks in front of it. I divided my field of view into three parts: from the left part of the field of view you see two pages, from the right part you see two other pages, and the centre is just kept for the page turning. To me, that showed something: I can already double the content of information in my hologram by dividing my field of view. And I had this big time smear in the middle, and I remember when I arrived and looked at it, I said, oh no, that's a big defect. And then people started to say — everybody, in fact, in the lab and afterwards — oh, I love this big blurry thing in the middle. So, OK.

So in 2009, just a few months after Shenzhen, I decided to push this experiment a little further, and I made another hologram, of which a smaller version is exhibited in the MIT Museum — tonight you will see it. It's 1.5 metres wide by 30 centimetres. It's a smaller version because the original one is 3 metres by 60 centimetres. I've never exhibited the big one; nobody wants it, it's too big. We worked so hard at XYZ Imaging to make large-size holograms — nobody wants them. Anyway, you have here three images of that hologram. It's one hologram, but three different compositions. From the left part of the field of view you see a blurry landscape with a mountain and some calligraphy on top of it; in the middle you see a book, paper, drawings, images, and calligraphy; and in the last part, on the right, you have water, the book slowly disappears, and you also have calligraphy.

This one is different from the Tractatus, which changes radically from two pages to two other pages. In this one, every part of it — every object, every form — is independent from the others. So even though you have three main compositions that change from one to the other, they don't follow the same rhythm; they don't follow the same timeline, I would say. Some objects start to transform later, others before. So it's very, very chaotic. And my idea was that you have to look at how people observe holograms: perception by itself is chaotic, and observation is even more chaotic. People go left, they go right, they reverse, they go in front, behind, they stop, they go slowly, fast. You have absolutely no control over that. So instead of doing something ordered like the Tractatus — where I know people will follow a certain order because there's a text (well, most people go from pages three and four and then one and two, so who cares?) — why not fit this chaos of observation? So it's a hologram that goes all the way around, and I brought in texts in Chinese, Arabic, Renaissance French, Latin, Greek, and Italian.
I know that maybe somebody — I don't know this person — but I know that most people won't be able to understand everything that's going on there or interpret everything. So that is not important. So, already? OK, so I've used five minutes here. Yeah? I will try.

OK, so you have many, many tricks here that use superposition, alignment, and different ways to bring content. For example, this text talks about the invention of writing by Cangjie, a mythical Chinese character who looked at animal tracks and decided to make little drawings that describe objects. And when you move, some bird tracks appear behind, on the surface of the hologram. In another part, you have an Arabic text on research and experimental methodology, and in front of it a drawing by Ibn al-Haytham (Alhazen), who is an important character in the history of optics; this drawing is the first correct description of stereoscopic vision. And when you look at the hologram — the video — you will notice that the transformations between all those scenes adopt some kind of wavy movement. It's like a wave. That is not computer graphics; in fact, it is time smear — an effect of time smear, which is a distortion. Most of you don't like time smear; it's a defect. But this, I think, is a very interesting tool in narration with holography. I also added a Leonardo da Vinci drawing on top of the interference pattern. And here I made a calligraphy that, when you move just a little bit, makes a big, colourful swirl.

So I'd like to conclude with the last one; it takes a minute. One of the ways of bringing narration I used is that you have a mountain at one point, and when you move, this mountain transforms into a calligraphy that keeps the shape of the mountain. And this text, to me, is very important. It's from Guo Xi, a Chinese painter of the Song dynasty who was very interested in how we represent space. This text that you have on the left brings a few things that I think are very interesting. He describes the variation of distance when you look at a mountain as the change of shape with every step one takes. He also says that when you vary your point of view, you have a different shape of the mountain as seen from every side. And then he concludes by saying a single mountain combines in itself several thousand appearances. Here you have fragmentation again — several thousand appearances. And you also have, in the change of shape with every step one takes, content metamorphosis: things change when we change points of view. And I think this applies perfectly to synthetic holography. And I would conclude like he did: should we not realize this fact? Thank you.
A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space, which makes us consider synthetic holography as a multiple-points-of-view perspective system. In the composition of a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer generated holography, we tend to consider content variations as a continuous animation much like a short movie. However, by composing sequential variations of image features in relation with spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field of view division to another. In synthetic holography, metamorphoses of image content are within the observer’s path. In all imaging media, the transformation of image features in synchronisation with the observer’s position is a rare occurrence. However, this is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.
10.5446/21042 (DOI)
My name's Tristan, and I'm going to be talking about hand-drawing holograms. Hopefully some of you got a chance to see some of the plates I have against the wall over there and on this rotating turntable in the back.

There's kind of an ongoing debate over whether these images could actually be considered holographic, and while this is sort of outside the scope of the paper, I just wanted to address it really quickly. "Holographic" can be defined as "of or pertaining to a document written wholly in the author's own hand." So I don't really see what the debate is.

There are several problems with the holographic image, if we can call it that, produced by an abrasion hologram. One of them is the presence of a pseudoscopic image: for every point that we draw — every full circular scratch — two points of light appear. That causes a problem; it's as if we're trying to draw with a pencil that has two points on it. Another problem that's been observed by many commentators is a swinging distortion: as we move in front of the plate, the object appears to swing downward in an unnatural way. And there's also been a lot of confusion in determining the absolute position of a point: some people have said the radius of the scratch should be half of the distance at which we want the point to appear, and there are different accounts of where it sits in relation to the plate. And then also, how do we plot these points? How can we easily draw whatever arbitrary object we might want without mathematically calculating a whole series of separate points and then plotting them on a grid, which is a laborious process? So I'm going to address some of these questions.

To begin with, if you're not familiar with the basic optics: we have a circular scratch that I've drawn here, and a light source is reflected in the plate. This is the reflected path of illumination, and two points appear — this point appearing behind the plate and this point appearing in front of the plate. Nils Abramson has described in his paper on the topic how we can predict that basic phenomenon.

So, as I mentioned, this causes a problem: how do we separate out individual points so we can draw something useful? William Beaty, who you may have heard of, is the man who first codified and recognized a formal technique for drawing using this effect — though I've been told there were earlier investigators, Hans Weil and a Russian researcher, working on this as well; it's been discovered independently by many different people. At any rate, Beaty's solution was to separate the circles — the scratches — into shorter arcs, half arcs, this one representing an above point and this one representing a below point. This has the advantage that, by making shorter scratches, you can actually create animation effects: you can have objects which turn on and off, and you can have occlusion, with objects passing in front of other objects. So you can see here — if it's going to play, hopefully. Is it going? There we go. Here's a video of a hologram drawn using that half-arc technique. This actually works very well when you have a wall-mounted presentation with the illumination overhead, so it's a passable solution. However, it causes problems if the viewing angle or the light source rotates fully around the plate.
You can see here: if we put the light source below, now all of a sudden the same scratch arc represents an above point as opposed to a below point. The result is this pseudoscopic flipping — you can see this image of the face goes from a mask into a face.

OK. So Nils Abramson offered a very clever, different solution. He said: why not just go with this limitation? If we have these two points, why don't we just draw an object that's reflected across the plane of the plate? So we're actually using both pseudoscopic points — the real and the virtual object — combined into one single object. You can see here the cube described by Abramson, made of the real and virtual objects: this upper part is behind the plate, and from here forward is in front of the plate.

OK, but we noticed that there is still this swinging aberration: as the viewer moves forward, the object kind of rotates downwards. So is there a solution to this? You can see it there. One more. OK, so this is our plate. It turns out that the swinging aberration can be avoided completely if a tabletop display is acceptable. Here we have our point with a circle of radius r, illumination provided from above and reflected in the plate, sending the reflected path of illumination along the surface normal. In this case, we can predict exactly where our point of light will appear from the intersection of the line of sight with the reflected path of illumination. So we'll see above and below points, as you can see. If we trace several of these, we see that a series of concentric circles around the same centre point will trace a line in space. Just a quick note: in the foregoing we've assumed that the lines of sight are parallel. In actuality they converge at the observer's eye, so the virtual object is actually going to be a little deeper in reality. As the viewer moves back, however, the lines of sight approach parallel, so the further back you are, the less this distortion is a problem; at viewing distances of a few feet it tends not to be a problem.

Also notice that in this arrangement, where the viewing angle is 45 degrees, we have a 45-45-90 triangle here, so the radius of the scratch is going to be equal to the depth of the point. This turns out to be a very useful property for drawing these, because it means we can work from elevation drawings — I'll return to this point in a minute. Another result is that we can rotate the plate and have the object appear consistent, without distortion. If we take the original cube construction described before, here's how it would look if we draw it for a 45-degree viewing angle illuminated from above: it remains consistent under rotation. This can be generalized to any shape that is reflected across the plane of the plate. Of course, that's also a limitation: we're still locked into forms that are mirrored across the plate. Another limitation is that we can't use Beaty's animation effects, with objects occluding other objects. The implication is that we can only draw wireframe objects with this technique, because each arc length is going to reverse itself under a 180-degree rotation.
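To make the tabletop geometry concrete: with overhead illumination and a roughly 45-degree viewing angle, each scratch circle of radius r centred at (x, y) shows one point about r above and one about r below the plate at that (x, y). A minimal sketch of the resulting drawing rule for a mirrored, Abramson-style wireframe object is below; the units and sampling are arbitrary illustrative choices, not taken from the talk.

```python
import numpy as np

def scratch_circles_for_mirrored_wireframe(points_3d):
    """Tabletop geometry (overhead light, ~45-degree viewing): a full circular
    scratch of radius r centred at (x, y) shows a point r above AND r below
    the plate.  For an object mirrored across the plate, every pair
    (x, y, +z)/(x, y, -z) is therefore produced by a single circle."""
    circles = []
    for x, y, z in points_3d:
        circles.append((x, y, abs(z)))          # (centre_x, centre_y, radius)
    return circles

# A vertical edge of the mirrored cube, sampled every millimetre (metres):
edge = [(0.0, 0.0, z) for z in np.arange(0.001, 0.031, 0.001)]
print(scratch_circles_for_mirrored_wireframe(edge)[:3])
```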
Laying that aside for a second, let's examine another orientation. In this case, we're going to start hinging the plate upwards from the surface of the table. As we do this, the surface normal of the plate tips towards the observer, and with it the reflected path of illumination also rotates. As you can see, as we do that, the holographic point effectively travels up the line of sight of the viewer. If we continue tracing that path, it travels all the way up the line of sight until the light is reflected into the eye of the viewer, and the point below appears at an infinite distance below. Of course, at this point the image of the light source is also reflecting into the eye of the viewer, and the object is completely distorted. But if we continue our rotation, that intersection point travels back down the line of sight of the viewer, finally reaching, at 45 degrees from our table surface, a point where it has travelled back down to its original altitude above the plate, just as in the tabletop orientation. In other words, we again have a 45-degree angle between the viewing direction and the reflected path of illumination, creating a situation where the height z is equal to the radius. The major difference is that in this orientation the points are no longer centred over the centre point of the scratch; they now sit along the normal to the line of the scratch. Notice that what we've described here is the wall-mounted display: the viewer is normal to the plate and the illumination is from 45 degrees overhead. This is important because there are two very different kinds of optical constructions to consider, depending on whether we're doing a table-mounted or a wall-mounted display.

So here we can see that if we now draw the same series of concentric scratches, instead of representing a line normal to the plate through the centre of the concentric circles, we have a line that's at 45 degrees to the plate, still passing through the plate at the concentric centre. What this means is that, if we return to our cube construction, all of the verticals of the cube are described by these concentric circles at the corners, and those lines, in the wall-mounted display, as we've just learned, actually represent lines at 45 degrees to the plate. So what we're really looking at is a rhombic prism: these squares are parallel to the surface of the plate, but these faces are elongated rhombuses. Again, if we look at our movie — hopefully, if it plays; let's try it again — we can clearly see that we're looking at a rhombic prism. This actually explains a lot of the swinging distortion, because what we thought was a cube is in fact an elongated and distorted shape.

So how do we draw a cube that really is normal to the plate? Can we do it? If we return to looking at these arcs as independent units, it turns out that we can. If we shift the above and below arcs upwards, we bring our points of light along with them. So now the unit of our scratch, instead of being a circle, is these two conjoined arc lengths. If we add these diminishing arc lengths, all ending at a shared centre point, we will have a line that's normal to the plate. These become the units of our new scratch form.
So now we're ready to define a cube construction that is normal to the plate for wall-mounted display, and it looks like this. You can see, here's our square, and the circular arc lengths are joined at the corner points, with representative above and below points at the midpoints of the edges. Here's a video of our corrected cube, with 45-degree illumination, viewed from the normal. You can see that the swinging distortion has been much reduced and, as would be expected, where the upright points pass through the plate normal towards the viewing direction, they approach dots as they cross our line of sight.

OK, so everything we've described so far deals with this basic problem of the as-above-so-below construction: we have these two points, and we've worked around that in various ways, either by using half arcs or by just going with it and making a construction that uses it to best advantage. But now let's look at a way we can actually solve this problem. It turns out that we can solve it by creating a tool where the profile of the scratch is other than symmetrical. Here we've used a 90-degree diamond drag engraving tool with a specialized compass that allows us to adjust the angle of the scratch. You can see here: this would be a normal scratch that scatters light in both directions equally; here we've created a blocking wall that only allows light to scatter in this direction. So here, C represents a point of light below the plate only, D represents a point of light above the plate only, and here we have both above and below images. So now we have three different kinds of marks that we can make. Here's a compass that I machined to allow this adjustment, and an alternate version — this one just has a bent tip, and by flipping it over I can change it from above to below.

Here's a video showing the first test that I did: can we draw a cube above a circle, which was otherwise impossible with prior methods? And, a little more fun, we can then go on to this kind of petal form, which some of you might have seen — it is wholly above the plate, and there is no reflection. OK. So we can also now complete the arcs of our previously described wall-mounted display back into full circles.

OK. Now, I don't have a lot of time, but I want to very quickly describe a couple of other areas of exploration. This is a visualization I did after realizing that if we take our original formation of the tabletop display and, in place of where our holographic point appeared, put a small bead or other object, it will cast a shadow from our reference illumination down onto the plate. Now, if we substitute another point-source light for our observer's eye, it will cast a second shadow at the circumference of the circle. What this means is that we have a reference projection and an object projection. The reference projection represents the stationary point of the compass; the object projection represents the travelling point of the compass that inscribes the scratch. As you can see, if we rotated this plate, this projected point, S here, would scribe this circle. So what does this mean? It means that we can very quickly draw any holographic image we might want by imagining a reference projection and an object projection. So here, this is a line that's inclined at this angle to the plate.
So essentially what I've done is drawn a reference line — because the line will cast its shadow from directly above straight down onto the plate — and then drawn the other shadow at this angle. Since our illumination is at 45 degrees, as we described before, radius will be equal to height, so we can simply draw an elevation line at the inclination we wish to scribe, draw these perpendicular lines up to it, and scratch any point along that line. If the line crosses through the plate, then we should scratch below lines using our above-and-below tool: these would be below lines and this would be an above line.

Neither do our object and reference lines have to be straight lines. In this case, our reference line has become an ellipse and our object line has become a straight line, and together they describe a circle at 45 degrees to the plate. This makes sense: our reference projection is from directly above, and what is the projection of a circle at 45 degrees straight down onto the plate but a 45-degree ellipse? On the other hand, if we have light shining on that same 45-degree circle from 45 degrees, it projects to a line at its narrowest, so we get this kind of squashed form here.

OK, so the other area of study is a Spirograph drawing tool. What happens if we start looking at epitrochoidal, hypotrochoidal, and elliptical scratch patterns? I don't have time to go into all the details, and I only really touch on it in the paper, but I just wanted to show you: here's a drawing illustrating how we can predict the holographic forms that will be seen — in this case, a series of rotated ellipses. It turns out that under rotation, instead of having one consistent point that appears at a consistent altitude, we have an eccentric orbit. You can see here a video showing a series — this was done with a Spirograph. Notice that the points in this wheel, these dots, actually rotate at twice the rotation rate of the plate. Just two more slides. This is also a very interesting form — it's a little hard to see, but this is an expanding spiral that was done with a Spirograph scratch, so you can see this spiral form that actually grows as it rotates.

And the final area I just wanted to mention is that I've also been working on using a conventional etching press to reproduce these holograms — embossing them. I've been working with the printmaker Richard Nielsen of Untitled Prints and Editions in Los Angeles, and we're successfully taking master scratched plates, running them through an etching press, and pressing them into foil and other reflective materials. It turns out that, using a special tool designed to raise the maximum amount of burr — the burr being the displaced material sticking out from the surface of the plate — we can create an embossed hologram that has all the same properties as the original. This offers a really exciting possibility for people who want to draw holograms by hand: all of a sudden, you can take all the time to draw a very complicated form and then reproduce it.

Anyway, I think this is a very interesting area of study, and it's my hope that this will inspire others to give a little more attention to this field. Thank you.
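As a closing note on the drawing geometry described in this talk: the elevation-drawing rule — under the 45-degree geometry the scratch radius equals the point's height, the centre is the point's footprint on the plate, and the asymmetric tool selects the above or below image — lends itself to a simple drawing-plan generator. The sketch below is a hypothetical illustration of that bookkeeping, not a tool the speaker describes; distances are in metres.

```python
import numpy as np

def scratches_for_segment(p0, p1, n, above_below_tool=True):
    """Sample a 3D segment and emit one scratch per sample, following the
    elevation-drawing rule for the 45-degree tabletop geometry: the scratch is
    centred at the point's (x, y) footprint, its radius equals the height |z|,
    and the asymmetric tool selects only the 'above' or only the 'below' image."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    scratches = []
    for t in np.linspace(0.0, 1.0, n):
        x, y, z = p0 + t * (p1 - p0)
        side = 'above' if z >= 0 else 'below'
        if not above_below_tool:
            side = 'both'                      # plain symmetric scratch
        scratches.append({'centre': (x, y), 'radius': abs(z), 'side': side})
    return scratches

# A line rising through the plate, from 10 mm below to 20 mm above:
plan = scratches_for_segment((0.00, 0.00, -0.010), (0.06, 0.00, 0.020), 13)
```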
The depth illusion apparent in light reflected from circular scratch patterns has been noted independently by many commentators since the 1930’s (e.g. Weil, 1934; Lott, 1963; Walker, 1989). In the early 1990’s William Beaty compared this illusion to holography and formalized a technique for creating 3D drawings by hand, which he called “scratchograms” or “abrasion holography.” Several recent publications (Regg et al., 2010; Augier & Sánchez, 2011; Brand, 2011) explore computer-aided methods of producing abrasion-type holograms, using CNC engravers, and milling machines. Very little, however has been published in the way of expanding the techniques available for hand drawing abrasion holograms. I explore new, hand-drawn approaches to abrasion holography, presenting a variety of techniques that expand the possibilities of the medium. Complex curves and organic forms can be constructed by hand more easily and intuitively than previously described methods, allowing for more diverse and artistic effects to be achieved. In an analysis of reconstruction lighting and viewing geometries, I suggest solutions to reduce or eliminate distortions present in abrasion holograms (such as the “swinging” sensation experienced with motion parallax). Various tools, materials, and scratch geometries are considered. I also present a new class of hand-drawn abrasion holograms that exhibit novel animation effects. In conclusion, I outline preliminary findings related to the duplication of hand-drawn holograms using a simple foil embossing process. I detail these findings along with illustrations and test plates. I also will show examples of artistic works exploring the medium of hand-drawn abrasion holography. In the field of hand-drawn abrasion holography we have, so to speak, only scratched the surface of what is possible. As a medium, hand-drawn abrasion holography offers many interesting and as-yet unexplored possibilities. It is my hope that the investigation presented here will inspire further exploration of this unique medium.
10.5446/21043 (DOI)
First I have to say, Ian, it's 20 years since the first ISDH I visited. I was then a student in England at the Royal College of Art, and when I came to Germany I set up a small studio of one by two metres and made some works. From there on, the German Aerospace Center (DLR) in Braunschweig supported me in putting up a bigger studio for holography, and since then I have been working for them, making illustrations.

Let's see, I'm showing you a video. Let's push it there and see if it works. Come on. It's still not there. So, while this movie is loading: I'm presenting here some research from the Institute of Technical Thermodynamics at the German Aerospace Center, the Deutsches Zentrum für Luft- und Raumfahrt (DLR), in Stuttgart, and the work of several leading scientists. One of them is Dr. Josef Kallo; he is head of electrochemical systems at DLR and leads a project on a plane that flies entirely on electric power from fuel cells. I'm sorry about my voice — the problem with these ISDH parties, I think, is that they're getting too long, and it really goes at my voice.

The electrochemical energy technology department works on the development of efficient electrochemical energy converters, mainly batteries, fuel cells and electrolyzers, whose importance for future power systems, both in stationary power supply and in electromobility, increases continuously. The department's activities range from cell design, manufacturing processes and diagnostics to system optimization and demonstration. The scientific and engineering challenges for electrochemical storage technology and energy conversion consist of handling the conflicting goals of efficiency, operating life, convenience, safety and cost. Over the last two years I have been able to illustrate power storage and energy-producing systems for multi-view presentation, and the final output of this project should be a hologram.

So, how do we make a hologram out of this system? At the moment we are recording the Antares with a stereoscopic rig, so you have just a pair of two video streams. What you have to do to make multi-view or holographic images is divide them into multi-view images. There is a program — one you can buy in Dresden for 5,000 euros — which creates a depth map from a stereo image, and then, with a mathematical system, calculates some views in between and some outside.

And I wanted to show something else we have done. Let's see another video. This is a 3D scan by the German Aerospace Center of Schloss Neuschwanstein: they had a point cloud which was first converted to splines and mapped, then exported and imported as 3D Studio Max and Cinema 4D files, and we used this kind of data as well to create a hologram. The first thing I did was exactly what Zebra does: put in a camera and let it fly from left to right. When we printed it the first time, the first image was absolutely blurred. So what we had to do was slice the whole building, convert everything again, put the camera one kilometre away, and then simulate a plane flying from outside to inside while taking photos. Can you imagine this? You have to extend your distance so that the whole beauty of the Schloss is recorded on the hologram. Another one was done with Berlin-Adlershof at DLR: we made a hologram of Berlin.
We calculated it like Google Maps imagery, but we had the original 3D data, and we printed this with RabbitHoles, and it works very nicely. You have the Reichstag and you can see the images in full depth. The point is not to make something for marketing or something merely beautiful; the reason we are doing this is to test the data in illustration and to find out whether the research has many more different applications. Where are the applications? For example: is it possible to make a kind of street map? Or, with something we made from hills — is it easier, when you have a holographic map, to use it to plan your bike tour, going up and down the hills, and things like this? All this work has been done to illustrate and to test such questions.

So let's put this one away again. This one is actually one of the very early works we did with the whole system at DLR. Sorry that the PowerPoint does not work on this PC, but I can show it to you. We have a computer program with an XYZ axis of space — you may remember, 20 years ago, when you flew from Europe or from Asia to America, you arrived on time but then had a delay, because you spent 20 minutes flying circles around the airport before you could land. What they did was build a program with four-dimensional data, plus weather conditions, arrival times and all the other planes; to transmit all this information they put it into a cloud of data, and in the end we worked out how to illustrate this kind of program. Finally we did it with a game engine — I had an Amiga 500 at that time — and we did all the calculation of the plane, the flights and the path, and with Rob Mondes' Daiho system we made the first four-dimensional flying plane and showed how this system works.

So we can see, with the Antares and with the laser scanning system, that over time one of the targets — illustrating scientific research — is a good, big market, because to finally see it in 3D is a very interesting proposition. For the Antares — when we did the Antares DLR-H2 — the hologram itself did not contain enough information, because going from right to left you get only a small animation. Finally, as Ian was saying about the Toshiba for $8,000, you have a 3D movie where you can really have a long sequence showing how the plane flies and how it works, and you can present it everywhere, at any convention, very easily. You don't have to set up lighting, you don't have to think about it; this Toshiba has just a very small mini PC behind it, and you don't have the problem of a too-old computer with PowerPoint. It works, you project in 3D, and people can even interact with it like a kiosk system. It's a wonderful system — so why still make holograms?
What is so wonderful when we worked with Geo-Wheeler-Rapid-Tolls or something you have a hard copy you just take it in your back you just click it somewhere on on the wall but you have to take care of the lighting so if so one has the electronic the other one has still electronic it has a light so if you would have no light at all this application for any kind of scientific illustration in a new science magazine would be wonderful so I'm looking forward what kind of development comes on that we having more application than just this any question I kept it very short. We should have put it on our pz and because this one I don't know why it does not work anyway. You mentioned that you use a program to convert stereo to depth map to multi-view did you develop that yourself or did you bite somewhere? You're talking about the depth map. The depth map program I said it's developed in Dresden it's been used to normally the slice what I just mentioned is used to transform from single view or double view to multi-view so you can use single view your illustration and just putting back and forward like you've done in the past for yeah I mean it was it developed by DLR or it's some external no no it's it's this one is a private company I mean it works similar like the blue box which you can have from Toshiba or something that there are some resolution for professionals which cost $20,000 and you can put it in and transform it so the DLR itself just use it but the research is not for 3D but in Berlin it's a program which with side tracking or something so we worked as well with the Fraunhofer Institute some project and they do the development for certain things so we have one company in Dresden one in Berlin and so I'm just since 22 years a hawkling from one studio one department to the other and we do the as a 3D visualization and this is some application for holography and for 3D monitor. Thank you.
The digital hologram of the 4D arrival management system of flight guidance, the holographic animation of the Reichstag in Berlin, a flight with the atoms in a fuel cell for a glasses-free multi-view barrier monitor, and the first electric, noiseless, hydrogen-driven plane of the world, the Antares H2, documented with a stereo rig: the research of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; abbreviated DLR in this text) gives an incredible database for new image making, which some artists, like the author, have transformed into a visual stage. The outputs of the 3D illustrations have been in S3D, M3D and holographic media.
10.5446/21048 (DOI)
Thank you. I got in last night, and that party being discussed in the dorm yesterday — it turns out it was scheduled in my room. So I got to bed sometime after the police left, and I'm here.

When we begin a research project, we try to do a visualization, something that anticipates where we're going, and you can see we're not there yet, but we're certainly throwing every trick in the book at bringing this about. The thing that our corporate partners in the haptics industry are excited about here is the fact that it's autostereoscopic and that it can be co-visual. In this visualization, that means the user has his hand in the same place as the virtual hand that's holding the virtual scalpel. That means a lot in training, where with conventional haptic devices you're working the device over here and looking at a screen over there. The other thing that worked well, which we noticed almost immediately: when we were initially playing around with haptic physics environments on a computer screen, it was difficult — and we observed a lot of people having this difficulty — to understand the spatial relationship between objects on the 2D screen, because the haptic environment is actually a 3D environment, something that is spelled out in 3D. When we used the hologram, once the individual got the haptic device aligned with some object in the holographic space, they were able to readily grasp the spatial relationship between objects.

Right, so haptics: a handshake is haptic communication; haptic technology is a different thing. This is the part of the brain that synthesizes touch — and that's just the sensory side — so touch is a big part of our perception. Haptic technology takes a number of different forms, as you'll see, but its principal form is that of a joystick that pushes back: a thing that, if it wants to, can fight with you. Typically it's a group of servo motors, which is what we have here, and optical encoders that feed back the position of the device; we then go into specialized software and define objects as deformable objects, or rigid bodies that are movable, or objects that are rigid and not movable. All of those objects are defined in three-dimensional space.

Our group — this is a new acronym, and I'm going to stumble over it — Prototype Holographics for Art and Science Explorations, is a group of artists and designers, with scientists who come by from time to time and contribute to the research. None of the stuff you're going to see today is particularly new, but what we do is layer things to make them seamless. One of our very first successful examples of this: we took a pulse hologram of some scientists blowing bubbles, and then one of our graphics artists, Finley Patterson, who is obviously talented, created video graphics that had scintillation and everything you'd need to marry them with the hologram. We brought in people, showed it to them, and regularly fooled them into thinking it was one illusion, one construct. The critique of the medium in our lab is always: well, what you're doing is great — they pick their jaw up off the floor when they look at these direct-write holograms — but it's not interactive. Well, photographs aren't interactive either.
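For readers unfamiliar with how a force-feedback device like this "fights back", the core loop is usually penalty-based rendering: read the tool position from the encoders, compute how far the tip has penetrated a virtual surface, and command the servo motors with a spring-like force along the surface normal. The sketch below is a generic illustration of that idea, not the PHASE lab's actual software; read_tool_position() and the stiffness value are hypothetical placeholders.

```python
import numpy as np

def sphere_contact_force(tool_pos, center, radius, stiffness):
    """Penalty-style haptic rendering of a rigid sphere: when the tool tip
    penetrates the surface, push back along the surface normal with a force
    proportional to the penetration depth (a simple spring model)."""
    tool_pos, center = np.asarray(tool_pos, float), np.asarray(center, float)
    offset = tool_pos - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)                     # no contact, no force
    normal = offset / dist
    return stiffness * penetration * normal    # newtons, pushing outward

# Servo-loop sketch (hypothetical): read encoders, compute force, drive motors.
# force = sphere_contact_force(read_tool_position(), (0, 0, 0.05), 0.02, 800.0)
```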
We haven't dispensed with photographs, but what we're trying to do here is take sound, haptics and a layered visual experience and come up with something that's interactive and fun, or educational. And, you know, the next time you see us we might have holograms that smell bad or something.

So we start with a model, and that model gets scanned with a virtual camera. We take that very same model and strip it of everything that is important to the holographic process; what we need for the haptic environment is just geometry. Then we import that geometry into the haptic environment, produce a hologram from the same model, and superimpose them. And if you do it right, you can touch the hologram.

Normally I have a bunch of credits at the end, and I'm a bit talkative, so I don't often get to them, so I'm interspersing the people. I get these hopelessly complex ideas, I take them and get grant money and corporate support, and then people like Natalie here do the work. Here's Natalie at the rig — and really, our presentation is over there, and we need you to go and try it if you haven't already. But I'll talk a bit about the components. Basically, it's a thing that is tunable in every possible way so that we can explore different configurations, but in the interest of making it interactive we've added a number of components, which in some cases are only partially deployed.

The first is a hologram holder. Right now we're operating in transmission mode, with a laser-illuminated hologram, and that of course gives us the sharp yet deep image that allows us to marry it with the video, which has to be on a different plane. The video in this case is typically 3D graphics, shown on an organic LED display. This is the future of cell phones and everything, as far as I can see — have a look at one at some point. You can see it from almost 180 degrees; it's just completely the same from any angle, and that really helped us in passing off this illusion of the video being somehow part of the hologram. As you'll see, OLED displays in their native form are actually transparent: where you would see black on a normal display, you can see right through the OLED. That's going to be very, very helpful. There's one being released next month for sale by Planetronics, and we're hoping to get our hands on it; it's quite a bit larger than what we have, and then we'll have a larger holographic display as well.

Then there's the laser module, the thing that is illuminating the hologram. Everyone here understands that the hologram has parallax — in this case horizontal parallax. In order to have the video keep up with that, what we've done so far is a cheat: when you're playing the drums over there and you bring the drumstick towards you, it increases in size; you push it away, it gets smaller. What we're really working toward is the Kinect device over there. Since this is a single-user interface, we can track the person's head and change the video perspective to make it conform to the holographic parallax view. We also have a row of super-bright LEDs.
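The head-tracked adjustment described above boils down to projecting each virtual point onto the flat video plane along the sight line from the tracked head, so the 2D layer stays registered with the holographic image as the viewer moves. A minimal one-axis sketch of that projection is below; the coordinate conventions and example numbers are illustrative only, not taken from the lab's software.

```python
def parallax_offset(point_x, point_z, head_x, head_z):
    """Project a virtual 3D point onto the video plane (z = 0) along the line
    of sight from the tracked head, so the flat video layer stays aligned with
    the holographic image.  z > 0 is toward the viewer, z < 0 behind the plane."""
    t = head_z / (head_z - point_z)            # where the sight line hits z = 0
    return head_x + t * (point_x - head_x)

# Point 5 cm behind the plane, head 60 cm away and 10 cm to the right:
x_on_plane = parallax_offset(point_x=0.0, point_z=-0.05,
                             head_x=0.10, head_z=0.60)   # ~0.0077 m
```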
So we've done a little bit of exploration in reflection mode, and again it's going to be necessary to take that OLED display and sandwich them together in a way that they look like one thing, and that's really going to compress what we're doing quite a bit size-wise. But the business with the super bright LEDs is that there are six sets of RGB components. LEDs are fairly narrow bandwidth, about 30 nanometers, and direct-write holography is, if you want, very narrow bandwidth. What we can do is turn on and introduce a view from one angle and then change that view, by having a hologram that has multiple reference angles originally, and that allows us to, for example, zoom in. So it's another way in which it's interactive. We can zoom in or change the holographic scene by changing the angle of reconstruction. We can also independently fade up the red component, marry that with the blue component, animate the blue component and then introduce the green component. So we're intending to multiplex that interactivity through these super bright LEDs. And then we bought a killer computer that takes all this stuff in, processes it in real time and displays it. And the haptic device in particular, our new haptic device, is incredibly interactive that way, very responsive, unlike the previous one. Didn't quite get Alex's picture for this credit, but Alex Lavric is a key player in Canada and in industry now, and he comes from an artist background. He's a fabricator, special effects guy and inventor. None of this would be here without Alex. I could have done something informative like do a bunch of research into the fabulous work that's going on at MIT in haptics, but my research just, well, you'll see. So I had to bring it and present this stuff instead. This is Disney, who's got a kind of a chair, or it can be an applique, I think it was on your back. And they figured out that if they take these transducers, they can create vibrations here and here and actually create a sensation in the middle of your back. And so they've got a whole chart and a whole demonstration of sensations that they create with this matrix of presumably piezoelectric transducers or something. And we're not sure what it's for, but gaming probably. This is a thing that many of you have seen already, but I think it's worth looking at. I did write the authors of this and asked them whether this was a hologram or not. I suspect it's not. As Michael demonstrated last night in video form, there are lots of people that think they have a hologram and they don't. When research scientists think that they're working in holography and they're not, that's a different thing. But this is actually a very cool invention. These are Wii devices, so very much like the Kinect. So it senses his hand and moves the video accordingly. But the best part is not that. It's this ultrasonic device that focuses ultrasonics onto my fingertips, so that, and of course, again, the Wii is tracking, but I'm able to feel the sensation of that ball on my fingertips, or in this case on my palm. So these ultrasonic arrays are able to focus that ball, and it's controllable, as you can see here. This work was done by the University of Southern California. Mutual touch over the net: I think we all know where that's headed. And this is just completely bizarre, but I see that over a million people have looked at this. So I brought it to show you. It takes him a little while to get it out, I guess. So the haptic part's coming up.
So this is sort of the inside of the thing. And this is the inventor. You guys should possibly get some help. Okay. I know everyone hates people reading their PowerPoints, so I won't read it. So we're going to have a look at some work in a couple of other projects that Finley's done, but when we're looking at this computer-generated drum kit, that's Finley's invention, and the rim shot, that's Andre. So one of the ways in which we felt that our haptic holography device might be helpful is in medical training, training medical procedures that are critical. And obviously they wouldn't be too involved, because of the limitations of the animation and so on in the hologram. But the business of a spinal tap, a lumbar puncture: currently how that's done is that they have these latex dummies, some of them full-sized people, and people who are training in this procedure go in and poke this dummy in the back. And what Natalie's done, Natalie created a hologram which she'll show you if you get her to pull it out of the portfolio there. And what we're proposing is a thing that's not photorealistic. It's a thing that teaches people the process of this spinal tap through a number of ways that would be impossible with the latex dummy. For example, there are 3D graticules here in space that would help the individual orient that needle going in. So it's not just about where it goes in. It's the attitude of the thing when it goes in. We also, with our super bright LED array, can zoom that in and have the thing 300% larger than human size, so that that new worker is exploring this procedure in a much less critical environment, and then zoom that out and zoom it out again. We can also have text and audio response, that sort of thing. So, future improvements to this: we're going to be using direct-write holograms like those made at Rabbit Holes and Zebra Imaging, and the OLED display, which is transparent. And these are the arguments I've made about why we think this is a good idea. Paul, one of our programmers, who's programming the haptic device. Our corporate sponsors, who make the coolest looking haptic device on the planet. These are some of the other projects we're working on. Natalie or Alex can show you an iPad that has a holographic maze superimposed over it, and you can play that game, a three-dimensional game. These are some of the short videos that we're going to use to make holographic interfaces for your iPhone. So this magnifying glass, which is holographic, will actually magnify the video underneath it. This is Finley's creation. We don't know why you'd want that on your phone, but it would probably be very helpful. This is a sundial: because your iPhone is tiltable and it knows where the sun is in Toronto at two o'clock in the afternoon on June 26, it can tell you the time when you get the reconstruction angle right. There's the maze. We need your help. We have to report on this stuff. We have to do field studies. What better place than here to do field studies on a haptic holography rig. But we need you to actually send comments. So I'm going to have this up again. Please, please write us about your responses to the workstation and how you're doing with this. Nothing happens without these interns in the lab. There's some contact information. I just want to add one further thing. When Steve Benton in the 80s came to speak at our institution, OCAD University, at question period he was asked, is holography the future of 3D imaging? To which he answered no. All the holographers were like, Steve, take it back.
He then went on to say that holography would be very likely part of a hybrid something. And so we hope that some of this research maybe points that way towards that thing that Benton used to call synthetic reality, sort of a layered synthetic reality or a hybrid structure. Thank you.
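The two display tricks described earlier in the talk, the distance-dependent size change for the drumstick and the head-tracked adjustment of the video perspective, come down to very simple projective geometry. The following is a minimal sketch, assuming a pinhole viewing model and an arbitrary 600 mm nominal viewing distance; the function names and numbers are illustrative and are not taken from the group's actual software.

```python
# Minimal sketch of the two tricks described in the talk (illustrative only):
# 1) The "cheat": scale a rendered object (e.g. the drumstick) with its distance to the viewer.
# 2) Head-tracked parallax: shift the rendered view according to the tracked head position
#    (e.g. from a Kinect), so the OLED video stays roughly in step with the hologram's parallax.

def depth_scale(z_mm, viewing_distance_mm=600.0):
    """Apparent scale factor for an object z_mm in front of (negative) or behind (positive)
    the display plane, using a simple pinhole model with an assumed viewing distance."""
    return viewing_distance_mm / (viewing_distance_mm + z_mm)

def parallax_offset(head_x_mm, head_z_mm, obj_z_mm):
    """Horizontal shift (mm) of an object drawn on the display plane so that it appears to
    sit obj_z_mm behind the plane for a viewer at lateral offset head_x_mm and distance head_z_mm."""
    return head_x_mm * obj_z_mm / (head_z_mm + obj_z_mm)

if __name__ == "__main__":
    # Drumstick pulled 100 mm toward the viewer: drawn about 20% larger.
    print(round(depth_scale(-100.0), 2))
    # Viewer steps 80 mm to the right at 600 mm distance; an object 150 mm behind
    # the plane shifts about 16 mm on screen.
    print(round(parallax_offset(80.0, 600.0, 150.0), 1))
```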
Haptic holography was perhaps first proposed by workers at MIT in the 80s, at the Media Lab headed by Dr. Stephen Benton, with published papers by Wendy Plesniak and Ravikanth Pappu. Recent developments in both the technology of digital holography and haptics have made it practical to conduct further investigations. Haptic holography is auto-stereoscopic and provides co-axial viewing for the user. Haptic holography may find application in medical and surgical training and as a new form of synthetic reality for artists and designers.
10.5446/21054 (DOI)
OK, good morning. My computer is very slow in booting, so, OK. I usually talk about computer-generated holograms, fringe printers or video displays, but today I would like to talk about something different: software we are developing for education in holography. So the title is here, and I'm Hiroshi Yoshikawa from Nihon University, Japan. And here is the motivation of our research. Learning with real optical components is the best way, of course. But for holography, there are so many special and expensive components, so for the beginner it is hard. Oh. Slow, very slow. Yeah, so for the beginner it is difficult to use the real components, but just reading a textbook is not enough, and self-study in the lab is hard, so the beginner needs someone teaching them about holograms. So we are developing a learning tool for optical setup for beginners, so the beginner can study by himself or herself. And this shows the conceptual idea of the research. As you know, we use augmented reality technology, so all we need is a computer and a camera. And in our case, we use markers. Each marker has some symbol, in this case a letter, and the program distinguishes which marker is assigned to which component. So first we take video and put down a marker, the computer distinguishes which component it should be, and the computer graphics object is superposed on the image. So here is the actual view of the learning. The learner is aligning a component, and on the computer screen there is a laser, mirrors and other components. And the very good thing with this tool is that you can see the light beam; in real life you would need smoke or, say, dry ice, so this makes it very easy to understand the role of the components. So this shows how it works. First we put down the markers, then the software finds the positions of each marker, in this case the laser, the polarizing beam splitter and the spatial light filter. Then it radiates the laser light and draws the light path. So it first calculates the light path from the laser and simulates correct light propagation. And we have implemented some visual effects on the components. One category is components that change the path: the polarizing beam splitter, mirrors, an objective lens as a spatial filter, and a collimating lens. And we also have another category of components which change or measure the power: the half-wave plate and polarizers. Actually, the half-wave plate works with the polarizing beam splitter to change the beam ratio. And we also implemented a power meter to measure the intensity of light. And after arranging a setup, the software judges whether the setup is correct or not. We have implemented this judgment for Denisyuk and Fresnel holograms, and also for transfer holograms. And here it shows how we can manipulate light. On the top right, we can split light with the polarizing beam splitter. And of course, you can change the light direction with the mirror. And you can diverge the light with the objective lens, and also collimate light with the collimator lens. And as you know, with the half-wave plate you can change the direction of the polarization, and with the polarizing beam splitter you can then adjust the beam ratio of the two beams. But in this software, the half-wave plate is computer graphics, so you cannot touch it, and it means you cannot adjust the angle of the half-wave plate directly. So we use another marker, which works like a dial or adjuster. So you can adjust the angle of the half-wave plate, and then you can change the polarization angle.
Then you can change the beam ratio. And here is an optical power meter. In this case, the user is changing the polarization angle, so the optical power of the transmitted light is continuously changing, and the screen shows the measured optical power. And here, also, we can do the same thing with a polarizer. OK, I will show you a quick demo. I'm not sure it works. OK. First I need to find the laser. The auto focus doesn't work correctly. Let me try. Oh, no. It must be the laser. Oh, but it shows the power meter. Oh, power meter. Oh, multi-functional. OK, now it's the laser. And oh. Yeah, this software is very delicate. OK, you can see it later. And I turn on the light. OK. And remember, it is totally safe, so I can aim my laser at you. It needs some training. OK, anyway. So let me make a setup for a Denisyuk hologram. First I need to find the components. Yeah, the object for the hologram. And the spatial filter. And. Yeah, yeah, yeah. Oh, some. No. Yes. Oops. OK. Yeah, it looks OK, but it doesn't work. OK, so my excuse is that the room is dark, so it doesn't work. OK. And here, if I put it correctly, then this objective lens diverges the light. And here is a plate. And here is the object for the hologram. Then. Oh. Oops. OK. And actually, it has a judging function, so the program finds whether my setup is correct or not. And of course, you need a camera holder, because I can only use a single hand; it's not good. Anyway. And also, the good thing with this software is that it has a so-called guided mode. Or say, you can cheat. Because, for example, if you want to make a setup for a Denisyuk hologram and you are a beginner with no idea how to arrange it, of course you would need a textbook. But this software has a cheating mode. If you set the cheating mode, there will be an image of the suggested alignment, so you can just put the components down as directed. And as I said, it has a judgment, so the software finds whether your arrangement is good or not. And if the arrangement is bad, then the program tells you which part is wrong. OK. And this is, as I told you, the guidance mode, or cheating mode, so you can easily find how to arrange your setup. And this is guidance for the transfer hologram. Yeah, a reflection type transferred from a transmission hologram. And after finishing the arrangement, you can click a switch and a window appears (sorry, this is Japanese) which shows the angle of the hologram and the object beam, and also the angle between the object beam and the reference beam. It also shows the optical path difference, the power, the exposure energy, and the beam ratio. So we also evaluated whether our tool is effective or not. We taught the Denisyuk hologram to beginners, actually our students, and we measured the setup time with real optical components. So we divided our students into two categories: one with augmented reality (AR) learning and one without AR learning. First, we explained the optical setup; one group used this tool, this software, and the other group just read a book. Then we brought the students to the lab, the students set up the optical components, and we measured the setup time and also whether the setup was correct or not. With AR learning there were nine students, the mean time to build the setup was less than three minutes, and all students set it up successfully. Without AR learning there were seven students in total, and two of the seven failed to set it up correctly.
And the setup time: the mean time is more than five minutes. It's quite different. So we believe our tool is very good for teaching hologram setup. OK, in conclusion, we are simulating light propagation for visual understanding; I think it's a very important part of our software. Adjusting light power is also possible. Judging the correctness of the setup, and learning the transfer setup, are also implemented. And we evaluated the effectiveness of AR learning, so beginners can learn the basics of optical setup by themselves. Thank you for your attention. Thank you, Hiroshi. Do we have any questions from the audience? Yes, please. Does the laser have a coherence length that you can put in with the program, and does the beam path difference check against the coherence length of the laser? Yeah, actually, you can; of course, this is software, so you can set the coherence length of the laser, and the software checks the path difference of the two beams. Did you write the software yourselves, or is it adapted from something that exists? Is it something you wrote, or did you adapt it from something else? Oh, you mean the AR software? AR? Did you modify it from something that exists? Oh, yeah, actually, we use AR Toolkit. It's a software library. So we applied AR Toolkit to the hologram setup. Yes. Is that software available somehow? Is the software available for download? Not yet. But if there are many requests, then we will consider it. Thank you, Hiroshi.
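Much of what the teaching tool simulates, such as adjusting the beam ratio with the half-wave plate and the polarizing beam splitter and reading the result on the virtual power meter, is straightforward polarization arithmetic. The sketch below illustrates that bookkeeping; it assumes ideal components and a vertically polarized laser, and it is not code from the actual software.

```python
import math

def after_half_wave_plate(pol_angle_deg, plate_angle_deg):
    """An ideal half-wave plate at angle a rotates a polarization at angle p to 2a - p."""
    return 2.0 * plate_angle_deg - pol_angle_deg

def pbs_split(power_mw, pol_angle_deg):
    """Ideal polarizing beam splitter: the vertical (0 deg) component goes into one arm,
    the horizontal (90 deg) component into the other (Malus's law for each arm)."""
    vertical_arm = power_mw * math.cos(math.radians(pol_angle_deg)) ** 2
    horizontal_arm = power_mw - vertical_arm
    return vertical_arm, horizontal_arm

if __name__ == "__main__":
    laser_mw, pol = 30.0, 0.0            # a vertically polarized 30 mW beam (assumed values)
    for plate in (0.0, 22.5, 45.0):      # turning the virtual "dial" marker
        v, h = pbs_split(laser_mw, after_half_wave_plate(pol, plate))
        print(f"HWP at {plate:4.1f} deg -> one arm {v:4.1f} mW, other arm {h:4.1f} mW")
```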
In the case of teaching optical system construction, it is difficult to prepare the optical components for every attending student. However, tangible learning is very important for mastering optical system construction. Developing an inexpensive system which provides this experiential learning helps the learner understand easily. Therefore, we propose a new education system for the construction of optical setups using augmented reality. Using augmented reality, the proposed system can simulate optical system construction under direct hand control. Also, this system only requires an inexpensive web camera, printed markers and a personal computer. Since this system does not require a darkroom or expensive optical equipment, the learner can study anytime and anywhere they want.
10.5446/21055 (DOI)
Good morning, ladies and gentlemen. My name is Milan Quieto and I am from the Czech Technical University in Prague. I would like to speak about a new recording material with silver halide nanoparticles for optical holography. A short outline at the beginning: first I would like to describe the mechanisms of mass transport in photopolymers and in silver halide emulsions. It is necessary for understanding the concept of the new material. Then I want to describe photopolymers with nanoparticles in general, which are currently used for production of special photonic structures. And then I will introduce the concept of our new material, give some of its characteristics and show some measurements. So at first I'll start with the photopolymer. It is a volume phase recording medium, and it is a self-developing medium. In our optical physics group in Prague we prepare our own photopolymers, such as acrylamide-based photopolymers or photopolymers with an epoxy matrix. We have also studied the processes which run during the exposure. The basic composition of our materials is monomers, an initiator and a polymer matrix. Before the exposure the components are homogeneously dispersed in the layer, and when we expose the material with a harmonic interference field we initiate a polymerization process in the bright areas of the interference field. Polymer chains grow and consume monomers from the surroundings, and additional monomers diffuse in from the dark areas of the interference field. So after the exposure a modulation of polymer density is formed in the material, and hence a refractive index grating. Mass transport also occurs in silver halide emulsions when the reorganization process is applied. In our university we use our own silver halide emulsions which are based on silver bromide grains. Their size is about 30 nanometers, and some time ago we studied the mechanism of formation of volume gratings in these media. So first I would like to show you the developing process and a scanning electron microscope image of the exposed and developed layer. I don't know if you can see that there are two types of grains. In the green areas there are smaller grains which are more elliptical than in the other areas; the smaller grains are the metallic silver after the development, and the rest, the circular grains, are the original nanoparticles of silver bromide. When the developed material is bleached to form the phase hologram, we can use several techniques for the bleaching process. The first two techniques, conventional and reverse bleaching, are, I would say, old-fashioned. Today mainly the reorganization process is used. I will not go into details about these processes; I will just show the scanning electron microscope images of the conventional bleaching process and of the reverse bleaching process. In both cases you can see some stripes in the medium, which means that some material was completely removed from the layer. Here you can see the modulation of the density of nanoparticles. I would like to speak a little more about the process of the reorganization bleach. When we bleach the emulsion we in fact convert the metallic silver back to silver bromide, but the process is a little bit complicated because there is diffusion of the silver bromide to the original silver bromide particles, which grow due to the affinity of the small grains to the bigger grains.
After the bleaching process, here you can see the layer, and in the exposed area all the silver halide material has been completely removed. In fact it is not removed, it is moved to the non-illuminated areas of the interference field. We did some more measurements with the scanning electron microscope, and before, I showed you an overexposed material. In the overexposed material the size of the grains increases very much, and if you compare it with this image, where the material was optimally exposed, you can see the difference between these grains and these grains. The reorganization process is very effective, but it causes the grains to grow, so we must carefully control the composition of the bleach, the exposure and so on. We also made other measurements; in this picture is an exposure of the material with different spatial periods. We wanted to show that the transport of mass also occurs over longer distances, as you can see. I will move on to photopolymers with nanoparticles, which have been a subject of intensive study in the last 10 years. It is again a volume recording material, and this material combines the properties of periodic diffraction structures with special properties of nanoparticles such as refractive index, absorption, luminescence and magnetic properties. In these materials you can make very nice photonic structures such as lasers with distributed feedback, holographic sensors and others. Their composition is very similar to standard photopolymers, but nanoparticles are embedded. The recording mechanism in the first part is also similar, but when the polymer chains are formed in the bright areas of the interference field, the nanoparticles are extruded from these bright areas to the non-illuminated areas. It is due to the incompatibility of polymers with nanoparticles. In this way the redistribution of nanoparticles is formed in photopolymer recording materials with nanoparticles. Now I would like to introduce our concept. The main idea is that we combine silver bromide nanoparticles in gelatin with a radical photopolymerization system. The silver bromide nanoparticles we can prepare by a standard method, and through the radical photopolymerization, which is inhomogeneous, we can produce the redistribution of the nanoparticles within the layer. The advantage of this material is that the silver bromide grains remain the same size after the exposure. It is different from the silver halide emulsion. We have designed such a chemical composition of the material: we use the silver bromide nanoparticles, a gelatin matrix, acrylic acid and dimethylacrylamide monomers, and the radicals are formed through a photoinitiator and co-initiator. The self-developing ability of photopolymers enables real-time detection of the formation process of a grating. We built such a setup for it. The beam from the laser is expanded through the spatial filter and collimated through the lens. One part of it goes directly through the material and the other part is reflected by the mirror, and in the place where both waves overlap, the sample of the photopolymer material is placed. When we switch on the exposure we can continuously measure its diffraction efficiency. We use in this case a collimated readout laser beam whose wavelength is different from the absorbing wavelengths. We built such a setup in our optical laboratories. It is a little bit more complicated than in the upper figure.
With the setup we are able to obtain the dependence of the refractive index modulation and of the diffraction efficiency on time. The diffraction efficiency is good for characterization of holograms or diffraction gratings, but if you want to characterize the recording process it's better to evaluate the parameter which is called the refractive index modulation. It is done with the well-known expression derived from Kogelnik's coupled wave theory. The dependence of the refractive index modulation on time we call the growth curve. The advantage of the growth curve is that it is thickness independent and angle independent. We use it for the characterization, and a typical growth curve is given in this figure. There are several phases. The first one is a short induction period. Then the grating starts to grow and goes through its inflection point. When we switch off the exposure there is some post-exposure growth. After the post-exposure growth there is a final period. In the final period usually the refractive index modulation remains stable, or it can decrease due to some degradation processes. The course of such a curve depends on many parameters: exposure parameters like intensity, exposure time, spatial period of the interference field and so on, and also on the chemical composition. For the photopolymer with nanoparticles I have measured several dependencies. In the first figure there is the influence of different recording intensities, and in the second there is the influence of different spatial periods. What we can say from these measurements is that it's better to expose the materials with higher recording intensity, and the material is not very good so far for short spatial periods. These effects are due to the polymerization system. We would like to optimize it to obtain some better results. However, we did some measurements with the scanning electron microscope. I hope you can see that there are fringes of the refractive index grating. I have to explain it: this is a piece of glass, this is the recording layer, and these stripes are the refractive index grating. If we magnify the layer, we can clearly see that there is the redistribution of nanoparticles. In this presentation I wanted to explain to you how to holographically move nanoparticles in the recording layer and how to prepare quite efficient diffraction structures. Thank you for your attention. Thank you so much for this presentation. I have a question. Is it the silver halide particles that you embed in the polymer that are 30 nanometers? Yes. Are there any other questions from the audience? I have two questions. First of all, have you had a chance to do a study of the diffraction efficiency as a function of spatial frequency of the grating? I would imagine that you might find that you get some optimal performance with different spatial frequencies. The second question is, have you studied the sensitivity, how sensitive it is and over what wavelengths it is sensitive? I will start with the second question. The sensitivity of the material is about 5 to 10 millijoules per square centimeter. It is designed for green wavelengths, but if you use a different photoinitiator, you can shift it to red or to blue wavelengths as well. It depends on the photoinitiator, and they are available for other wavelengths too. We studied the dependence on the spatial frequency. I think it is this graph.
You can see that there is a limit to the resolution: if the spatial period is about 300 nanometers, then the refractive index modulation is low. So far it is not suitable for reflection holograms, it is just for transmission. Surprisingly, it works quite well with this material.
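As mentioned in the talk, the growth curves are obtained by converting the measured diffraction efficiency into a refractive index modulation with the well-known expression from Kogelnik's coupled wave theory. A minimal sketch of that inversion is given below for a lossless, unslanted transmission grating; the layer thickness, readout wavelength and internal angle used in the example are assumptions for illustration, not the measured values from this work.

```python
import math

# Kogelnik (lossless, unslanted transmission grating):
#   eta = sin^2( pi * n1 * d / (lambda * cos(theta)) )
# so the refractive index modulation n1 can be recovered from a measured efficiency eta.

def index_modulation(eta, thickness_um, wavelength_nm, angle_deg):
    """Invert the Kogelnik efficiency expression for the refractive index modulation n1."""
    d = thickness_um * 1e-6
    lam = wavelength_nm * 1e-9
    return math.asin(math.sqrt(eta)) * lam * math.cos(math.radians(angle_deg)) / (math.pi * d)

if __name__ == "__main__":
    # Example: 40 % efficiency, 20 um layer, 633 nm readout, 15 deg inside the layer (assumed)
    print(f"n1 = {index_modulation(0.40, 20.0, 633.0, 15.0):.2e}")
```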
A new recording material with silver halide (AgBr) nanoparticles and a photopolymerization system will be presented. It is well known that redistribution of silver halide particles within a gelatin layer is the main cause of phase hologram formation in silver halide emulsions (SHE). Holograms recorded in SHE reach a high value of the refractive index modulation, as the difference between the refractive indexes of AgBr and gelatin is relatively high. However, the AgBr nanoparticles in SHE may increase in size during the wet chemical developing and rehalogenization processes. So when the SHE hologram is reconstructed, it appears noisy, as the size of the scattering particles has grown. Recently, we have developed a new self-developing recording medium with AgBr nanoparticles. It is composed of a gelatin binder with AgBr nanoparticles to which a photopolymerization system is added. During the holographic exposure, the radical chain polymerization process is initiated in the bright regions and polymer chains grow. As a consequence of the local polymerization process, the nanoparticles are excluded from the polymerization regions and migrate to the surroundings, where the local refractive index grows. The final result of the nanoparticle distribution is the same as in the case of the processed SHE, but the AgBr nanoparticles in the gelatin layer preserve their original size as they do not undergo any chemical reactions. In our laboratory, the AgBr nanoparticles in gelatin sol are prepared by the method of chemical precipitation. Typically, the diameter of the particles is about 30 nm and they have a relatively narrow size distribution. The gelatin with nanoparticles is a basis for making both SHE and the photopolymer with nanoparticles, but different additives are used for the respective materials. We have studied the recording processes, with proper detection methods, which lead to the redistribution of AgBr particles within the recording layer. In the paper, we will give the main results of our findings, and some properties of the new self-developing photopolymer with nanoparticles will also be presented.
10.5446/21057 (DOI)
So I'll get the show on the road here real quickly. So this is the paper that I wrote for this presentation. Whoa. OK, right off the bat, it's going to fight. Oh, great. Oh, go away. F off and die. You should have a button like that on these damn things. Anyhow, once upon a time I worked at an embossing company, and there was a real need to figure out what was going on throughout the production cycle, and measures of relative brightness and stuff like that. So there was a debate about getting equipment, writing special custom software, and having a special machine dedicated to that, just to make sure that the ISO 9001 manila folder is well padded. But we kind of gave up on that. Nobody wanted to cut loose with the funds, et cetera. So I was thinking about that problem and how it applies to the digital age, and things are kind of maybe making it easier now to do that in real life. Let's take a look at some of the old ways. I hope this is the laser pointer. Maybe it isn't. There it is. Oh, very good. A red one. Hey, there's blue beams there. So if you have simple gratings, two beams interfering, it's fairly easy to measure the diffraction efficiency of something like that by putting a beam through the finished grating. And then you can see the two diffracted orders, plus and minus one, and the zero order. So you could put your detector in these positions and figure out how much light has come through the grating and been diffracted. Sometimes some papers will have a listing of how the zero order is depleted, assuming that whatever is not in the zero order has moved into the diffracted orders. Lots of times, though, a lot of light is lost to scatter, so that's not that efficient a way of collecting all the light. But it is kind of rough, because there are different backward-going diffracted orders, et cetera. So for this paper, instead of using diffraction gratings, because that's not so much fun, I was going to look at pictorial holograms to see how good the materials are with regard to diffraction efficiency and signal-to-noise ratio. And just like we saw earlier in that previous slide about looking at the extinction of the zero order, here is a plate on the Sphere SGO3 emulsion distributed by Geola, recorded in the green and reconstructed in the green. And you can see the zero order depletion in the shadow of the light, this being the longest exposure and the brightest one, or maybe the noisiest one at the same time. The light, instead of going through the plate, is diffracted off as the image light. So this is one way of figuring out the diffraction efficiency of reflection holograms, except, of course, that it doesn't take into account the scatter. So I was going through some old papers in one of the books that Hans edited about holographic recording materials, way back when, in what, 1980 or so. These guys went ahead and tried to analyze the images of holograms, of reflection holograms, not just diffraction gratings. So they made holographic images of the traditional 1951 USAF target. And they would then photograph that, and then look at the photographic negative by measuring the density in different parts of the negative using a densitometer like that. So they'd have an actual piece of film and measure how dark or how dense the bright areas got compared to how clear the dim areas of the negative bars were. And then they could get some realistic measurements of, well, their problem was finding the best developer constituents.
So that is kind of tedious, because you have to take the pictures and make sure it's done in a controlled manner. You also have to know the D log E curve of the film, because they would get a density number and there would be a corresponding exposure intensity. So if they had two densities, they could see what was the relative brightness between two spots on the negative, which corresponds to relative brightness in the hologram. Some of this stuff that I mentioned, the D log E curves, f-stops and stuff, is kind of photographic. Usually in photography things work with the stop system because of the way our eyes work, where there is a nonlinear response: our eyes are looking at the brightness ratios of a typical gray scale here, where there would be numbers assigned, but because of the way the eye works with ratios, they use logarithms. But we won't get into that, because now we're looking at the digital age. So thanks to the school I work at, Harrington College of Design, where I teach in the digital photography department (I am the math science department), I got the chance to use this $8,000 Nikon D3X, checked out from our cage. So instead of having developed film, I would use this nice big full-frame sensor. And for the photographing, I also got to check out this wonderful Nikon close-up lens. And you can see on its focusing scale it has not only distances in meters and in feet, but it also has the reproduction ratio. And I consistently set the reproduction ratio to 1 to 3, where the image is one third life size on the sensor. Because I have on my website over 200 different exposures evaluated of a variety of contemporary holographic materials, and they were all photographed in the same way. So if I ever need to go ahead and do stuff as crazy and insane as this, and get numbers out of them, I can. So here is the setup, the basic setup. I had to substitute a film Nikon in the place where the digital Nikon would go, because I took this picture with the digital Nikon. So I have a light source over there, at a certain distance. And then we can see some of the tools of the trade that are used in my standardized setup here to get the reconstruction when I'm replaying these holograms with white light. I put this piece of cardboard, which is the same size as the holographic plate, in the object position on the table, and make a mark where the shadow is cast by the gnomon on this little sundial. Then, when I bring this into the photographing area, I play around with the light until I get the same distance, the same size of the shadow cast. Over here is a thing called the Kodak 18% neutral gray card, which is also part of the calibration setup, as we will see shortly. So here is a kind of a pathetic looking hologram, but there it is in the setup. Besides the gray card, I have this black velvet. And so when the gray card is properly exposed, this black velvet goes almost completely black. And in some of the samples, you'll see some of the dirt and dust and other defects on the black velvet. Also, to make sure that I don't get extra light in the edge of the plate, I have a little blocker here so that it casts a shadow on the edge of the plate. Sometimes I forgot to do that; you might notice that in the pictures. And it's sort of a suspenders and belt approach to this. Well, of course, if we're going to be ISO 9001 certified, you have to have references.
So the Nikon, when it's looking at a gray card filling up its full format, is going to calibrate itself to give the proper exposure when it looks at the gray card. This incident light meter, which I bought in college and never had to put batteries in because it's got a selenium cell in it, was also used to measure the light intensity at the holographic reconstruction plane. And basically, the camera and the light meter agreed. So I had something that was calibrated, or standardized, there. So, as in the paper that I mentioned, after the photographic equipment you need a decent image-processing kind of a computer, which most are capable of. And in this slide here, I'm showing something that was a little bit of a problem, and that was the incredibly large file sizes I was dealing with, because with the raw or Nikon electronic file, full sensor chip, I end up with like a 26 megabyte image file. The camera simultaneously takes a JPEG, which has all kinds of compression, which gives us all kinds of goofball artifacts that may be giving you erroneous readings, although those files are rather small, about one third of the raw file. But the most accurate, although also the most clunky, of these files is the tagged image file format, the TIFF. And notice it expanded to three times the size of the raw file, which has some sort of compression thrown into it. But this is the most accurate way. If you ever send something to print, it's what they want, because for every pixel there is a very accurate value and no tinkering around with compression. So here is a view of Photoshop, which is pretty ubiquitous; any other kind of paint or bitmap type of editing program could be used. Here I have a whole bunch of different photopolymers up here. The tool that is used has been around since day one in Photoshop, and it's this little tool called the eyedropper. So it's always been there. Apple-I or Control-I is the shortcut to the eyedropper tool. Here is another kind of program that you may want, especially if you do something as insane as me, and that is Adobe Lightroom, which is a digital photo database. So here you can see a few of the hundreds of exposures I've made. And the beauty of Lightroom is that it can take all of these raw files and then export them as the TIFFs, which is what we're going to use to do the evaluation. It can do a bunch of them simultaneously, like it's set up to do 67 at the moment there. So you can take the raw files and then go ahead and save them as these TIFFs, the most accurate of those file formats we were looking at. Another great thing about Adobe Lightroom is its keywording feature. And here you can see some of the keywords that I throw in for each one of these photographic exposures of the holographic exposures. So Lightroom and Photoshop can work as a team. Then, instead of making gratings, which are always fun to have, rainbow-projecting devices, et cetera, sometimes you need to look at objects. So I have a standard object that I've been using for, I don't know, the last 30 years almost. It is a waffle iron painted with Krylon number 1401 bright silver. And I use that because it's pretty reflective, and you can get really good interference fringes, because the pigment in Krylon bright silver is aluminum flakes. Sometimes you're real lucky with this standard object, because there's this pattern that's consistent throughout the object. So if I put four different exposures on one plate, they all have the same details.
There are three ball bearings that are holding the plate kinematically. The waffle iron is tilted like this, so the plate falls onto this bar here, which also prevents light from getting into the edge of the plate and ringing around inside, so we don't make a hologram with the inside of the plate. Your object moved? No, this is how good I am. I can put that back on there and get real-time fringes, hit the back of the waffle iron with a hot air gun, or my finger even, and you can see the fringes creeping. And no, anybody can do this. It just takes a little patience, and making sure that your hologram replays exactly the same way, at the same wavelength. So here is the usual kind of system I use when I first shake down a material. I make four different exposures, and they're doubling or halving each other, depending on which way you want to look at it. Because again, we have to go back to those gray scales, where there are ratios that revolve around doubling or halving, which photographers call stops. So you can see this is what you've got to do. You never make photographic prints; you're always making test strips. When it comes to holography, you've got to do similar things. The business about this Krylon number 1401 Bright Silver: it's a pretty amazing paint because it preserves polarization. Many people don't think that you can have a diffusely reflecting object that preserves polarization. But in this configuration here, of the two lasers, the helium neon is vertically polarized and the little Compass is vertically polarized. But then they're introduced to each other inside of a polarizing beam splitting cube. So the cleave is in this plane. So the helium neon vertical polarization is reflected, so the helium neon beam is vertically polarized going downward. In the case of the green beam, which is also vertically polarized, it hits the cleave and bounces off; the vertical polarization is here. The horizontal polarization of the HeNe passes through the cleave, while the horizontally polarized light, if I have a half-wave plate in there, which I do, comes through the beam cube here. So the two beams have orthogonal polarization for the two different wavelengths. So what we'll see here is the two different polarizations are combined and hitting the object on the left. And you can see kind of a greenish yellow, because the green was a little bit brighter than the red beam. But in the middle picture, I put a polarizing filter that was oriented to extinguish the green, so you see the red image, but the white cardboard in the background does show the green and the yellow. And here I've extinguished the red light by taking my polarizing filter and putting it at right angles to the red beam. So the green passes through, and again, there's sort of the yellow in the background from the diffusely reflecting white cardboard. So in Jolie and Van Horbeek's case, they had this D log E curve, density versus exposure. So I had to generate something like that for my digital camera. Photoshop, instead of using density numbers, uses these values based on an 8-bit system, 256 numbers. Hey, it only goes to 255? Well, 0 is counted, so it's 0 to 255; there's actually 256 numbers. And then they break this down into red, green, and blue components, because that's the way our eyes work. So what I needed, because I can't find anything about what goes on inside of digital cameras, was to generate my own exposure input versus brightness output curve.
So I used the Kodak 18% gray card and photographed it with the camera. And I exposed and overexposed the gray card sitting next to another piece of black velvet and a white piece of paper. So there are many other exposures on either side of this chart on the screen here. But I started where the gray card was so overexposed that it photographed as the white paper, and on this end, the gray card photographed as dark as the black velvet. So then, taking those images, I would go to exactly the same place on each image and use the eyedropper tool. Though the point sample is the usual choice, I used the 5-by-5 average, because thanks to speckle, dirt on the objects, and other irregularities, it's best to have a larger area to sample from and get the average. So I took the readings at exactly the same point on each of those gray cards. The way the eyedropper info pops up is that you have the red, green, and blue values. Right here, I'm using a variation of the eyedropper tool called the color sampler, where you can have four different readings posted at the same time. Also in the eyedropper info are x and y. And this is how you can, with some degree of accuracy, always be sampling exactly the same spot in the photographs, because you can look at the x and y coordinates. When I did the screenshot, that stuff disappears, but it is in there, so you can be rather consistent about what point you get. So here is the table. The exposure numbers: I don't have what f-stop and shutter speed I used. This is the exposure at threshold, where red, green, and blue of the gray card are all at zero, just like the black velvet, all the way up over 21 different exposures to where the gray card saturated at the brightest value. Each one of these steps is not one full stop, as it's called, where the light intensity at the film plane is doubled; it's by third stops. So this is a certain exposure, this is one third of a stop brighter, two thirds of a stop brighter, three thirds or one stop brighter, one and two thirds, et cetera. So this camera only has a range, from threshold to saturation, of only, what, seven stops, which is much less than photographic film. But it still looks pretty good. So I made my own D log E curve, which doesn't look too different than a film D log E curve, because there is a little bit of a toe, where it's non-linear, a straight line, and then a shoulder, where it starts tapering off as it gets toward saturation. So, am I at 9, 10 seconds left only? Or is this thing screwy? All right. So here is using this tool. This is four different exposures on one plate. And I have my color sampler here, where the spots I sampled are pretty much the same little divot on the waffle iron. And what we can see here, OK, let me just go back to one thing. What I did then is each of these exposures is one third of a stop apart. And then we can see that when we're in the linear part, more or less, there's a change of 19 of these Photoshop numbers as the exposure is increased by a third of a stop. And when I go a full stop, there are 57 of these Photoshop value points. That represents one whole stop, a doubling or a halving of intensity. So I made the D log E curve. So now when we look at this, I look at these four different exposures. Each one is twice as much as the other. So when I go from one of these to the next one, I look at the different readings on the color samplers.
And I can see how much brighter each one of these is by subtracting these readings, one from the other, and then converting that to how much brighter it is in stops. You could also work in percentages, but I'm more used to using stops. Here is a signal-to-noise ratio reading. This is a Gentet Ultimate U08 blue-green plate. And this was an 800 microjoules per square centimeter exposure; this is 1600. When you read his instructions, he always says look for when the plate is like a blood red. And then when it goes black, like this one did, when I could see kind of a black, silvery look in the developer, that's definitely overexposed. So I looked at four different spots on these plates, the shadow areas for the noise. And you can see there definitely is a bit more noise here in the shadow cast by the ball bearing. And then the center part here is where I was getting the signal. Actually, this is not a signal, it's signal plus noise. So I have to subtract what's going on in the shadow area from what's in the center part. So here are the different numbers that I get from the eyedropper tool. And then the total luminosity of the noise is the sum of the RGB readings; for the 800 exposure it was those numbers, and for the other one, it was those numbers. The ratio of these two sums is almost two. But the relative brightness is found by subtracting these numbers. So that 82 is the difference there, and this part is linear. But then to change it into stops, I have to divide by 57. So that comes out to 1.44: 1.44 stops brighter. And then if you want to find out what it is in normal numbers, not stops, you have to take 2 to the 1.44 power, which is how the stop system is derived. And then we find out that this is 2.7 times brighter there. So this is the other one. So we figure out what the signal is from these Photoshop readings. Is that what we're doing here? Maybe you guys can read it easier than I do. But this is giving us the difference here of how much noisier one is than the other. So it's obvious, you can see this, but now you can get some numbers out of it. Here is another little test that I did, bandwidth and shrinkage. So this was something with this Sphere SGO3. Dr. Tung Jeong, TJ, was always talking about how you can skip the formaldehyde step on the Soviet emulsions by simply dunking the thing in cold water for about a half minute or a minute before processing it at low temperatures. Hans, I'm sorry. Well, we know we did this. More fun with Hans and Ed on my website, with the whole story about this escapade. So Hans was always saying to use the formaldehyde and then use CW-C2 as the developer. But you can see there's definitely a difference between the two. These are all three exactly the same exposure. This one is the same exposure as the others, but it does not have the formaldehyde pre-hardening step; same developer, developed simultaneously with this one. You can see that the formaldehyde, or some other gel hardening step, is important with these kinds of materials. So with this one, bandwidth and shrinkage, when we look at, no, great, something screwed up on this one. It's always my luck. Anyhow, oh, I know what's missing, the picture that's missing here. What's missing here is, holy cow, something is, oh, OK. What I did is I photographed the gray card under laser light on the waffle iron, and then there should be a red-equals-zero reading, oh no, this is three different exposures.
And what happened is that the ratio of the green to the red is usually twice as much. And you say, wait a minute, you just used green light to make this exposure. But the way the photo materials and our eyes work in discriminating different colors is by comparing different red, green and blue values, because if it was all maxed out at green, then you wouldn't be able to discriminate between different shades of green. So what it looks like is that, for 532, the camera sees 532 green as a combination of twice as much green as blue. So we can get some different kinds of readings: we can see how true to the original color these holograms are by looking at the ratio of the red to the green. Of course, this one has zero, so that's pretty true to the color. This one, the pre-hardened one, has a little bit of blue coming through, some noise or something, or some emulsion shrinkage or chirp. And this one, certainly, you can see that the numbers here show that it has definitely shifted to the blue. Then here is the last bunch of stuff. This is absolute efficiency. So here, at 458, are samples of PFG-03C that Jesus Lopez told me was the very best batch of PFG-03 he ever used. So here is the actual object itself under blue 458 light. And you can see, again, there is this goofy thing where there is a green component and a blue component; it looks like the blue is about 50% more than the green component. So here is an exposure of 12.8 millijoules per square centimeter, and 25.6 millijoules per square centimeter. And you can see there is some difference, especially in the dark areas, the shadow areas. So when we look at these numbers here, putting the eyedropper on the crosspiece here, I get these readings for the actual object, and for the reconstruction of this exposure we get certain numbers. And then when we look at the longest exposure, we see that, hey, it's 100% efficient; the green and the blue values are the same. But that's not true, because this is taking a signal-plus-noise reading. So I had to go back with the eyedropper and look at what the readings are in the shadow areas here. So I'm going to have to take those readings and subtract them from the signal-plus-noise readings to get the actual signal readings. And we can see then, with the object here, it's got a green component of 150 and a blue component of 254. We subtract 254 minus 213 and we get about 41 there. OK, don't forget, that's the science department in an art school. So when I divide 41 by the magic number of 57, I get seven tenths of a stop, which is about 1.6 if you're going upwards, or 60% going the other way. So this hologram, using this technique, is shown to be 60% efficient. So there's my website, National Losers University, I mean, National Louis University, technology and education. I just finished, after 35 years of teaching, my master of education. And this is my portfolio and my holographic website, which was part of the grad project. Then this is my phone number. This is my email. Don't email EdWesley at gmail.com, because you'll get my son and then who knows what you'll get back. And also notice my last name is spelled with L-Y, without the L-E-Y. So any questions? Thank you. Thank you. Yes. Thank you for these materials. What are the materials you would say are the best?
The very best materials, I'll have to say, are: Gentet's U08 for the blue-green (I didn't try any of his red or panchromatic), tied with the Sphere SGO3, because when I had some of them side by side, they were doing pretty much the same thing. And you can see them visually. So I have samples of these things if you want to take a look at them. I didn't do any readings. One question here. Yeah, Ed, can this technique be used to measure the brightness of holographers at the party in the dorms tonight after 8? I don't know. At the worst floor of the dorms, you'll find it after the videos, from 7 to question mark. All right, thank you.
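The arithmetic used throughout the talk (subtracting a shadow-area reading from a signal-plus-noise reading, dividing a value difference by the camera's roughly 57 Photoshop points per stop, and raising 2 to that power) is easy to script. Here is a minimal sketch; the sample RGB readings are hypothetical stand-ins, not the actual measurements from the talk.

```python
VALUES_PER_STOP = 57.0  # from the talk's gray-card calibration of this particular camera

def stops_between(value_a, value_b):
    """Relative brightness in stops between two eyedropper readings (linear region assumed)."""
    return (value_b - value_a) / VALUES_PER_STOP

def ratio_from_stops(stops):
    """Convert a stop difference into a plain brightness ratio."""
    return 2.0 ** stops

def signal(signal_plus_noise_rgb, noise_rgb):
    """Channel-wise noise subtraction, as done with the shadow-area readings."""
    return tuple(s - n for s, n in zip(signal_plus_noise_rgb, noise_rgb))

if __name__ == "__main__":
    # Hypothetical eyedropper readings, not the talk's actual numbers.
    print("signal RGB:", signal((0, 213, 180), (0, 41, 30)))
    # Two exposures whose green readings differ by 82 Photoshop points:
    s = stops_between(131, 213)
    print(f"{s:.2f} stops -> {ratio_from_stops(s):.1f}x brighter")
```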
Measurement of the characteristics of image holograms with regard to diffraction efficiency and signal-to-noise ratio is demonstrated, using readily available digital cameras and image-editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.
10.5446/21061 (DOI)
Oh, hello everyone. I'm going to read my paper to keep in time. This paper discusses the conceptual underpinnings, working processes and the tools used for preparing the scene files of a holographic artwork for exhibition at this ISDH, which offers a subjective viewpoint on the idea of homeland. The artwork Homeland is an optically formed digital hologram which is contextualized by the holographic maps used in situational awareness, and it indicates its subjectivity by strongly referencing the human body, particularly the lines of the palm of the hand. I have to read the caption, which has been cut off at the bottom of this image: Zebra Imaging Liaison Officer briefs soldiers on the capabilities of tactical digital holograms. Soldiers deployed to Iraq and Afghanistan have been using TDH since the inception of the tactical battlefield visualization program in 2006. Copyright Zebra Imaging Inc., do not use without written permission. As background, the military applications of technologies have long been a part of the discourse of art practice engaged with these technologies. But until recently such issues of military agency were not applicable to the field of holography. Now it's widely known that holograms are playing a role in homeland security. A recent article in the Army Times states: this three-dimensional representation of complex urban and non-urban terrain, subterranean bunkers and other combat and non-combat environments aids the warfighter by providing high-resolution 3D battlefield infrastructure intelligence. This is critical for planning and executing combat and non-combat operations. According to Lynne Schno, the Army Intelligence Chief Information Officer and Director of the Intelligence Community's Information Management Directorate, so far about 12,000 holographic images have been sent to soldiers in Iraq and Afghanistan. Any unit preparing to deploy can ask for one to be custom made for its area of operations. Run by the Army Deputy Chief of Staff for Intelligence, the tactical battlefield visualization program provides soldiers with these holograms, made by Zebra Imaging in Austin, Texas. Conceptual underpinning. Dora Apel's article, Technologies of War, Media and Dissent in the Post-9/11 Work of Krzysztof Wodiczko (2008), provides a view of some of Krzysztof Wodiczko's artistic works which engage with the idea of homeland, and I quote: in addition to its formal intertwining with media technology, the multiple environments of Wodiczko's installation demonstrate the infiltration of war technology through the trope of surveillance. Conceptually, the project addresses the effects of contemporary war culture and the incursion of the heightened power of the state into every kind of domestic or homeland space, creating a perpetual state of hyper-vigilance in which the homeland is always ready, mobilized for war. If military technology can be domesticated, the domestic also becomes militarized. These ideas about homeland provide a framing context for the overarching experience of viewing Krzysztof Wodiczko's work, which is in fact one of deep empathy and compassion for all people. Such an experience of empathy and care is also the objective of my Homeland art project. The Homeland art project puts on display in a museum or art gallery the way of looking associated with holographic tactical battle visualization, in order to propose homeland as a personal, subjective domain. This is accomplished by shifting the area of operations to the space and time of human life.
The homeland art project image was made using the same technology and is displayed in the gallery with the purpose-designed rugged grasshopper lighting stand with rotating turntable used by the military to closely examine holographic images. As the Army Times article states, using these holograms soldiers can improve their understanding, retention and situational awareness of the areas of operations, leaders reporting they used holograms to better analyze, assess and determine different courses of action. Inviting this very keen awareness in the art gallery viewer, the project visualizes homeland as the terrain of traces of the hands of many humans. In a phenomenological sense this visualization suggests homeland to be the traces of life registered in the fabric of the body. In the words of the philosopher Maurice Merleau-Ponty, the thickness of the body, far from rivaling that of the world, is on the contrary the sole means I have to go into the heart of things, by making myself a world and by making them flesh. The process, in particular the pictorial agency of the work. The base ground of the homeland art project image is green and looks similar to the military terrain holograms, which are monochrome green. However there are significant differences. The terrain of the homeland art project is made in an entirely different way, from an aggregate of landforms which were formed by directly casting into the palms of several people's hands, who were asked to imagine they were holding a piece of light. These positive shapes with their complex organic ridges and valleys are the materialized form of an invisible place, a personal and subjective homeland which people carry with them every day. In contrast to the significant reduction in scale and resolution of terrain elements in the military holograms, sufficient palm casts were made to cover the 60 by 60 centimeter area to enable all of them to appear life-sized. Each silicone cast was photographed and made into a 3D form using the Agisoft PhotoScan software, and then these forms were assembled in Maya. The prominent spatially positive path of the lifeline is obvious on each of these casts. The lifeline is unique in the repertoire of lines encountered in the world inasmuch as it does not denote a physical edge or a boundary of a 3D form. Rather it is a body marking which is integral to the body. Its contours and length have a questionable but long established association with life prediction potential and mortality. Apart from the terrain itself, for clearer military pre-visualization two-dimensional lines drawn onto acetate are sometimes overlaid on the hologram to indicate approach paths. Obviously when drawing in two dimensions, certain types of notation for spatial representation can be used, such as creating a line to form the boundary of a space or using projection systems such as perspective. But of course once lines appear in three-dimensional space, these techniques are no longer appropriate. This is because as the viewer moves, the mobility of their line of sight makes available numerous possibilities of composition from the 3D line relationships. And as Emma Dexter writes in her introductory essay to Vitamin D: New Perspectives in Drawing, drawing exists at another level within the human psyche. It is a locus for signs by which we map the physical world, but it is in fact the preeminent sign of being. Therefore drawing is not a window on the world but a device for understanding our place in the universe.
Hand-drawn line is used in the homeland art project to trace the potent subject of the lifeline using the haptic Phantom interface and Holoshop software. Lines were drawn by feeling the paths of the lifelines on three-dimensional templates made from the palm casts. A major pictorial objective of Holoshop is the assignment of appropriate visual characteristics to gestures. The Phantom device is used to capture the gestures made by the artist. It is a haptic device which allows a user to input three-dimensional spatial movements using a pen-like grip. The grip is attached to a mechanical arm with six degrees of freedom, hence it is capable of recording all 3D movements of the user. The device is also capable of generating a synthetic force field based on the computer generated 3D graphics. Various types of synthetic force fields, such as a magnetic field, can be programmably generated based on 3D geometries. For artists to fully express their 3D spatial movement using this device, the Holoshop software needs to be able to capture various physical characteristics of those movements through adjusting the Phantom device's mechanical parameters. They are the angular and linear tolerance, force stiffness and damping parameters. These parameters will determine the physical response of the Phantom device when it interacts with 3D virtual objects. Those parameters also influence how much detail of the artist's movement is captured. Hence these parameters serve two purposes: firstly adjusting the level of detail of the artist's movement, and secondly adjusting the tactile feedback. While these parameters, which are directly associated with the Phantom device, play a certain role in determining the range of expression lines can convey, there are other factors which have significant impact on how the artist's movement is expressed, and they are firstly an encoding method for the speed of the artist's hand and secondly computational constraints applied to the movement of the Phantom device itself. In velocity mode, in order to faithfully capture the characteristics expressed in the form of the speed of the artist's hand, the Holoshop software has implemented methods to control the thickness or the width of the drawn line depending on the speed of the hand movement. This method transforms the velocity value of the device into the width of a line using a sigmoid function whose minimum and maximum values correspond to the minimum and maximum widths of the line. The normal velocity mode will apply a normal sigmoid function to the velocity so that the faster the speed of the movement, the thicker the line becomes. And we also have what we call reverse velocity mode, which allows the user to produce a thicker line when the speed is slow and a thinner line when the speed is fast. Magnetism mode is another factor which significantly influences how the artist can effectively work in the 3D environment, and it gives a provision for spatial constraint on the haptic device. Without any spatial constraints an artist will be able to move the device very freely in space to draw lines; however, if the lines need to be drawn with respect to other geometries in the same space, having the six degrees of freedom often actually provides too much free movement. This usually results in the user not being able to have total control in determining the location of the haptic device in the 3D space. So in order to reduce this inconvenience the Holoshop software exploits a virtual magnetic field to constrain the movement of the haptic device.
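Going back to the velocity and reverse-velocity modes described above, the following is a minimal sketch, in Python rather than the actual Holoshop code, of how a sigmoid can map hand speed onto line width; the parameter names, units and default values are illustrative assumptions, not Holoshop's.

```python
import math

def stroke_width(speed, w_min=0.5, w_max=4.0, v_mid=50.0, steepness=0.1, reverse=False):
    """Map hand speed (e.g. in mm/s) to a line width via a sigmoid.

    Normal velocity mode: the faster the movement, the thicker the line.
    Reverse velocity mode: the faster the movement, the thinner the line.
    w_min, w_max, v_mid and steepness are illustrative values only.
    """
    # Logistic function rising smoothly from 0 to 1 around v_mid.
    s = 1.0 / (1.0 + math.exp(-steepness * (speed - v_mid)))
    if reverse:
        s = 1.0 - s  # reverse velocity mode flips the mapping
    return w_min + (w_max - w_min) * s

# Sample a few speeds in both modes to see the two behaviours.
for v in (5, 25, 50, 100, 200):
    print(v, round(stroke_width(v), 2), round(stroke_width(v, reverse=True), 2))
```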
When this virtual magnetic field is turned on, the space where the tip of the haptic grip can move is restricted to nearby surfaces, and this function is particularly useful if the movements of the artist's hand need to be guided by the underlying geometry template. With the provision of these parameter controls and the different line drawing modes to capture the spatial movement of the haptic device, the changes of momentum caused by the topography of the contours and the speed of the gesture were registered in the varying widths and orientations of lines. The lines in the homeland hologram are flat, red and ribbon-like, referring to the approach path planning lines, yet they strongly evidence the inflections from the change in direction and speed familiar from direct handmade marks. The red colour of the lifelines is symbolic of the luminous presence of life. The lines were translated at varying heights above the lifeline templates from which they were formed, and when translated and viewed directly from above it's possible to superimpose only one line at a time with its lifeline. As the viewer's line of sight changes, the amount and type of occlusion between lines at different levels obscure or open sections of the terrain. The depth of tactical battle visualisation holograms is intentionally restricted to enable the subject matter to appear sharp. Generally, lines are superimposed in 2D on the hologram picture plane. The red 3D contoured lines of the homeland art project are translated at various distances from the holographic plate, so they range from sharp to blurred. This is similar to the way in which the artist Jim Campbell has used low resolution in his work. In this work the sliding scale of blur in the homeland art project acts as a metaphor for the distance of its objectification to be gradually dissolved into intangible softness and closeness. Just for one second I will show you this test — those up the back can see it too. These are three lifelines superimposed over three tests. Is that better? You can see the blur is still quite large. So you can kind of see that you get a range of effects with the lines dissolving out. Okay, I'll put it down and you can have a closer look at it later on. So as the red homeland lines become less opaque, the green background shows through, reminiscent of green underpainting lending vibrancy to painted flesh. These dissolving, intertwining lines conflate the rational purpose of line as intention with the predictive association of the lifeline and destiny, and invite the viewer to reach out and touch. As Deanna Petherbridge notes in The Primacy of Drawing: Histories and Theories of Practice, the linear paths that the spectator-interpreter directly perceives or infers in the drawing constitute cognitive mapping. And such readings are inseparable from the affective response to the gestural trace of the hand and the echo of the body, as well as the expressivity of the topic or subject matter of the drawing or its absence. The difference between these readings constitutes the surplus of the drawing within which meaning is constructed. And to conclude, the expressive potential of drawing is broadened by the availability of drawing tools which enable the freehand inscription of space with line.
Holoshop software, which works in conjunction with the haptic interface Phantom Omni to feel virtual contoured surfaces, provides a means of using forces to modulate line quality through velocity, damping and friction. Future work in Holoshop will focus on expanding the scope and understanding of haptic handmade three-dimensional drawing for holograms and other three-dimensional displays. The use of Holoshop software and Zscape technology developed by Zebra Imaging enabled a subjective and personal visualisation of homeland which unites people in an awareness of the mystery and precious nature of life. I would like to thank all the people involved. Do we have questions for Paula? Paula, how accessible is the haptic system and how long does it take to get some comfort with it? I can't imagine that the drawing is easily learned — how did you feel about it? Well, Sally, thank you for that question. It's my wish that it will be very simple to draw for people who have had no experience whatsoever, that it will be incredibly intuitive. And I'm happy to say that Masa Takatsuka's son, five-year-old Charlie, came into the lab the week before I came over and I asked him if he wanted to draw. We only have five colours, and we have a line that gets wider or narrower or a straight line, and Charlie was actually able to grab the pen and draw little monsters with arms, and I was altering the linear tolerance so he could do details with eyes. But I think by the time the project ends, which is another three years, I hope that we will have some really great tools for everybody to be able to create content.
This paper discusses the conceptual underpinnings, working processes and the tools used for preparing the scene files of a holographic art work which offers a subjective view point on the idea of homeland. The art work, Homeland, an optically formed fringe digital hologram, which is contextualized by the holographic maps used in situational awareness, indicates its subjectivity by strongly referencing the human body, particularly the lines of the palm of the hand.
10.5446/21062 (DOI)
Okay. So, this is a long title for my presentation. This is part of my reflection, my studies for a PhD that I'm doing now at the University of Quebec in Montreal in art and creation. So it's partly creation and partly theoretical, and maybe I don't think I'm going to say something so new, but I try to find new ways of helping towards a better perception of what holography can bring to the art milieu, to the understanding of artistic creation in holography. And for this presentation, I want to speak of this so particular visual quality, the appearance and disappearance of the image that everybody knows, of course, and relate it to the question of the perceptual and cognitive processes. So I will make a small incursion into another field than mine, which is cognition, very briefly. So what could be said is, well, to summarize it maybe, we know the importance of the co-constitution of reality, the phenomenon that always emerges through the act of perception. And from that, I found something interesting reading a little bit about what Francisco Varela developed with his paradigm of enaction, the enactive approach to cognition. So I tried to make a kind of association of ideas, or a metaphor maybe, to help in speaking about what's happening when we look at and when we appreciate the aesthetics of holography. So first of all, once again, about holography and perception, something that we know already, but just to remind us: during the half century that has passed since the invention of holography, there is in the artistic and cultural community a kind of major misunderstanding which persists today as to what holograms can contribute on an aesthetic and conceptual level. And too often holography is seen today, again, as an extension of photography, which is an aesthetic determined by a completely different historical context, I think, in the arts and sciences, at a time when a fixed point of view and monocular perspective were quite sufficient to satisfy our need for veracity and reality. Today, of course, we know reality appears much more complex, as Richard Brooks said very recently in his presentation. But today there are many other forms of artistic expression besides holography which are interested in that complexity of the perception of the world. These artistic expressions try to explore, sometimes, perceptual questions; sometimes the practices of art try to promote this through interaction and immersion, for instance what we call the immersive practices of art. In holography, I think we have something which is related to those other practices of art. And I have no doubt that the aesthetic qualities of holography are among the clearest manifestations of this contemporary complexity, epistemological complexity. But it's true that few people, maybe apart from us, the artists and the scientists who are directly concerned, have seen holography as an extraordinary means for examining the unstable and transitory nature of our perceptual relationships with reality. That's what I want to insist on. We know it's still often associated with mere illusion, but if it is so, it's because we still have the task of profound reflection among art specialists, and also of a kind of education amongst the public. We have to do it. It's not done yet completely. So that's why we're all here, I know. Illusion has too direct a connection to a representational position.
And this is indeed the starting point, with the principle of this disjunction between the world and our knowledge of it. So we have to overcome that. And it's not new. I know, for instance, in '99 in Leonardo magazine, the German essayist Peter Zec already wrote an article which, among others, talked about that. He spoke of an aesthetic interest towards the phenomenal aspect of light in itself, which helps for a better understanding of holography's aesthetics. So nothing new in this area, but it's important to remind us. And what I see also is a kind of approach in holography which is very close to some artistic positions taken in the 70s already by artists like James Turrell, who at the time already paraphrased Marshall McLuhan's assertion: perception is the medium. So why? Maybe because we do not completely accept that this reality might be fleeting. So this is one reason, maybe, which may help us to understand why holography is so difficult to accept. This is one of the reasons, because there are other reasons, of course, for holography's acceptance in the art milieu. So popular ideas have deep roots. We know that. And we can see it even when an artist like James Turrell made a very large hologram, which John presented to us recently, and which has not been so easily accepted by the art critics. So we have to think that maybe there is another way to present holography; rather than only 3D photography, we should maybe say something else about the aesthetic value of the perceptual and cognitive nature of the holographic image. Yes, we know the holographic image is neither painting nor photography. It's not exactly a retinal persistence effect as in cinema, nor does it project images. Not exactly. It doesn't use an apparatus that imposes a speed at which the image is passed before our eyes. We know also that its process is completely different from 3D cinema. But there are not so many people who understand exactly what it is, and maybe journalists just go on to say that some spectral imagery, for instance, is a hologram. It's not. What we have to understand is that maybe one of the most important aspects of holography is the question of time. We speak a lot of space, but the time of the holographic image is the time of the observer. A few people said that already today. I want to insist on it once again. There is a complete temporal identity between the image seen and the act of seeing, which makes the holographic image greatly resemble a simulation of the phenomenon of visual perception itself, much more than it resembles a representation of the outside world, of course. This is indeed the restitution of an optical event which in our everyday lives remains imperceptible to us, to our senses. We can say that it's the visual recreation of a very special encounter. An encounter, we know, of the temporal gap produced between two beams of light at the moment of the recording. Right. This is also the encounter of something else: the encounter of our eyes, our gaze, and the light, the emergence of the light. Maybe, yes, we saw already about James Turrell, so I will pass more quickly. The time of the holographic image is the time of the observer. This is a temporal dynamic which is brought out from the physical encounter of two light beams, which brings something else, another physical encounter, the co-emergence, what I call the co-emergence of light and our gaze, which can only be experienced by each individual.
Beyond the hologram, the holographic image brings into play one's entire subjective comprehension. Subjectivity is very important to appreciate the aesthetics of holography. We have to think outside the pre-determined representational framework; even if we do figurative imagery, it's not a question of abstraction or figuration. It's just a question of insisting on focusing on the appearing phenomenon rather than on what is seen. The appearing phenomenon — which brought me to think in the in-between field of artistic display, holography, and the understanding of cognitive processes. It brought me to speak about this, the co-emergence of light and our gaze, or what Varela has called, in some way, and I summarize it of course, the concept of enaction. So the holographically experienced image is a becoming aware of the human visual process. For this, if I try to understand what enaction is, I refer to a philosophical concept, in German hervorbringen, to make emerge, maybe: making the objects of the world emerge and making us conscious of them. This concept appears, we know, in the work of Martin Heidegger and Maurice Merleau-Ponty in philosophy and has been revisited by the neuroscientist Francisco Varela in his research into cognition. I find that very interesting for us. Varela remarks that whether we are in the face of the real world or that of images, it's no longer a matter of working with the simple idea of preconceived things, ready to be grasped as is by our vision or our brain. Varela, like Merleau-Ponty, insists on the act of dynamic correlation and the co-constructive effect that is constantly being created between the seer and the seen, or between the thing to be known and the person who knows. So that's, very shortly, what brought Varela to the idea of the paradigm of enaction, and for him it was to bring out a new approach to cognitive processes. But I try to make a migration of this concept to my area of art practices. So what is it, in the neurosciences? Why do we call that co-emergence or enaction? Well, it is a neologism based on the word enact, and it's a concept which is derived from biology to rethink the definition of cognition, in order to introduce to cognition the point of view of human experience and individual temporality. So we are back to the question of subjectivity. We call that also embodied cognition. And enaction is this principle applied to vision: a co-emergence of the world and its image which, he says, is inextricably linked to the history of what is experienced, the same way that a previously inexistent path appears while walking. Or, if we want to use another metaphor used by Varela, embodied cognition is the middle path between the egg and the chicken, which as we know are correlative and define each other. We know that indeed there is not the egg or the chicken coming one before the other. So we are in the world, and the world is enacted in us at the same time that the world is outside of us. So understanding the act of seeing through co-emergence is thus a way to formulate a certain vision of the world and, of course, as a result, of our relationship with our knowledge of the world. There the gaze is always active, in dynamic correlation and even in complete identity with the optical emergence of the image. The image I see before me is the real space in which my body is located, and it emerges in me thanks to the interactions of both light and my vision. This is what happens in holography, of course.
So speaking like that is a metaphor. I know that it's not only in holography, it's not only an optical or perceptual fact, it's not only a poetic way of saying things. I think that it's a true state of making emerge that is manifested both in the act of perceiving a hologram and in theoretical reflection on what perceiving an image is. And for me — this is one of my older pieces, with the pseudoscopic and orthoscopic images on the sculpture — this question of the co-emergence of light and our gaze is one of the strongest aesthetic specificities of holography. And I try to use the idea of enaction, of the enaction of light, image and gaze, as I said, as a strong metaphor for understanding this. On the level of pure aesthetic value, the holographic visual experience in a certain way conforms optically and aesthetically, partially, to what Marcel Duchamp described a century ago with his famous phrase, the viewer completes the work of art. Of course, it's not only that, I know. And in some way, we can keep in mind that it's a general trend in contemporary art that we find in various artistic practices using installation and new media, interaction, for instance. But in all of these practices, we find an effort to make us experience the impression that we are in a world to be shared. We are in a world to be shared and not only to be represented. And in a way, in front of holograms, mainly large format holograms, of course, we share the space; and even if it's a figuration, even if it's not abstract, it's not only a question of representation. So from one point of view, the appearance and disappearance of the holographic image is even more emblematic than many artworks in interactive digital media for this question of the dynamic of cognitive processes, because the way in which it is formally manifested is as phenomenal as pure light, and thus it is closer to the visual cognitive process that we use normally. So there is no doubt that when we tend to forget that every image which is located at the back of the retina is on the threshold of appearance and disappearance, holography indeed openly reminds us of this. It reminds us of a certain precarity of what we see, of what we think is the reality of the world. It's true that its evanescence, the evanescence of the holographic image, tells us quite directly that the visibility of things is always vacillating, appearing and disappearing. It is thus a medium that reveals the entire importance of our subjectivity and mental processes for constructing the image and making it emerge. And this is in a certain way our human dimension as perceptual and knowing beings. So through the holographic perceptual experience, the physical and psychological qualities of light intermingle to make emerge in us the submerged part of our cognitive relation with reality. In that sense, holography never misleads us with respect to the precarious nature of our perception. It's not illusion. And this is done through a processual dynamic of shifting appearance and disappearance of images. Holography is not really a way to lie. So to choose holography as one's artistic medium is also a way to adhere to a dynamic ontological posture which casts in relief this processual dynamic as a postulate of our relationship with the world. The holographic image thus becomes an embodied revelation of the precarious status of any image.
Perhaps even Marcel Duchamp would have seen in it the imperceptible dimension of what he called the infrathin, an in-between conceptual category, because it's always ready to invert something or make it disappear. For Marcel Duchamp, the idea of the infrathin was what is found at the minimal threshold of perception, at the interface of two dimensions and at the boundary between the tangible and the mental. So I just want to finish with a quotation from Marcel Duchamp, a visionary artist, which might make us think a bit more: I simply thought of an idea of a projection of an invisible fourth dimension. In other words, that every three-dimensional object which we look at coldly is the projection of some four-dimensional thing which we do not know. So maybe perceptual and cognitive processes might be a fourth dimension or a fifth dimension. Indeed, another dimension that we must take into account strongly. Thank you. Thank you.
Strangely, light places us in contact with the things of the world even while keeping us at a great distance from them. It brings these things into our sight at the same time as our gaze gives us the impression that the world would not exist without it. The French philosopher Maurice Merleau-Ponty captured this dynamic with his idea of the intertwining of perceiver and perceived. Light is what links them. In the case of holographic images, not only is spatial and colour perception the pure product of light, but this light is always in the process of self-construction WITH our eyes, according to our movements and the point of view adopted. With respect to the visual regime of the work’s reception, holographic images vary greatly from those of cinema, photography and even every kind of digital 3D animation and are closer to the visual dynamic of sculpture or virtual reality. To a much greater extent than the persistence of vision found in cinema, this regime truly makes perceptually apparent the “co-emergence” of light and our gaze as we experience the former on a daily basis. But holography never misleads us with respect to the precarious nature of our perceptions. We have no illusion as to the limits of our empirical understanding of the perceived reality. But holography, like our knowledge of the visible, thus brings to light the phenomenon of reality’s “co-constitution” and contributes to a dynamic ontology of perceptual and cognitive processes. The cognitivist Francisco Varela defines this as the paradigm of enaction, which I will adapt and apply to the appearance/disappearance context of holographic images to bring out their affinities on a metaphorical level. For it turns out that these physical and felt qualities of “co-emergence” are of great interest to artists and the contemporary world.
10.5446/21064 (DOI)
Thank you very much. Color is generally three-dimensional. If you want to have a model that shows every color that you can see, however you look at it, it's a three-dimensional model. For example, this is RGB. So in the corner towards us you see white; that's 100% of red, green, and blue. If you go to the bottom right-hand corner, that's 100% of blue, but no green and no red. So this cube represents, in a sense, all the colors that we can see, but it's three-dimensional. This is another way of explaining it. This is looking at the colors as hue, saturation, lightness. There are many ways of describing color. This is the Munsell color system. Again, it's all three-dimensional. It's difficult to handle three-dimensional models, because you can't draw them, you can't project them, and you can't draw on them. So it's good to have two dimensions. What we need is a two-dimensional picture. So what we can do is get rid of one of the axes, if you like, and the axis that we get rid of, in the case of the 1931 CIE diagram, is the lightness. So what we see is everything at the same lightness, but the hue changes. So that's good enough. So you can imagine this is every color, except it can get darker and brighter. So what you see there is every color that the eye can distinguish, represented somehow on this diagram. The spectral colors, the pure laser colors or pure wavelengths, go from 400 nanometers, which you can see at the bottom, all the way around on the left, up, and then down again to 700. We won't go into how this came about, but that's it. It's generally better to use the 1976 diagram — the 1931 diagram is now, what, eighty years out of date, and the '76 diagram is only 35 years out of date, but that's the latest one. It gives a better representation of how the eye distinguishes between different colors. So basically in this diagram, if you mix, it's geometric. So if you have three colors represented by this triangle, anything that's within the triangle, any colors within the triangle, can be recreated by mixing those colors in the right proportion. You can use more colors — if you have four or five, then you get a polygon — but generally we'll just deal with triangles here. So what I want to show in the next few minutes is, firstly, that it is not necessary to cover the large area of this CIE diagram. In a lot of papers people try and get right to the edges, and it's assumed that we have to cover as much as possible because we want to show every color. I'll then show you why there are actually reasons not to cover the full area. Then I will argue that choosing the right wavelengths has absolutely nothing to do with holography. It's color science. And the problem was solved in 1971 by a guy called William Thornton at Westinghouse Labs. So there's the diagram, every color that the eye can distinguish. And say we take three wavelengths from the argon and the helium neon 633. We draw the triangle. You can see it's only covering, it looks like, half of that patch. So it doesn't look good, because we can't recreate all these colors which are below 476. So people have been trying to get as big as possible a triangle or polygon. However, let's look at what the colors are. The thing to remember is that these colors, all the colors you see, are colors that occur in nature.
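As a rough illustration of that geometric point, here is a small Python sketch that checks whether a target chromaticity falls inside the triangle spanned by three laser primaries by computing its barycentric (mixing) weights. The (x, y) chromaticity values are approximate, hand-picked figures for illustration only, not exact CIE table entries.

```python
def mix_weights(p1, p2, p3, target):
    """Barycentric weights of `target` with respect to the triangle p1-p2-p3
    in CIE xy chromaticity space. If all three weights are >= 0, the target
    lies inside the gamut triangle and can be matched by mixing the primaries."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    xt, yt = target
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (xt - x3) + (x3 - x2) * (yt - y3)) / det
    w2 = ((y3 - y1) * (xt - x3) + (x1 - x3) * (yt - y3)) / det
    return w1, w2, 1.0 - w1 - w2

# Approximate xy chromaticities (illustrative only):
blue_476  = (0.11, 0.10)   # argon 476 nm, roughly
green_514 = (0.03, 0.80)   # argon 514 nm, roughly
red_633   = (0.71, 0.29)   # helium-neon 633 nm, roughly
skin      = (0.38, 0.35)   # roughly where Caucasian skin sits under daylight

w = mix_weights(blue_476, green_514, red_633, skin)
print("weights:", [round(v, 2) for v in w], "in gamut:", all(v >= 0 for v in w))
```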
These are colors that the eye can see. A lot of them can only be created by mixing pure colors. So the colors of, say, this piece of wood or a red flower or the bluest thing that you find in nature are actually quite close to the center of this diagram. So what you can see is the green foliage, Caucasian skin, blue sky. These have been plotted. B, G, Y, R, these are the colors from the Macbeth color checker chart, which is a standard for colorimetry. They're quite saturated. But again, these fall right within the triangle. So we don't have to try further. And in fact, people have looked at all the colors. There's a guy called Wintringham, and he was followed by Pointer. They took everything that they could find, including the reddest plastics and the deepest colors, and they measured the colors and plotted them. And actually, within this irregular patch you can see, there's nothing that exists beyond these colors. So everything is there anyway. We really don't have to try to make this triangle much bigger. So you might say, well, come on. All right, well, let's choose a big triangle anyway, because then we are safe. Well, here's the reason why not. Luminous efficiency. The eye is sensitive to colors from around 400 to 700 nanometers; infrared and ultraviolet lie outside of that range. The sensitivity is not the same across the range. It becomes more sensitive around, say, 550 and less around 700. So for the same amount of energy, a wavelength of, say, 500 nanometers looks much brighter than, say, 700 nanometers. So what I've done now is take that chart we saw and plot that luminous efficiency along the curve. So what you can see is that at 450, let's say one watt of energy won't look that bright; 540 will look brighter; and then it goes down again. So if you have a triangle, supposing you find wavelengths right at the corners — in fact, in the deep red and deep blue — they become so insensitive that you won't see them anyway. So it's not as easy as saying, let's get the biggest triangle. So really, I think that the CIE chart is not that useful in choosing the wavelengths for holography. Let's look at how we see color. So let's forget that for a moment. When we look at an object, there has to be light, there has to be an object, and there has to be an eye. So we're illuminating the object, and we look at the light that's reflected. The light has something called the spectral power distribution. So each wavelength has a certain amount of energy. The object has a reflectivity from 400 to 700 nanometers. So the light is reflecting, and at each wavelength of the spectrum, you get another curve. That's what you see. And somehow, never mind, it's very complex, but the eye and brain will say this is a certain color. So the important thing is that you are looking at the full spectrum in the middle — that's the reflectivity. You're testing that spectrum from 400 to 700. Now, when we come to holography using, let's say 3, in this case 3 wavelengths, what's happening is you are only testing that spectrum in the middle at three points, you see. You've got three points within this whole spectrum. And what you are hoping is that that light, the amount of reflectivity, automatically mixes such that you still see a lemon as yellow and a lime as lime and an orange as orange. It seems a very difficult task, because of all that data, you've only got three bits of data, that's all.
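To put rough numbers on the luminous efficiency argument made above, here is a hedged sketch comparing a few candidate laser lines. The V(λ) values are rounded approximations of the CIE photopic curve quoted from memory, so treat them as indicative rather than authoritative.

```python
# Approximate CIE photopic luminous efficiency V(lambda), normalised to 1.0 at 555 nm.
# Values are rounded, indicative figures, not an authoritative table.
V_APPROX = {
    450: 0.04,   # deep blue
    476: 0.11,
    514: 0.59,
    526: 0.79,   # frequency-doubled Nd:YLF
    540: 0.95,
    555: 1.00,   # peak of photopic sensitivity
    610: 0.50,
    633: 0.24,   # helium-neon
    694: 0.007,  # ruby
}

POWER_W = 1.0  # one watt of optical power at each wavelength
for wl in (450, 526, 540, 610, 633, 694):
    lumens = 683.0 * V_APPROX[wl] * POWER_W  # 683 lm/W at the photopic peak
    print(f"{wl} nm: ~{lumens:.0f} lm per watt")
```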
So the problem is: if you switch the lights out and have three lasers falling on some objects, can you find three wavelengths that show you the object just as in real life? It's quite a complex problem. Color science is very complex, at least for me. I've been to color conferences. I've tried to work it out. It's one of those things: if you read about color for, say, a month, you think you know everything; another two months, ten, twenty years, and you realize you really haven't scratched the surface. So it is very complex. It's psychology. It's chemistry. It's biology. It's physics. So let's see if someone else has looked at this problem from the illumination point of view. As I say, William Thornton in '71 — nothing to do with holography — simply wanted to say: right, what is the best combination of wavelengths in a white light source for everyday use, supermarket, home, without looking at whether it was discrete or continuous. He looked at the color rendering index as a measure of how good a color source is, how close to a standard white it shows you. And he didn't mind if it was discrete, if it was three, four wavelengths, ten wavelengths. Anyway, he did a full analysis. And he found that actually there are three wavelengths, three parts of the spectrum, which are far more important than other parts. And those were 450, 540, and 610. He called these the prime wavelengths. And actually there are wavelengths in between which reduce the color rendering index. So if you have these three wavelengths and you add 480, it in fact, for whatever reason — which is not important here, but something to do with the dyes in the eye — actually makes the color rendering index worse. So what he found was that the best light, even for everyday use, for reading, for whatever, the best colors were these three colors, nothing else. So we are very lucky, I feel, with these three colors. These are the three colors I've put on the chart, 455. Again, it looks like it's a tiny portion of the triangle. But you see that bit in the middle, that's where all the colors are. There aren't any other colors. And you're very unlikely to have colors right on that edge; and if you do, yes, they're missing from the triangle, but they'll just be a little bit less saturated anyway. So it doesn't matter. So yeah, I feel that these three wavelengths, which I looked at a long, long time ago, and have looked at again, I think that's the answer. I think, really, we don't have to look any further. We should find three lasers as close as we can to these three, and make color holograms. Thank you. Thank you for the very interesting approach to this. And you are, of course, completely right that this has nothing to do with holography. But we have, of course, to consider that our wavelengths are extremely narrow band compared to Thornton's, because he was thinking of more broadband mixing. And as Peercy and Hesselink have shown, there are combinations of these three fundamental colors that, for two given objects, give exactly the same response in reflectivity — for example, a completely gray object and a colored object — so there is an under-sampling when you only have three very narrow band wavelengths. I just checked here with our findings, and it's very interesting that the computer program also came up more or less with this. It was 610, 545, and 466 in our computer simulation. So Thornton is correct. But the question is, three may not be sufficient if you want to reproduce an oil painting, for example.
Anyhow, that's my problem. Thank you. Thank you for this talk, it's wonderful. Can you give me an explanation? You mean that the printing industry, which is now starting to go from three to four or five colors to make art prints or something, is going the wrong way, that it's getting duller in the color? Your idea, your talk — how do you apply it to the printing industry? Well, printing is completely different in that it's subtractive. So they're completely different. We are talking about additive color, and subtractive color is a completely different game. Just as a curiosity, this article came out in Discover magazine this month. We're trichromats, having three cones, but about 12% of the female population has four cones. And they live among us and don't know that they're seeing more than us. We're seeing about a million colors, and they're seeing 100 million colors. They know they're seeing more than us. They're lucky. This is a little far afield, but have you discovered, as part of this research, any evolutionary reasons why our sensitivity is for those different wavelengths? I haven't — as I said, color science is extremely complex. People are looking at different parts of it. I've scratched the surface. I've gone to conferences and talked to people. I don't know is the answer. People are still debating, after all these years, why these prime colors are better. For me, it doesn't matter. All I know is that these are empirical results, and these are the only wavelengths we need. On some high-end modern televisions, they've thrown in a fourth color, a yellow, to increase the vibrance of that particular band. How does that work with the Thornton numbers? I don't know is the answer. The thing is, I don't know. That's the honest answer, and honesty always wins. Yeah. Thank you.
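As an aside to the under-sampling point raised in this discussion, the sketch below invents two quite different reflectance curves that happen to agree at 450, 540 and 610 nm: under three narrow laser lines they return identical reflected intensities and so cannot be told apart, while under a broadband source their overall reflectances differ. The spectra are made up purely for illustration and are not colorimetric data.

```python
import math

wavelengths = list(range(400, 701, 10))   # sample the visible band every 10 nm
lasers = (450, 540, 610)                  # Thornton's prime wavelengths

def smooth(wl):
    """A gently varying, made-up reflectance curve."""
    return 0.5 + 0.3 * math.sin((wl - 400) / 300 * math.pi)

def spiky(wl):
    """A differently shaped, made-up curve, pinned to agree near the laser lines."""
    for laser in lasers:
        if abs(wl - laser) <= 5:
            return smooth(laser)          # force agreement at the laser lines
    return 0.5 + 0.3 * math.cos((wl - 400) / 60 * math.pi)

# Under three narrow lines the two "objects" reflect identically (a metameric pair):
print("at laser lines:", [round(smooth(l), 3) for l in lasers],
      "vs", [round(spiky(l), 3) for l in lasers])

# Under broadband light their total reflectances differ, so they would look different:
print("broadband totals:", round(sum(smooth(w) for w in wavelengths), 1),
      "vs", round(sum(spiky(w) for w in wavelengths), 1))
```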
One of the holy grails in display holography is the production of natural color holographic images. Various sets of wavelengths for recording have been suggested, some favoring three wavelengths, some four, and even more. I will argue that the choice of recording wavelengths is completely independent of the holographic process; in fact it was solved once and for all by scientists working in general lighting in the 1970s. I will suggest an ideal set of wavelengths which will produce color rendition equal to or better than conventional photographic processes.
10.5446/21017 (DOI)
Okay, sorry about this. Normally Odile Meulien should give this talk, but she fell down and was injured, so I will do the talk instead. First of all I have to say I'm very happy and very proud to be invited, and I'm as well very happy that this kind of program is continuing. Let's see, do you see anything? Yeah, let's hope that everything else works now. Odile Meulien Öhlmann is a French-American collector and sociologist. She started doing research on new forms of artistic expression in Paris. Sorry about this, I'm a bit nervous. She created and managed the Museum of Holography of the Art, Science and Technology Institute, ASTI, in Washington DC for ten years. Her research and exhibitions launched digital holography with Syn4D in Germany. Now in Strasbourg, at the French-German border, she conducts research in holography while providing exhibitions and trainings. Here you see how she is acting in 3D in Belgium. What does it mean to learn with the holographic visual? Experience movement: the time is moving and you are moving. You are interacting with the image. You have access to one image or many images, or one image with multiple facets; the whole is the one. See with the full body: by moving you see not just with your eyes but you experience a space with your body. There is a full coordination between the outside and the inside, including all the sensory organs used to move and the brain to coordinate the whole. Time-space relationship continuum: moving around the image takes your time. This allows you to experience a time-space relationship, the continuum, that is related even to you. Multiple realities: having access to multiple information shows that the reality of a single object is never the same. The light, the reflection, the position — something will always change. All is a composite of the many changes, with the attributes you choose. Image of information: one sees something, has access to something, the other to something else. Altogether you can see much more. You remember maybe in Lake Forest we had one person coming up, and one part of the audience was describing what they saw, the other described something else because they saw the back of her. Visual sensation and perceptive representation result from the correlation between the information stored as invariant and the new visual information. Visual perception is nothing else than a sensory message resulting from the first stimulation of some cells in our retina, through the transportation and processing of the signal into the brain. There are experiments where sound, touch and visual information are experienced in different time frames, or reversed, producing some discomfort or dizziness. This would support the existence of a certain order or organization harmonizing the whole. The content, a composition of many parts, which is proposed in many 3D films, animations and visualizations, seems to be simplified and makes the viewer smile. And the holographic visual, with its whole interactive perceptual information, is closer to the memorized information about reality, except for matter. This creates a gap in our preconceived representation of reality and allows new possibilities of experimentation to be accessed. It develops the capacity to think 3D. Experience an image: when you want to describe it, you get problems. People do not follow — but look at children. Children capture the image much faster. Children are still learning with experimentation and especially with movement. For them, life.
If adults hesitate to be recorded and have problems in front of a camera, look at this boy, Philippe, 9 years old. He wanted to be holographed and prepared himself alone. The digitized hologram has been sold at the museum in Braunschweig; actually this one is the best-selling hologram of all. He has sold more than me. Here is the presentation by Naktur. Oh no, I'm already further on. Here is the exhibition in Niederbronn-les-Bains near Strasbourg. Look at how the children are moved in front of the infogram of a football player. We use holograms to train engineers with complex machines, but we use them also to train and re-educate a movement, at your own rhythm. This is the nice thing. Something about nature: just to explain, the photographer was recording the wood every day at the same time, and you see it through the wheelie here. This is the exhibition of holography at the Phaeno in Wolfsburg. There you see, this is a group of actresses who worked on it. I hope for tomorrow we can change this a bit. Some 3D images are still unsatisfying to our perception. Two or three or nine images, such as you use for, for example, a 3D monitor, are for our visual organs a reduction more than really an enhancement of the visual, except that they bring up the volume to our awareness. According to the subject, it can be useful, and each technique can be used as a specific pedagogic tool. 3D in general is very well accepted in all that is imaginary, like Avatar and comic games, as it presents a non-real, so-called virtual world which does not enter into conflict with the two-dimensional representation of reality we have learned and defined over 2000 years. Other media can support the training in 3D better than some 3D images, because they reproduce the structure of the holographic model. The best example is the web. You see again a nine-year-old child who used a magazine, took out a picture, put himself in and created a scene program. This one is red and white but it works quite nicely. New policies and new educational programs, but also pedagogic materials, are necessary. Still, the first who teach, who pass on the information to our children, are teachers. So they also need new training with other pedagogic methods and structures. When we see how many children have problems learning at school but learn better outside the traditional school system, we should really act fast. Finally, we need to set up pluridisciplinary research on how to teach 3D thinking — not just holograms, all kinds of 3D visualization. And what is the discussion about in forums about holography: that holographic projection is just 2D and not holograms. We need first to learn to see the difference between stereoscopic images. So, thinking 3D, we can imagine a different way of treating the visual effects of digitalization. The problem of thinking 3D lies in education and habits of thinking. Education in 3D and with 3D is a big responsibility. We are coming from a two-dimensional world of surfaces, related to a universe framed by the scientific determinism of Cartesian dualism of mind, to encounter three-dimensional representations of reality. This supposes a radical change in the way we conceive education and the need to have trained teachers. This can be done only with new educational policy and new research on the implications of passing to a new world. Now, does it work again? I think this is finished. Thank you. Sorry about the bit of trouble.
We had a bit of a problem with this presentation, but I do think that new three-dimensional education — this is what I was always putting into the discussion — should be not just holography. We need to think 3D, to imagine the difference between 3D media and for proper classification. Thank you. Questions for the speaker? How early do you think you should start with children? How young? I mean, we have made the experience — I hear it from some holographers like Rob Munday, and myself — you can teach them from five years old how to do a Denisyuk hologram. I mean, this is what TG said. If you make it a simple thing — I mean, my son used a Lego set-up, played to make his Lego figures, put them in the camera and made a hologram. It's not something complicated to do. He had a very strange reaction. When we went to the holography event in Hungary, we stopped at a restaurant. And while we were eating he went through the whole room there, climbed on a chair and touched a painting. It was not a painting, it was a tapestry. He was saying, look, this looks 3D. The next thing, it was in the Tate gallery. He was doing this. Five years old. The people all stopped by and said, why does a little child love art so much? Then he turned around: Papa, why does it not move? So you see, children are very intuitive and they react, and if you give them the possibility of experience — you should have a hologram for each school, because then all the problems you have with these things would not exist, because they can distinguish between media. Thank you.
Odile Meulien was born in France. After studying sociology and doing research on new forms of artistic expression with the art historian René Huyghe at IPPAC in Paris, she created and managed the Museum of Holography of the Art, Science and Technology Institute, ASTI, in Washington DC. She has published many papers on the holographic arts and perception. She collaborated in the launch of digital holography as CEO of Syn4D in Germany, and now conducts her doctoral research in anthropology and holography at the University of Strasbourg. Contact: om@artbridge.info – www.artbridge.info Today the industry offers a chain of 3D products. Learning to “read” and to “create in 3D” becomes an issue of education of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading in all the scientific, social, and artistic activities of our present time.
10.5446/21018 (DOI)
Thank you Seth, and greetings to all. Thank you for inviting me here today. My paper, as Seth said, is titled A Curious Conundrum: the State of Holographic Portraiture in the 21st Century. I'm not sure if this should have been in the art section or the commercial section. It's definitely about making money, and it's also art to me, though. So, for the last 14 years or so, I've had the distinct pleasure and honor of being a pulse holographer specializing in pulse portraiture. The practice of representing the physical and psychological likeness of an individual is as old as art itself. The principal methods of portrait making, that is painting, sculpture, and photography, have each been used successfully to immortalize one's essence of being. The mystical, magical, and spiritual nature of the portrait has long been associated and accepted within each of these methods. A relative newcomer, holography, with its unique ability to accurately record and reconstruct original wave fronts, particularly lends itself to this form of artistic endeavor. It is exactly this capability of capturing the wholeness of an individual's likeness that makes holographic portraiture so alluring and potentially attractive. So, here I want to define what a holographic portrait is. I am defining a hologram portrait as any type of portrait that uses the science of wave front recording and reconstruction classically known as holography, either partially or wholly, in its method of producing a three-dimensional image of an individual's likeness. For the bulk of my presentation, I will be discussing what I consider to be true hologram portraits. And that would be holograms that are made with the original image capture and subsequent reflection copy both generated by holographic means, that is, pulse lasers. I do want to make the distinction of what I feel is, you know, a misnomer, in the recent publicity of things like the Tupac hologram and CNN and those types of holograms; you know, that's classically a misuse of the terminology. So, we have what I consider to be the three major classifications of hologram portraits. We have the classic pulsed hologram. We also have the stereogram, where there is a photographic image capture which is subsequently transferred into a hologram, most usually as a rainbow transmission hologram. And then we have what is kind of a hybrid of that, and that is the newer full color digital holographic technology, the direct-write full color reflection holograms that are produced. Okay, so a short history — I'm sure everyone's familiar with it in this room. You know, 1967: Lawrence Siebert did the first pulse hologram portrait. It was a self portrait. It was done on Halloween night in 1967. I think I might have chosen to conduct the experiment a day later — I mean, you know, just a bad night for it to be started. In 1971, a pulse hologram portrait of Dennis Gabor was completed by Reinhart while he was working at McDonnell Douglas. And that was to commemorate the Nobel Prize for the discovery of holography. I believe that is residing here in the MIT Museum. I have a partial list of people that I consider significant in the advancement of holographic portraiture. This is a partial list, by no means complete. Likewise, here's a partial list of noteworthy holograms that have been produced in the past. You'll notice that there are some quite famous people on there. I wish that there were more.
And like I said, it's a partial list, but there's been no real advancement in holographic portraiture of figureheads and heads of state in pulse holography for some time. The example right here of Ronald Reagan — he is the only U.S. president that has ever had a pulse hologram recorded. He was done by Hans and his colleagues. And I find it a shame that every president is not recorded in pulse holography as an archival method for historical reference. It's the most accurate form of recording known to man, and I feel that we're underutilizing this important medium. I think of people in our society today that I would personally want to see pulse hologram portraits of. I think of people that have made significant contributions to the betterment of society, like for instance the Dalai Lama or Nelson Mandela or Stephen Hawking. Can you imagine a pulse hologram portrait of Stephen Hawking? How powerful that would be, to be able to see him in a personal way like that. It can convey an emotional state and essence that would be truly remarkable to see in these types of individuals. Steve Jobs — why wouldn't we have wanted to record him in a pulse hologram for his offices at Apple? Think of how he was so instrumental in that company and how his pulse portrait could have been displayed and still been a source of inspiration for the people that work there. I'm going to talk very briefly about the basically two different types of lasers for pulse holography. The first laser is the pulse ruby laser, of course. It was the first laser invented. There are many laser systems like this, made by JK and Lumonics, that are still readily available today. So it's a usable laser. One of the problems that I see associated with pulse ruby lasers is the 694 nanometer wavelength output. It tends not to reflect well off of human skin, and that can be a problem. You need larger doses of radiation to image well. So I see that as a major obstacle in a pulse ruby laser. Additionally, they're notoriously high maintenance and difficult to keep aligned properly. They also generally use three-phase power electricity; it's a fairly inefficient laser, so it incurs a lot of electrical cost in operation. Another problem is beam cleaning in a pulse ruby. Classically, pulse ruby systems have had to have a second continuous wave laser for their transfer setups. That is an additional cost, and space is needed. So that's a problem I see with a pulse ruby as well. So my career in pulse holography has been almost entirely using the newer pulsed neodymium YAG/YLF systems. Specifically, I'm used to using the pulsed neodymium YLF hybrid system, the glass phosphate systems. These lasers are commercially available, and the cameras, systems, and lasers are available from Geola. They're currently being manufactured today. They are frequency-doubled lasers, so they have a 526 nanometer output. That is a much better light for reflecting off human skin. It's also a very low-maintenance laser. I can attest, having been involved with three different laser systems of this design, that they are excellent in their ability to be stable, reliable systems that allow you to concentrate on the imagery and not on the laser mechanics to make successful portraits. They're a very efficient laser, with low operating costs and a very small, compact footprint.
Another major advantage I see with the pulsed neodymium YLF systems is that you are able to use the same laser system for recording your H2 transfers. You don't need a separate continuous wave system; they work fine with beam cleaning, spatial filtering, and they produce excellent transfers on the same system. There are also many more green-sensitive holographic materials readily available on the market currently, so you have a broader range of materials to choose from for recording. Of the materials that are commercially made today, we still have the Slavich VRP-M film and glass plates; they are in production and are also available through Geola. We have that as a green pulse-sensitive material. Yves Gentet in France makes an Ultimate material that, in the 15 nanometer or the 25 nanometer grain size, is suitable for pulsed radiation, and that's available in both red- and green-sensitive versions. ORWO, I believe, still makes a film in a green pulse-sensitive material, and Colour Holographic still makes their BB520. There are other manufacturers making materials sensitive to green pulsed light, but unfortunately they're not in the sizes required to do portraiture. The Ilford, Agfa, and Fuji holographic materials, which have classically been used in the past for pulse holography, are no longer commercially available. There are still, however, some of these materials on the surplus market. Here's a little chart that I made of my eleven, twelve years in pulse holography between two studios, one in Nashville, Indiana, and the other in St. Charles, Missouri. It shows a breakdown of the just over 200, between 200 and 250, portraits we did during this time frame: the commissions, the people who actually bought a portrait and what they bought a portrait of. As you can see, overwhelmingly, the majority of holographic portraits that we have done to date, 65-some percent, are of children, either their children or their grandchildren. The breakdown shows a couple of other interesting things, like, for instance, dogs. People love their dogs, and so 4% of all the hologram portraits we did were of dogs. We did a number of couples. This is an interesting thing: we even specifically marketed to do wedding portraits, and in this whole time frame, although we did do some holograms of couples for anniversaries and things like that, there was not one wedding commission. So that's an interesting fact, I think. Another thing that's important to note is that, generally, because of the ultra-realistic and high-resolution nature of the recording, we didn't do many women over the age of 30. They just didn't want to have their portraits done. So it's important to think that way: if you decide that you want to have a holographic image done, there are certain periods of one's life that are predisposed to being the perfect time for that to happen. There's another little chart that I made that shows what I consider to be some of the marketing factors that differentiate the portraiture mediums. We have the three classics, sculpture, painting, and photography, and then I've listed some of the factors that determine whether each is a viable or aesthetically pleasing form of imaging. It's interesting to see that for the hologram, and I'm speaking of a pulsed hologram here, the realism and the uniqueness are both in the high category.
The cost is also high, of course, but the versatility being low I consider to be actually an advantage. By versatility, I mean the ability to alter the image, to do different things with it; the unretouchable nature of a hologram can actually be seen as a positive. Let's talk about aesthetics in pulsed hologram portraits. Let's be frank, it's not hard to make a scary hologram. In fact, I have been commissioned to do specifically scary holograms for horror movies and things like that. One of the reasons I feel we need to work on aesthetics in pulsed portraiture is the fact that the public's perceptions are fickle. We have a generational change about to take place, where we have the opportunity to present pulsed holograms to what is basically a new market, and they may be more accepting of it. Classically, in the last 14 years — are we down to this already? Really? We have to speed things up. What I consider the rules for doing successful pulsed hologram portraits are the same as in photography; my background is in photography. Creating an aesthetically pleasing hologram means you want to be a good photographer, you take a lot of photographs, and it's the ability of the photographer to put people at ease — and that is actually difficult in a pulse holography lab, with the lighting and the safelight conditions, particularly for children. Another thing is the cost of the material. You have to weigh the cost of the material against the number of master transmission holograms, or proofs, that you do. Three to five hologram proofs is the minimum to successfully get a portrait suitable for transfer copies. Let me speed things up here. Here's a list of what I consider some public perceptions of hologram portraits. Again, this points out that we live within a fast-moving and highly evolving technological society. I think the opportunity is there for a new generation of people to see pulsed portraiture, to be able to experience it, and maybe to be a little more conditioned in their response. The ultra-realism of the medium is perhaps more inviting and more acceptable for them. I know it is for me. I embrace holographic portraiture; I'm not afraid of it. This shows basically my company's history. We were founded in 1992. We gained our first pulse holography studio, with a Geola system, in 1999. In 2003, we installed another pulse holography system for Alan Fox in St. Charles, Missouri. In 2008, my pulse holography studio closed due to economic conditions. That same year, I was contracted by Ron Olson and a company in Las Vegas, Nevada to run a pulse holography studio there. That worked successfully for approximately six to eight months, and then we were forced to close. Am I this far over? Currently, to my knowledge, there are only two commercial functioning pulse studios in operation in the U.S.: the one in St. Charles, Missouri, and the one run by Ron and Bernadette Olson in what is now Pulse Barrel, Washington. I was going to talk a little bit about my personal, emotional, unique response. It's not a shameless plug to show all my children here, but I have two children, and I have had the unique opportunity to record them during their childhood. I have a physical record that I think is unique, not only within the general public, but even amongst my peers. The ability to have these perfect recordings of my children to look back on as I get older is just absolutely priceless to me.
My daughter is now expecting a child that will be born in November of this year. I look forward to being able to make a pulse hologram of her child, but also think of what that child will have when she grows up and sees that she has these pictures of her mother through childhood in this perfect form of imaging. It will be an absolutely priceless thing for her. Quickly, I want to talk about digital holographic portraiture — I'll just go through this. Okay, so in conclusion, the true holographic method is the most accurate and realistic form of imaging known today. In pulse portraiture applications, the three-dimensional nature, extreme resolution, and realism of the portraits themselves, combined with their unalterable and archival properties, make it a powerful method for recording an individual's likeness. The technology exists and yet is grossly underutilized in today's society. With continued dedication and education and a little patience, I believe the time may be nearing where the public's awareness, perceptions, and appreciation of a true holographic likeness coincide to produce a watershed event in which holographic portraiture becomes a commercially sustainable industry. It is the view of myself and a slow but growing number of others that this is simply a matter of when, not if, this will occur. I want to also thank and acknowledge the following people, because they were instrumental in helping me with my dream of being a pulse holographer, so I wanted to make note of that. Thank you.
The technology of producing (true) hologram portraits was first introduced in the late 1960s. Since that time, a number of individuals and organizations worldwide have specialized in providing holographic portraiture services with varying degrees of achievement. Yet today, some 45 years later, holographic portraiture remains an obscure and niche form of displaying an individual's likeness. Despite all of this technology's promising and unique attributes, and the astonishing fact that holography is the most accurate and realistic form of imaging available today, true holographic portraiture continues to be a form of portraiture largely unknown to the general public and has never achieved large-scale commercial success. This paper will present a brief history of holographic portraiture, designating the different types of 3-D hologram portraits available today and their uses. Emphasis will be given to true holographic pulsed portraiture, in which the subject itself is recorded holographically using high-energy pulsed lasers. Possible causes of the present demise of this type of portrait making will be discussed, along with recent advancements and future developments in this fledgling field which could ultimately lead to a “tipping point” in large-scale consumer and commercial awareness and desirability of the medium. The author will share his experiences in operating pulsed holographic portraiture studios over the last 15 years, including the vision of a new type of holographic portrait studio for the 21st century which he hopes will attain the level of success enabling a next generation of commercially viable holographic portrait studios for the future.
10.5446/18581 (DOI)
Well, Isaac Newton was here. Isaac Newton was the Lucasian Professor of Mathematics here. Science didn't really exist in Newton's day; he was professor of mathematics here, and in many ways he invented the science that some of us loved or hated at school. He came up with those fundamental laws of motion, and in doing so invented calculus, or at least part of calculus, the integral calculus. He developed the universal law of gravity, the thing that causes the moon to rotate around the earth and the earth to rotate around the sun. It was Newton who came up with the equation that explained why it should do that and how it should do that. In order to do that he constructed a new kind of telescope: he invented and made the reflecting telescope. He did a whole lot of work on light and colour and so on as well. So really one of the absolute greats of science. We're going to have dinner tonight in St John's College. Isaac Newton was at Trinity College, which is the college right next door to it; we'll go past that on the punts later on today. Then I talked about Maxwell. Those of you who did science at university, I'm sure, learned Maxwell's equations. Again, you either loved them or hated them, depending on how good you were at mathematics. I must confess they weren't my favourite subject.
But this guy really did unite the whole field of electricity, magnetism and electromagnetic theory. Again, really one of the absolute greats of science of all time. He did his work in the Cavendish Laboratories, which literally are — well, we'll walk through them today. I believe we're going to go and see the Maxwell Lecture Theatre, where this man lectured to his students. You wouldn't have wanted to be there: they deliberately designed the lecture theatre to be as uncomfortable as possible so that the students didn't fall asleep. I hope you won't fall asleep today, but I hope at least this lecture theatre is a little more comfortable. Then maybe my own hero, JJ Thomson. Why is he my hero? Well, he's a Scotsman, like myself, like Tom; he came from Scotland. This was the guy who discovered the first subatomic particle. I guess you've all heard of electrons, even those of you who aren't scientists. It's one of the real basic subatomic particles, taught in the very early stages of science across the world. This was the first man ever to prove that there were such things as subatomic particles, and to prove that every element had the same subatomic particles. That really was quite a major breakthrough. He won one of the world's first ever Nobel Prizes. You may wonder why Newton and Clerk Maxwell didn't win Nobel Prizes: they came before Nobel, so they didn't have the opportunity to win one. This guy won one of the first Nobel Prizes. Indeed, this university, I believe, has won more Nobel Prizes than any other organisation in the world. The Nobel Prize has been around for a little over 100 years, and this university has won 88 Nobel Prizes, or rather has 88 Nobel Prize winners. I admit that there is more than one Nobel Prize each year, and each is sometimes given to more than one person in each area, but it is still a density of Nobel Prizes unrivalled anywhere else. Another name that I am sure you may have heard of: Rutherford. Ernest Rutherford. He is credited with splitting the atom. It is not quite true that he split the atom; two of his associates who also worked here, Cockcroft and Walton, actually split the atom. Rutherford is the one that everybody always remembers. He was the head of the department and he was the one that got all the credit — the real father of nuclear physics. I am a fan of Cockcroft particularly because he was at St John's College, where I was, but Rutherford is the name everybody remembers. Rutherford undoubtedly changed the world in the understanding of subatomic particles that led to the whole of nuclear science and nuclear engineering. Rutherford did his work, and Cockcroft and Walton did theirs, within a few hundred metres of where you are sitting now, in the Cavendish Laboratories in Cambridge. The final one I would like to talk about is Crick and Watson. Crick and Watson discovered the structure of DNA — a more recent one, not long, actually, before I came to this university. I don't know if many of you spotted, just before you turned up the steps to come up to this lecture theatre, a thing that in my day here in Cambridge was the cycle sheds, a funny little wooden shed. It is now the Rolls Royce University Technology Centre here in the materials department. But in my day it was the bicycle sheds, and I never looked twice at it. It was only about five years ago that somebody told me that those bicycle sheds, now the Rolls Royce Cambridge University Technology Centre, were where Crick and Watson discovered the structure of DNA.
I find it staggering that to this day there is still no notice on the side of it to explain that this is where Crick and Watson did this work. This is the whole basis of modern genetics: without understanding how DNA reproduces itself and so on, you really don't get into genome theories and genetics. I don't want to labour the point of how great this little area has been in the past, but it really has, quite dramatically, set the framework of world science. We're going to see some of that. I hope we're going to look at the cycle sheds, or the University Technology Centre, on the way past, and see Maxwell's lecture theatre on the way past. This morning, when we go round, we're going to look at a very interesting mix of real high tech — you'll see some of the world's most advanced electron microscopes — and some real history of where it all happened. Then this afternoon we'll look at some of the older history, some dating back over the 800 years of what's happened here in the past. Let me move on then and talk about SKF. SKF, the knowledge engineering company. What does a knowledge engineering company mean? Well, our group vision is to equip the world with SKF knowledge. I remember when I joined SKF five years ago thinking, I don't understand what that means. Does that mean we give away all our knowledge? How on earth do we maintain our position if all the time we're teaching everybody what we do? I found it really quite confusing. But as I got into SKF and got underneath the skin of SKF, I really discovered what it meant. I've translated it for me and for the technical people that work for me: what does this vision really mean for us in terms of technology? What it really means is we've got to provide our customers with real value through our technical knowledge. We're probably not the lowest cost producer; you can probably buy lower cost bearings, seals, whatever, from other people. It's our aim to be the best. It's our aim to build value into that product through the use of our knowledge. For me and for the technical people that work for me, that means that maintaining our technology leadership is absolutely critical. I remember when I joined the company — I joined it just a few weeks after our 100th anniversary — and I remember Tom saying to me, Alan, we've created every major breakthrough in bearings in the last 100 years. It's your job to make sure we do it for the next 100 years. I thought, thanks, Tom. But what a challenge and what an opportunity and what an exciting job, to be the guy in charge of technology. I want to share a bit of that with you over the next half hour or so. SKF is very much known as a bearings company, the world's leading manufacturer of bearings. But we see ourselves very much as having five technology platforms. They're all very related; they all really support our activity in bearings. The bearings guys will tell you bearings never wear out. The real true bearings people will tell you a bearing never fails — it's really the lubrication that fails. In a rolling element bearing, the ball or the roller rolls round against the steel ring, and if it's working properly, it should trap a thin film of oil between the ball and the ring, and the two steel surfaces should never touch. That very, very thin layer of oil or grease should separate the steel at all times. And provided the two steel surfaces never touch, they can never wear out. Not quite true, but it's almost true.
You do still transmit loads, so other things can happen, but it's almost true. So the true bearing expert will tell you bearings never fail; it's the lubricant that fails. If you lose that lubrication film and steel touches steel, then, yes, you start doing damage to the bearing. So through that, SKF has become really pretty expert at lubrication, and as a result we've both developed and acquired companies that manufacture lubrication systems that put the right amount of grease in the right place at the right time. We're the leaders in lubrication systems, but very much in support of our activity in bearings. Now, the lubricant guys will tell you the lubrication never fails; what fails is this thing, the seal. The seal fails, the lubricant drips out and then the bearing fails. Again an oversimplification, but there is some truth behind it as well. So as a result of that, we've made ourselves sealing experts. We sell seals to other people as well, but we make a lot of our own seals. Mechatronics — that's a word that means different things to different people. For us, what it really means is combining our knowledge of mechanics and of electronics. We're seen as one of the world's foremost precision mechanical engineering companies: we machine very hard steel to very high tolerances faster and better than anybody else. We're not so widely known for our electronics knowledge. I was astonished when I joined SKF to discover that SKF made 5% of the world's fly-by-wire systems. Fly-by-wire systems are the systems that control the world's latest aircraft, either military aircraft or now civil aircraft — very, very complicated electronic systems. We make 5% of them. Gosh. Mechatronics for me means taking that electronics knowledge and adding it to our mechanical engineering knowledge. You'll see from some of the examples I give later how we're beginning to put that together in some really clever and some really pretty sophisticated ways — in ways that really could change the way we think about bearings and how they operate. And then finally, services. We provide this clever mechatronics often in the form of condition monitoring, of telling you what's happening within the bearing or within the mechanical system. And once you're able to measure that, well, it makes a lot of sense to offer that as a service to the customer. The customer often doesn't want to do that themselves; they often just want to know when things are going wrong. Don't bother telling me when everything's perfect, just give me a call when it starts going wrong so I can do something about it. And we provide that as a service. So SKF is active in developing technology not just in bearings but in all of those areas. It's really all about managing and reducing friction for the customer. So, a few words about our research and development. First of all, what do we spend? Well, we're spending at an ever-increasing rate. When Tom told me he wanted us to stay at the forefront for the next 100 years, I said, that's great, Tom; your side of the bargain is you've got to back me and give me the money to spend. And he promised he would. And dammit, he has. I have never worked for a chief executive before who is so committed to the future and so committed to technology. Life as the head of technology is an absolute dream when you've got a boss that's willing to back your ideas. It's not so much fun if he's not. But Tom, I have to say, has backed everything I've put in front of him.
As you can see, I've had pretty significant increases in budget every year. The one exception, of course, was 2009, when I guess we all had our share of problems. At one point our manufacturing was down by over 25%, and our sales were down nearly as much. And Tom's instruction to me was: don't get rid of a single engineer or scientist. I haven't worked for a company before that would have said that to me in such a situation. So we have a very strong commitment to technology. What are we spending this money on? Well, this is actually a slide that Tom showed our investors recently when we announced our last set of results, so these are actually Tom's priorities rather than mine. Obviously he talked to me before he did it, but these are Tom's priorities. Environment: we are very driven by environmental issues, environmental concerns, environmental opportunities. We very much see the environment as an opportunity rather than a problem. We have a big contribution that we can make in helping our customers reduce friction, reduce their energy bills, reduce CO2, and we are very focused as well on doing that ourselves within our own operations. We're focusing on core technologies — I'll come back to those later — and focusing on some new products; again, I'll come back to that later. Strengthening our R&D in fast-growing regions: we are setting up large new research laboratories at the moment in both India and China. This last one — and this is Tom's slide, remember, for the investment community — strengthening our links with universities. I guess that's one of the reasons we're here today. So where are our major facilities around the world? I could have put more of the yellow dots, the product development, over an even wider range, but these are the principal ones: manufacturing development in Sweden, technology development in Holland, new laboratories in India and China, and a technical centre in North America. And then the green ones show the university technology centres, and we are here today: this one here is Cambridge. There's one also in London, one in Gothenburg, one in Luleå and one just outside Beijing in China. So those are the principal places where we do technology around the world. This is our Dutch facility. We've been in this facility for about 40 years. Everybody always asks: why does a Swedish company with lots of international activities have its research centre in Holland? Well, it was set up there before I ever joined the company, so it's difficult for me to know the real truth behind this. The story that goes around is that the Swedes wanted to have it in Sweden, the Germans wanted to have it in Germany, SKF France wanted to have it in France, SKF Italy wanted it in Italy, and so on and so on. And the chief executive said, where don't we have anything? And we didn't at the time have anything in Holland, and so we put our research centre in Holland. I don't know how true that is, but it actually gives us some really big advantages. It's seen as completely neutral from within the rest of the company. It's an easy place to get to — it's quite close to Schiphol Airport in Amsterdam, quite easy to fly into from anywhere in the world. And we have a very, very international flavour about this research centre: we have over 25 nationalities working together there. Normally within a research centre it's very difficult to get the different departments to talk to each other.
So if you have a metals department and a computer department and a plastics department and a measurement department or whatever, they all normally sit at lunch in their own departments and talk among themselves. Not here. Here all the Italians gather together at a table, and all the French gather together at a table, and the Germans gather together at a table, and we get this networking happening automatically. And it's a real plus, a real advantage. I have never seen a research centre so well networked as this research centre in Holland. This is our research centre in China — or will be our research centre in China; it's still a computer-generated graphic at the moment. We're currently employing 70 people and we're renting a building while we build this one. We have the land now, and we start the building work in September. I said it's 70 right now; it will be 400 people by 2015. That's the plan. In India, we already have the building. The building was opened by Tom in December last year. Again, similar ambitions: 400 by 2015. We have 140 there today, so it's a little bit ahead of the Chinese one; we started it about a year earlier. So we have two big new major technology facilities in the growing and rapidly developing regions of the world for us, a very strong commitment to work with customers in those areas, and a very strong commitment to develop and expand our technology activities in those regions of the world. So let me talk about what the core technologies are. What are the things that really matter to us and make us the company that we are today? Well, every time I list our real needs, our real requirements, I come back to steel. SKF used to own its own steel company, Ovako. We sold Ovako about eight years ago now. It was the right thing to do strategically: it was very difficult to work with any other steel company when you owned your own steel company; nobody else would do development with you. So, for good strategic reasons, we decided to sell Ovako. It's been a very clever commercial decision. From a technology point of view, it's enabled us to work with other steel companies, but we did lose a lot of our steel technology and steel knowledge within SKF. One of my core goals from the day I joined this company was to rebuild our steel knowledge, to get right back up there at the top in understanding bearing steels in particular. That's very much where this university technology centre that we're visiting this morning plays a critical part in our steel technology. We've also increased the number of people working on steel technology at our research centre in Holland and in manufacturing development in Sweden. So, heat treatment is the way that you get the best out of steel. Steel is an interesting material, and I'm going to let Harry, our professor here, talk to you more about that later. But what you do to steel after you've formed it — how you heat it, cool it, bash it around — makes an enormous difference to its properties. So that's what I mean by heat treatment. Steel is absolutely critical to us. When I was sitting where you are now, 38, 39 years ago, learning metallurgy at this great university, I was told that the maximum strength you could get out of steel was 2 gigapascals. Don't worry about the units; it was 2. And I was told that was closer to the theoretical strength of the material than for any other material. The theoretical strength of steel should be about 20, but you can never get that for real practical reasons.
But 2 is closer than you can get to the theoretical strength in any other material. At SKF, 2 is nothing compared with what we put into bearings today. Typically it's 4 if it's a highly stressed bearing; I've seen numbers of 8. I wouldn't have believed that. My lecturers wouldn't have believed that when I was sitting here 40 years ago. So steel, and the way we handle steel and the way we treat steel, is critical to SKF. Other materials are important as well. The one I'm flagging up or highlighting here is ceramic material. For our very highest loaded bearings, we actually change the rolling elements from steel to a very hard ceramic material called silicon nitride. Silicon nitride does actually outperform steel, and especially silicon nitride rolling against steel rings, on the inner surface here, performs extremely well. So our very highest loaded bearings actually have rolling elements in silicon nitride, a ceramic material. So materials science generally is critical to us. Sensorisation — this mechatronics bit, this adding functionality to our bearings. This shows a bearing that has a sensor in it that detects the position the bearing has stopped in. Why is that important? Well, this one is actually for Bosch. It is for what is called a starter-alternator system for a car. In order to increase the fuel economy of cars, in the latest generation of cars, when you stop the car, the engine stops. And then when you put your foot on the accelerator again, the engine automatically starts again and you drive forward. It saves you using any fuel for the time when you are stopped, and that saves emissions and it saves fuel. But in order to get the car to start really smoothly, you actually fire it up on its starter motor first and start it moving forward on the starter motor before you switch on the spark plugs to fire the gasoline engine, the petrol engine, that then takes you forward. And in order not to get a jolt as it transitions from one to the other, you need to know exactly where the engine has stopped, what position the engine stopped in. And that's what this device does. SKF knowledge enhancing the ability to create a new opportunity for fuel economy. Tribology — I hate this word. Tribology: what does it mean? It means understanding surfaces in a lot of detail. And I hate this slide as well. This slide looks like it's a really rough surface. It's a really rough surface until and unless you understand the units on this axis, which basically say there's about 50 to 60 atoms between the bottom of the surface and the top of the surface. So we're looking at this surface in a lot of detail is really the point I'm trying to make. And understanding surfaces in a lot of detail is really what's critical to the performance of a rolling element bearing. Modelling and simulation: we are by far the most advanced company in the world at understanding what goes on within a bearing, and indeed in modelling the situation around our bearings, so that we can understand the environment of the bearing and provide the right bearing to do the right job in the right application. Lubrication: I talked before about how you need this very, very thin film of oil between the metallic elements, otherwise the bearing won't last very long. Understanding what controls this thin film, what causes its degradation, how it forms, how it degrades, is absolutely critical to us. And sealing, one of our five big platforms.
So understanding sealing materials, being able to design seals, being able to work with seals to provide the very best sealing function, is absolutely critical. And then, underpinning it all, a commitment to sustainability, a commitment to the environment. So those really are the eight core skills that we focus on in SKF. How do we use them? I just wanted to show one example. This one is getting a little old. We were 100 years old in 2007, and Tom wanted to have some major breakthrough to announce on the day of our 100th birthday. Unfortunately, he didn't think about this until about 18 months before. He went to our chief scientist 18 months before and he said, Stathis, I want you to make the world's greatest breakthrough in bearing technology so I can announce it on the 16th of February 2007. And Stathis said, well, gee, that's a pretty tough task; how am I going to do that? And Tom said, I've decided what it's going to be, by the way: you're going to take 30% out of the friction of our bearings. And Stathis's first reaction was, Tom, our bearings are already far lower friction than anybody else's; how on earth am I going to take another 30% out? And apparently Tom left him at that point and said, well, sorry, Stathis, that's your job; I've got a company to run. Well, Stathis went off and thought about it. And he did it, and he did it in that timescale. We launched the first two ranges — deep groove ball bearings and cylindrical roller bearings — on the day of our anniversary, having done this in 18 months. And you might ask, as I certainly did when I arrived, how on earth did you do that, Stathis? He said, well, a few things. I didn't cheat. I kept exactly the same external dimensions; I kept the same ISO load-carrying standard. You know, I had to make it exactly the same. But I used those core skills that we had in the company. So first of all, I optimised the internal geometry: I opened up the osculation. Put a bit crudely, if any of you are into the details of bearings: broadly, if you've got a ball, it's running in a groove, and if you make the two exactly the same, you get colossal friction between the two. So you have to open up this groove a little bit, and the difference in the diameter between this one and this one is basically the osculation. Now, if you open it up too far, of course, the whole thing starts moving around and gets a bit loose. But because our machining standards are higher than most, he was able to open up that osculation a little bit more than usual. We then changed the standard steel cage for a polymer cage. A cage is the thing that holds the balls or the rolling elements apart; you have to stop them from touching. Normally this is a steel cage; here you can see a polymer cage. So our knowledge of non-metallic materials, our knowledge of polymeric materials, enabled us to select and to choose and to validate a polymer cage to do that job, which we hadn't done before. And then, perhaps most important of all, because of our lubrication knowledge, we changed the grease. We went for a really special low-friction grease that we had actually developed ourselves. And those three things together took the friction down by more than 30% — but in true SKF fashion we do 100 tests on 100 different kinds of bearings under 100 different kinds of conditions, and if the worst one is 30%, then we tell you that it's 30% better. Some of them, I can tell you, were over 80% better.
And our range of energy efficient bearings really is second to none; you can't get anything like it anywhere else. I want to say a few words about our overall technology strategy: where are we going, what are we doing for the future, what are our major new directions? Well, we see technology strategy in three different areas. Think of somebody making an invention and doing all the kind of early science that you need to do in order to find out if it's going to work; then you start launching a product and you develop that product and it grows rapidly; and then you get into the volume part of the business. And to be honest, these aren't to scale — in our case, this volume business goes on for a long time. We do different kinds of research in each of those areas. For the volume business, what SKF is famous for is working with the customer to give the customer exactly what the customer wants in any application. If you talk to our advanced customers doing difficult things, this is the thing they value SKF for most. We work with them, we get it right. It gives us an absolute nightmare from a production point of view — we have a far higher degree of product complexity than most of our competitors — but the customer really values it. What we're really looking to do here is to be driven by the customer and to fine-tune that product to do just what the customer wants it to do. In the rapid growth area, clearly, this is my challenge. This is me trying to create the next hundred years. This is where we've got to make breakthroughs that really are driven by our knowledge, by our technology, leading to new product breakthroughs. And here's where Cambridge fits in: this early stage. This is where you set the strategy, go in that direction. You need to stick with this for a while, because it's going to take you time to get up this curve, but you get in early and you grab the intellectual property, the patents, and keep ahead of the competitors. So that's how we think about technology. Let me just say a few words about what we're doing in each of those areas. Up at this end, if you look at the kind of late stages of this growth period, what I'm really looking to do is to bring all these together. If you just take us on bearings, there are other people who make bearings, and they make bearings that look a bit like ours. I can really differentiate most easily by adding other bits to it: by adding an automatic lubrication system to the bearing, by adding a clever seal that takes a load of friction out of the bearing, by adding some electronic functionality. Taking and combining all of these and offering the customer a bundling of our technologies is a real opportunity for us to grow more, and quickly. At this end of the technology-driven invention scale, at the early stages, we have initiated a thing called the Innovation Board. Actually, it was initiated before I ever joined SKF. Tom had started it, although it wasn't working all that well, but it was one of the reasons I wanted to join SKF. If you read the textbooks on innovation, they all say one of the key things you must do is to engage the chief executive. I found in SKF a company that already had the chief executive engaged. This innovation board is chaired by Tom; Tom, myself, the three divisional presidents and the senior technical people are all on it. We are looking for a small number of really big strategic projects that we are going to focus very large amounts of investment on. The key members of the senior team are there.
My guys are always coming to me with a business case, saying: I think we could do this. I say, sorry guys, if it needs a business case to justify it, it's not good enough. This has got to be so blindingly obvious that it's right for SKF that I don't need a detailed business case to help me justify it to Tom. They say, what do you want us to do? I say, I don't know — I'll know when you show me it. I certainly know when Tom's going to accept it: he gets so excited about it that he stops asking me what it's going to cost. We just have to do it, and I'll show you some examples of the things we've been doing. It's got to be a real game-changing opportunity for us. It's got to be radical, but it's got to be absolutely blindingly obvious why SKF is doing this. This is not rocket science, and I'll show you a few examples in a minute. We're only looking to do five or ten of these, and they are maybe five-year projects, so I'm only looking to find one or two new projects each year. I'm not looking for a very large number; I want to focus on a small number of real game-changing things. Here's one of them. You see the target? We've done energy efficient bearings — Stathis had done that and taken 30% out of the friction of the bearings — but of course most bearings that we sell are sealed. Bearings spin round beautifully when there's no seal in them; you put a seal in, and it just acts like a brake. Not much point in having really good energy efficient bearings unless you've got really good energy efficient seals. Our seals were already state of the art. So, how would it be if, without any extra cost, we could take 40% out of the friction of the seals? It seems a kind of obvious thing to want to do. Dan Reid was running our sealing business at that stage. I don't think it took too much to convince Dan that this would be a good thing if we could manage it, that we could differentiate ourselves on this, and Dan was really instrumental in getting this up and running with us. Here's where we are today. I won't go into the details of what they are, but this is the percentage improvement under various different conditions, for various different temperatures and kinds of bearings and so on. You can see that some of them are 60% better, some of them are only 20% or 30% better. I think on average we have probably hit the 40% already. We still have work to do, but we are pretty much there in being able to offer low friction seals, and these will go into production quite soon. Here's another one. We make one in four wheel bearings for cars in the world. That means we make as many wheel bearings as there are cars in the world, because most cars have four wheels. For those of you who are less mathematical — that was perhaps stating the obvious; I just wanted to explain what I meant. Given that the car industry is trying to save weight, what could we do? Set yourself a target: take 30% out of the weight at no extra cost. Do you need a business plan for that? I don't think so. Where are we at the moment? We've got 30% out of the weight. We haven't quite got to no extra cost — we're probably at about a 10% cost penalty at the moment. That's okay for the early stages of this; that's okay for the luxury vehicles at the top end. This is important weight saving: it's weight saving in the unsprung mass, and it's rotating weight saving, which are both more important than static weight within the car. Then, advances in condition monitoring.
We are the world leaders in condition monitoring. What do I mean by that? I mean sensorising the bearing, listening to vibrations in the bearing, measuring temperatures in the bearing, and using that as a way of predicting things that are going wrong with it. Our target here was to create a little band-aid, a little stick-on patch that sticks on the side of the bearing, that generates its own power and talks to the internet — and then, what is it you want to measure? You want to measure temperature, stress, vibration, oil condition? You tell me what you want to measure; I've got a sensor on there that'll measure it. All you will know is that you have the data on the internet: put it on your smartphone, put it on your computer, whatever. You will know the status of your bearing at any stage. And not just the bearing: if you're listening to vibration, it'll tell you about the whole vibration of the machinery around it. It'll tell you when you've got gearbox problems, it'll tell you when you've got pump problems, impeller problems, whatever. That's the concept, and we're pretty much there. It basically has some very, very clever power-generating opportunities, uses essentially mobile phone technology to talk to the internet, and has some really clever novel sensors. This is a Nautilus variant for a wind turbine. You can see it's not quite as neat as a band-aid just yet, but this is the prototype. This one's for a railway application; we've hidden bits within a lot of the existing parts of the system anyway. It pretty much does what it says. Of course, the next thing we'd really like to do is tie this into our life model. What the customer really wants to know is: what percentage of my life have I used up, and if I'm getting near the end, can I turn down the load in some way so that I can make it last a bit longer, to get it to its next planned maintenance or whatever? These three that I've shown you are the only three I can talk about. These projects, as you can imagine, are fairly sensitive within SKF; these are the three that we've talked to the outside world about so far. I hope it gives you some flavour of the sorts of things that we're trying to do to make sure that we're still a premier company in bearings in 100 years' time. I'll finish with a few words of introduction on our university technology centre programme. This fits in, as I was saying, at this end, at the strategy-driven direction end, at the very early, fundamental stages of the development of any new area for us. We're really looking to underpin our own work with long-range work at the top science and engineering universities in the world, creating clear core partnerships with some major universities. When I joined SKF, I saw that we spent an enormous amount of money in universities, but it was very scattered. We supported a PhD here, there and everywhere. It wasn't working terribly well in many cases. Typically, I don't know, our Italian factory would want to start a PhD in their local university. Often, the person that had started the activity would move on to another job, and the poor guy doing the PhD would be working away for four years or something; after two years, our guy had moved on and gone to America or something, and the PhD was left behind, and there wasn't much lasting impact. I saw what Rolls Royce had done in their use of universities. Rolls Royce did an enormous amount of work with universities.
They had this thing they called a university technology centre, and the first one they ever did, nearly 30 years ago now, is actually on this site, in high-temperature materials. They went in saying, I will support at least five people for at least five years. When you say that to a university, you suddenly get their attention. You don't just keep talking to the lecturer-level guy that's interested; the professor starts getting interested. Indeed, when we opened our university technology centre here, the vice-chancellor came along to join our opening and to say a few words. The same when we've opened all of our other ones — Imperial, Luleå, Chalmers — the vice-chancellors have all come along. The Rolls Royce model says: I fund five people, and then I'm going to go out and look for partners. I'm going to look for customers or suppliers or people I want to work with, and I'm going to expect them to work with me on projects. So I'm going to try and find a person or a company or a number of companies, and they're going to support five people alongside me. So suddenly there are going to be 10 in this activity. And then I'm going to look for government funding — European Union funding or national government funding or whatever. And before you know where I am, I'm going to double it again. And I said to some of the senior people in Rolls Royce that I've known for a long time, what sort of multiplication factor do you get on your money? And they said, well, typically, we tell everybody it's a factor of five; to be honest, I think it's a factor of four. And I said, that sounds pretty good. So you fund five and you end up with 20. And they said, yeah, that's about typical. So that's the model that we're trying to follow. It takes time to put in place. It takes a long-term commitment. But it really creates serious partnerships, serious value, between SKF and those universities that we're working with. So that's the goal of it. The goal is not to get the university doing the things that you would do anyway. This is not a cheap source of research. This is a partnership where the university is doing the things they're good at and we're continuing to do the things we're good at — but the two together really give a major new opportunity for SKF. And the aim is to build long-term technology leadership. I said the aim was to fund five people for five years. Here at Cambridge, I'll leave Harry and Pedro to give you the details of what we're doing, but typically at the ones we've been working with for some time, Cambridge and London, we're already doubling the numbers of people that we're paying for. So alongside the people we're paying for, overall there are twice as many people working in the area, and we're building partnerships with key customers and key suppliers. So we haven't got as far as Rolls Royce yet; we haven't started getting the government funding in yet — we're working on that dimension of it — but we're certainly moving in that direction. Let me just say a little bit about each of those university technology centres and how they fit the core competencies that I showed you earlier. I told you earlier we were working on these eight subjects; where are we now with our university technology centre programme? Well, Cambridge, here today: it's about steel and heat treatment. It's about providing the underpinning fundamental knowledge that we need to enhance and improve our steels.
The one we have in China, at Tsinghua, is at the moment smaller than the others, but it's about sealing materials — the elastomeric materials for making seals. People at this point usually ask, am I worried about intellectual property? Am I worried about leakage of knowledge in China? And my standard answer to that is no, I'm not. Although the Chinese have had a lot of press about stealing and ignoring patents, I believe the Chinese are changing very rapidly. I believe they will go exactly the same way as the Japanese went a generation ago. The Japanese used to be accused of copying everything and not respecting people's intellectual property. As soon as the Japanese started inventing their own things and filing their own patents, they started playing by world rules, and they are now an absolutely respected member of the international community, playing by international standards on patents. And the Chinese are very, very rapidly moving that way. So, yes, this is a toe in the water, and no, I'm not doing anything really radical there at the moment. A lot of the work we're doing at the moment is analysing our competitors' materials and seeing how they respond to use and to ageing and so on. So I'm not yet generating a lot of intellectual property there, but I don't have any concerns moving forward in terms of: will this be successful, and can we work in China? Luleå, right up in the north of Sweden, is where we're doing our work on developing completely new concepts for condition monitoring — CoMo, an abbreviation we use within the company — the condition monitoring and sensorisation of our bearings. They've brought together three completely different departments that had never worked together before, and they have offered us some real insights that we would never have had ourselves in terms of how we can think about our condition monitoring business in the future. At Imperial College in London, we are doing that real detailed understanding of what happens at the point of contact. What happens as you squash that oil film? How does that oil film break down or not break down? What are the conditions that you can use? How can you develop the right lubrication? How can you model it all? Because ultimately you don't want to do everything experimentally; you want to have the fundamental knowledge, the fundamental understanding, to enable you to do that. Finally, Chalmers, on sustainability and environment. We, as I say, have a very strong commitment to the environment. We have launched a whole range of energy efficient bearings, and the payback on those energy efficient bearings for the customer can be very, very rapid indeed — if you are, for example, using one of our low friction bearings in a large motor application that's running 24 hours a day, 7 days a week: a big compressor for an oil company, a big pump of some sort, where the motor is running all the time. We charge a premium of between 20% and 30% for those bearings. The payback time on those bearings, in terms of energy saving, the amount of power that you save, is about two weeks. Who doesn't want a two-week payback? The work that we're doing at Chalmers is largely about developing that understanding and giving us the hard data to be able to sell to our customers the advantages of the products that we have to offer. I'm going to hand over now to Harry. Professor Harry Bhadeshia, to give him his full title, is the director of our university technology centre.
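As a rough sanity check of the two-week payback figure quoted above, the arithmetic can be sketched as follows; every number here (power saved, electricity price, bearing price) is a hypothetical placeholder chosen only to show the structure of the calculation, not SKF data — only the 20-30% premium comes from the talk.

```python
# Rough payback arithmetic for a low-friction bearing premium in a motor that
# runs 24 hours a day, 7 days a week. All figures below are hypothetical
# placeholders, not values from the talk.

power_saved_kw = 1.0          # assumed reduction in friction losses
tariff_eur_per_kwh = 0.10     # assumed industrial electricity price
bearing_price_eur = 240.0     # assumed list price of the standard bearing
premium = 0.25                # the 20-30% premium mentioned in the talk

extra_cost_eur = bearing_price_eur * premium
saving_per_day_eur = power_saved_kw * 24 * tariff_eur_per_kwh
payback_days = extra_cost_eur / saving_per_day_eur
print(f"Extra cost {extra_cost_eur:.0f} EUR, saving {saving_per_day_eur:.2f} EUR/day, "
      f"payback {payback_days:.0f} days of continuous running")
```

With these made-up figures the payback lands at a few weeks of continuous running, the same order of magnitude as the figure quoted; the real number obviously depends entirely on the application.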
Our assistant director, Dr Pedro Rivera, is up at the back and will give a talk at the end. We signed the contract for our university technology centre right at the end of 2008 and the work began in 2009. I'm going to hand over to Harry to talk to you a bit about the background and about steel, and then, I think, to Pedro to talk a little bit more about the work itself. Do you have any questions for Alan? I wasn't going to offer that opportunity. Go on, would anybody like to ask me any questions? That's good. I can sit down. Thank you very much. Thank you. Today I'm going to talk about steel, but my focus over the last few days has been on gold. I cannot concentrate on work, the rate at which gold medals are coming in for Team GB. If you compare against the major powers, who have much larger populations, we are actually ahead of them. Gold is of course very important — I don't know why, because it's a pretty useless material except in electronics, for joining conductors and so on. I want to show you that iron is what you should be excited about. Let me start by asking you a question; feel free to shout out an answer. Where does iron come from? Any other answers? I'm going to surprise you with the answer: iron is made in the stars. This is the Milky Way. It's the galaxy that we are in; the sun is approximately around here. When the universe was created, only the light elements like hydrogen and helium existed, for billions of years. The heavy elements are all made at extremely high temperatures, where you force the nuclei to come together. Iron is actually made in the stars. Temperatures are of the order of 100 billion centigrade and the pressures are enormous. That's what makes iron. Furthermore, I have to disagree with Alan when he said other materials are important. Iron is the most stable element in the universe. This was calculated by a person called Fred Hoyle in Cambridge, who invented the term 'the Big Bang' for the origin of the universe. Before that, he did extraordinary work on the stability of the elements. He proved that iron is the most stable element in the universe. Eventually, the lowest energy state of the elements will be reached when the galaxy has completely become iron. If you are working on another material, you are doomed: ultimately, everything will be iron. He would have won a Nobel Prize for this work, but when he invented the term the Big Bang, he was actually being derisory — he was trying to say that this theory is no good. He tried to build an argument against the Big Bang theory and failed in the end. Can somebody now tell me what this picture represents? Crystals. They are really beautiful to look at. But this is also a crystal. I will pass it around; it is a single crystal — have a feel. It is a turbine blade which goes into an aircraft engine, where it experiences a temperature of something like 1400 degrees centigrade. These are routinely made on the factory floor to enable very efficient aircraft engines to operate. The shape of that is nothing like the crystals that you saw on the slide earlier; its shape is designed for aerodynamics. The meaning of a crystal is not that it looks extremely nice and beautiful; the scientific meaning of a crystal is that the atoms are arranged in a particular pattern which is repeatable over a long distance. Here, for example, is a crystal, because the atoms are arranged in a regular order: in this case the iron atoms are at the corners of a cube and at the centers of the faces.
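A small sketch of the counting behind "atoms at the corners of a cube and at the centers of the faces": corner atoms are shared between eight neighbouring cells and face atoms between two, which is where the usual two atoms per cell for body-centered cubic iron and four for face-centered cubic iron come from. The packing fractions are standard hard-sphere textbook values recomputed here, not figures from the talk.

```python
import math

# Count atoms per conventional cubic cell by how much of each atom lies inside:
# corners are shared by 8 cells, faces by 2, the body centre belongs to 1 cell.
def atoms_per_cell(corners=0, faces=0, body=0):
    return corners / 8 + faces / 2 + body

bcc = atoms_per_cell(corners=8, body=1)   # body-centered cubic (iron at room temperature)
fcc = atoms_per_cell(corners=8, faces=6)  # face-centered cubic (the high-temperature form)

# Packing fraction = total sphere volume / cell volume, treating atoms as
# touching hard spheres and taking the cell edge as the unit of length.
def packing_fraction(n_atoms, radius_in_cell_edges):
    return n_atoms * (4 / 3) * math.pi * radius_in_cell_edges ** 3

# In BCC the spheres touch along the body diagonal: r = sqrt(3)/4 * a.
# In FCC they touch along the face diagonal:        r = sqrt(2)/4 * a.
print(f"BCC: {bcc:.0f} atoms/cell, packing {packing_fraction(bcc, math.sqrt(3) / 4):.2f}")
print(f"FCC: {fcc:.0f} atoms/cell, packing {packing_fraction(fcc, math.sqrt(2) / 4):.2f}")
```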
We have some foreign atoms here, which might be carbon, to make steel. Steel is a combination of iron and carbon. The scientific definition of a crystal is that we have a periodic pattern of atoms, not a random location of atoms as you might have in a liquid. A crystal is defined by a periodic arrangement of atoms. It was the two Braggs in Cambridge who discovered a way of interrogating materials to see whether they are crystalline or not by putting X-rays on them. You will be visiting the laboratory where they actually did this Nobel Prize winning work. These are the patterns in which iron atoms are arranged in solid iron. The most common form of iron is where you have an atom of iron at the body centre of the cube and at the corners of the cube. If you heat the iron up, then this is how the arrangement changes: we now have atoms at the centres of the faces rather than in the middle of the cube. If we go right to the centre of the earth, where the temperature is of the order of 6000 degrees centigrade and the pressure is enormous, then you have another form of iron, which is a hexagonal arrangement, and it's the densest form of iron possible. These are just three different crystal structures of iron. There are actually seven different crystal structures which we can make, but the others are not very common. Of course we can throw many different elements into the iron. Here for example we have the carbon atom sitting between the iron atoms, or we could substitute this with nickel or chromium or whatever. There are hundreds and hundreds of different crystal structures that we can produce in iron. This bearing has countless billions of crystals in it. If you looked at this in a microscope you would see the most incredibly beautiful crystals. I will show you some images later. But this is crystalline. Its shape doesn't resemble anything you would associate with a crystal, but this is crystalline. The reason why we can use iron for so many different applications is because we can change the structure, by deforming it, by heating it (you saw some red hot iron in Alan's slides), by putting on magnetic fields, by applying a stress. It is such a versatile material because of all these crystal structures that we can generate. This is the sound of the crystal structure changing inside the iron. This is the kind of sound that you might use in monitoring the condition of the bearing. If you want, you can download this onto your mobile phone and use it as a ringtone. It's available on the internet. This is the sound of the crystal changing. Because we don't have a furnace here, I'm going to pass around another piece of metal, which is called indium. If you bend it like this, you'll be able to hear changes in crystal structure. It's the changes in crystal structure which mean that we can make a vast variety of alloys with different properties. The sound that you just heard, or the vibrations that you get when you get damage in bearings, is of course the condition monitoring that Alan was talking about. When you install these windmills out in the oceans and so forth, you really want to stop the rotation at the point where you think there is significant damage. You don't want the damage to grow and cause much bigger damage in the bearings. All that has to do with the crystalline structure of the material. This is just to show you that these crystal structure changes also cause a deformation. Here is a piece of metal. We are going from very cold to very hot.
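As an aside to the crystal structures just described, the standard atom-counting for the body-centred and face-centred cubic cells can be written down in a few lines. This is textbook geometry, not anything specific to the lecture's own data.

```python
# Atoms per unit cell and packing fraction for the two common forms of iron.
import math

def bcc():
    atoms = 8 * (1 / 8) + 1            # 8 shared corner atoms + 1 body-centre atom
    a = 4 / math.sqrt(3)               # atoms touch along the body diagonal (radius = 1)
    return atoms, atoms * (4 / 3) * math.pi / a ** 3

def fcc():
    atoms = 8 * (1 / 8) + 6 * (1 / 2)  # 8 shared corners + 6 shared face centres
    a = 4 / math.sqrt(2)               # atoms touch along the face diagonal (radius = 1)
    return atoms, atoms * (4 / 3) * math.pi / a ** 3

for name, cell in (("BCC (low-temperature iron)", bcc), ("FCC (high-temperature iron)", fcc)):
    n, apf = cell()
    print(f"{name}: {n:.0f} atoms per unit cell, packing fraction {apf:.2f}")
# BCC: 2 atoms, 0.68   FCC: 4 atoms, 0.74 -- the face-centred form packs more densely
```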
There are no components inside this piece of metal, and yet it is changing its shape just by altering the temperature. It is the fact that the patterns in which the atoms are arranged are changing which produces that origami-like feature in the material. Now, when it gets cold it will become flat again. These changes in the order in which atoms are arranged are real. You can feel them. Some of you might have seen advertisements for spectacle frames made out of shape memory metal. If you bend them accidentally you can just use a hair dryer and they will come back into shape. One day cars might be made like that. This is the periodic table of the elements. I explained to you that you will never actually find pure iron in any real application. We want to engineer the material in order to provide the right properties. We can add more or less anything that is on this periodic table, as long as it is not radioactive, into the metal and design new properties. We can create solid solutions. When we add sugar to tea it dissolves. That's a liquid solution. Similarly, we can add elements into iron which dissolve into the solid iron and form solid solutions. If we add a lot of that element we might actually create new kinds of crystals inside the solid iron and therefore alter the properties again. There is a huge amount that we can do. You can do this in two different ways. One is what we call bucket chemistry. That means you just add things and see what happens. The possibilities are infinite, because you not only have a huge choice but you also have different concentrations that you can add, and you could be there forever. That's not the way to go. For many, many years, and I myself started working on iron in 1970, we have been developing the theory that enables us to predict what should happen. I don't want to get carried away and claim that we can do everything. That's just not true. The problem is incredibly complex. But we have developed sufficient theory for us to reduce the time scales in which we can get to a product. Then what we do is express that theory in computer programs. This is just a way of telling the computer what to calculate. This is one of my first-ever students standing next to a supercomputer that we are using to do calculations. By doing this, we can try experiments by calculation before actually spending a lot of money to do critical experiments, or even taking it to the next stage of development, which costs a huge amount of money. It wasn't long ago that I was at Alan's innovation board trying to get them to spend a huge amount of money to make a very large amount of material for bearings, a new material that we've designed. I think it will happen. This is a material that we created about 15 years ago. It has two forms of iron in it. One is the body-centred cubic structure that you see here. The other is the high-temperature form, which we've been able to preserve down to room temperature. We've got this beautiful mixture of two different kinds of crystals, an intimate mixture on a scale which is one millionth of a metre in size. This is one millionth of a metre. The finer you make the crystals, the stronger the material becomes, because then the defects find it difficult to move across the boundaries between crystals. Even more important is that they become tougher. The material becomes tougher. That means if I hit it, it can absorb energy. Window glass, for example, is extremely strong. But if you hit it, it shatters. That means it doesn't absorb energy when it breaks.
We don't call it tough. The reason why we make cars out of metals is because they absorb a huge amount of energy should you have an accident. If you have an accident, you are protected, but your car is a write-off because it is designed to deform and absorb energy. So we want finer and finer crystals. When we created this particular structure, we decided to apply it to a totally different kind of railway line. Railway lines usually contain a phase called cementite, which is very hard but is not tough. You can get easy fracture. Here we don't have any of that hard phase. We have these two beautiful and very fine crystalline phases. In this image, you are looking at individual atoms. We can go to magnifications where we can see individual atoms, and then we can colour those atoms according to what kind of an atom each one is, whether it is carbon or iron. Here we are plotting the carbon as the red and the green is the iron. This is nothing fancy. These days we can do this routinely. The point is that not only do we have two different kinds of crystal structures, but we also have two different chemical compositions. One crystal absorbs carbon more than the other. That is why we have been able to retain that high-temperature phase even at room temperature. Normally the face-centred cubic form of iron is only stable at very high temperatures. Here is a real application of that particular structure. We can show that the rolling contact fatigue performance is really good for this structure. What is rolling contact fatigue? Alan explained that we have got a rolling element here, which turns around, and we have got a raceway. When I turn this, every time the rolling element goes over the surface, it induces a stress underneath the surface. It is a very large stress; I will explain later how big a stress it is. It does that all the time the bearing is rotating. Eventually the metal fatigues. It gets tired and it might break. With this structure, because we do not have any hard particles, it is very good in rolling contact fatigue. Rolling contact fatigue is not just a problem of bearings, but also arises when a wheel goes over a rail. You can see that the new structure outperforms anything that existed before. This is about wear resistance, and the wear rate is also reduced to a negligible level. This is the new material in service in France. Have you been through the Channel Tunnel? Any of you been through the Channel Tunnel? Next time you go, remember me, because these rails, which we designed, are the ones in the Channel Tunnel. Railway lines are strong and they are tough, but bearings have to be really strong. This structure is no good for making a bearing. We have got to make it much, much stronger. I explained to you where the strength comes from. So what should I do? What do I have to do to make that structure even stronger? Make it finer. We know how to do that. Just to illustrate some scales first: I showed you a structure which was a millionth of a metre in size. I want to go to a structure in which the crystals are a billionth of a metre in size. There are nine zeros here. We can do that by making the crystals grow in the solid iron at a much lower temperature. We did some calculations on what is possible and what is not possible in a reasonable amount of time. Alan might not want to do a heat treatment which takes 100 years. You keep wine for 100 years and then you sell it for much more money. Let me explain to you the meaning of strength. Newton was at this university.
The weight of an apple is approximately a newton. This is a scientific measure of weight: an apple weighs a newton. If I spread it over one square metre, then the stress is called one pascal. An ordinary steel would be able to support the weight of 200 million apples on one square metre. A bearing steel has to be able to support the weight of 3 billion apples on one square metre. Probably you end up with just apple juice in those circumstances. We are talking about very strong materials, incredibly strong materials. If I look at this bearing again, the stress is 2 gigapascals; that means 2 billion apples per square metre. If it turns around at 25,000 revolutions per minute, and let's assume there are 20 balls here, then the total number of pulses of stress that the steel will experience is of the order of 500,000, half a million, in one minute. Imagine if you were being punched half a million times every minute with a stress which is 2 billion apples per square metre. The stuff that you are producing is absolutely remarkable. Nobody outside of here knows about that, because it is beautifully designed. It's a reliable material. It's not like your computer system, which has to have updates every so often. The fact that you don't know about it is a good thing, because you don't need to know about it. If it were unreliable, you would need to know about it. This is the structure that we are hoping will make the next generation of bearings. Here we have reduced the scale of those crystals to 20 billionths of a metre in size. We still have two different kinds of crystals, with different chemical compositions as well. I don't know if you've heard about carbon nanotubes. These are very fine tubes of carbon which have had a lot of publicity but are not terribly useful. We are finer now than carbon nanotubes. This is an incredibly fine structure. I want you to remember this image until dinner time. There is a surprise for you, but I'm not going to tell you anything about it. This structure is what we are hoping will lead to the development of a totally different kind of steel for bearings. The current bearing steels have carbides. These are strong, hard particles, but brittle particles. Here we have none of that. We have these two beautiful crystals intertwined, producing the strength and indeed the toughness. It's the same structure as the rail steel, but much finer. I'm confident that we will have good rolling contact fatigue performance and wear performance. We have to prove all that. Scientists have a tendency to make claims long before they are justified. If you look at papers in Nature and so on, they will claim that they are going to solve all the problems of the world. A few years later, there are few consequences. The carbon nanotube story is like that. One of the reasons is that although in a laboratory we can make materials, when you scale it up everything changes. Absolutely everything. One of the key requirements of anything that we do is that we must be able to manufacture the material on a large scale, in all three dimensions. That means you cannot use incredibly severe processing or incredibly rapid heat treatments, etc. What I mean by large dimensions is this: this is a photograph I took at the oil sands mines in Alberta in Canada. This is a huge truck, but there is no magnification marker there. Just to put on a magnification marker: if you focus on the size of the wheel here, this is me.
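The apples-and-pascals arithmetic above, and the count of stress pulses, can be checked directly. The sketch reproduces the talk's own simplification of one pulse per ball per revolution; the exact ball-pass rate in a real bearing depends on its geometry.

```python
# Checking the back-of-envelope numbers from the talk.
APPLE_WEIGHT_N = 1.0  # the talk takes one apple to weigh about one newton

def apples_to_pascal(apples_per_square_metre):
    return apples_per_square_metre * APPLE_WEIGHT_N  # N/m^2 is a pascal

print(apples_to_pascal(200e6) / 1e6, "MPa  (ordinary steel, 200 million apples)")
print(apples_to_pascal(3e9) / 1e9, "GPa  (bearing steel, 3 billion apples)")
print(apples_to_pascal(2e9) / 1e9, "GPa  (contact stress quoted for this bearing)")

# Stress pulses: the talk's simplification is one pulse per ball per revolution.
rpm, n_balls = 25_000, 20
print(f"{rpm * n_balls:,} stress pulses per minute")  # 500,000 -- half a million
```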
I want to be able to make materials which can be large in all dimensions and manufactured easily, in hundreds and thousands and thousands of tons. Indeed, we have done that. This is the best armour that you can buy to protect against really terrifying threats, made from the structure that I showed you earlier, the very fine crystals of 20 billionths of a metre in size. This is an aircraft engine for a civilian aircraft. You may not know this, but most of the air doesn't go through the engine. It simply provides the thrust to push the aircraft forward. Only about 25% of the air goes through the engine and is burnt with fuel to cause everything to rotate. The critical component here is the steel shaft. The reason why most of the air goes outside the engine is because you want to reduce the noise from the aircraft engine. That's why military aircraft are much noisier than civil aircraft: you can't afford to have a very large engine on a military aircraft. Here you could have a couple of people standing in the opening. It's very, very large. We want to make it even larger, and that means that the torque, the twist that you provide on the shaft, is going to be even larger in the future. We are trying to develop this concept of that very fine crystalline state also for aircraft engine shafts. You can see here we can make them large. No problem at all. It takes 10 days to heat treat this, but we can make them large. These are heat treatments being done in Germany, and this is the next stage. It has got to quite an exciting level. We have indications that this material will perform well, but we have a lot to do before something can go into a critical application. Now let me explain. Sorry, this is just to illustrate some equipment that SKF has installed over here so that we can actually do initial measurements of the rolling contact fatigue properties. This is Ji-Hoon Kang, who is a PhD student working on this, and this is Pedro, who will talk immediately after me. Here we put our experimental materials, we subject them to contact stresses, rotate the system and monitor the kind of damage that happens in the material under the surface as we develop it. This is just one of the sets of experiments that we have to do to prove this material, and there are a lot more planned within SKF, the manufacturing trials, etc. I said to you that it takes 10 days to produce this structure. We are also working on making it faster, but that may not be an advantage if you are making large components. I would like to get the structure even finer, even finer than 20 billionths of a metre. Our calculations tell us that we can do it, but it will take 100 years. We have produced this material, and one sample of it is in the Science Museum in London and one is in my office, at room temperature. We started this experiment in 2004 and it will be completed in 2104. What I have to tell you is: tell your children and your grandchildren about this story, so that they can verify whether the experiment has worked or not. The material has been polished completely flat, and if our calculations are correct then by 2104 you will see surface upheavals on the piece of steel. Then my soul can rest in peace. Thank you very much. If you have any questions I would be happy to answer. Can you tell us a bit more about how it is developed and produced, the armour steel? The production part is straightforward. There is no new technology in the production of the steel.
The new thing is basically that you have to transform at a low temperature, almost like a pizza oven temperature. Whereas normal heat treatments for steels are done in furnaces at much higher temperatures, this you have to hold at a temperature of around 200 degrees centigrade. That is the key. That generates the very fine structure that we see. How much can you reduce the thickness, going from a normal steel to that steel, to save weight and still give the same protection? In the case of armour we have a parameter which we call the ballistic mass efficiency. That means, against a standard armour, for the same ballistic performance, how much can you reduce the weight? We are a factor of three higher than standard armour. That is now commercial. That would save gasoline too. Absolutely. I see the vision in the iris. Pedro, do you want to come in? Pedro is the assistant director of the University Technology Centre. Thank you for coming and visiting us. I am the assistant director of research at the University Technology Centre. We started in 2009 with a number of projects. I just want to give you a general flavour of what it is we are doing. I would like to focus on what Alan mentioned as the intention of this group, which is to develop some of those seminal concepts that could later develop into technologies that lead to products that are better for SKF. I will start my story with just one regular bearing and what you actually want to avoid, which is failure. As you have seen earlier, failure originates from the contact between these rollers and the contact surfaces under big pressures. As mentioned earlier, you apply very large stresses; Harry mentioned, I think, two billion apples per square metre. This pressure is localised in very small areas, about one square millimetre. Even though what Alan mentioned is correct, that the lubricant in theory should be the key element, the key concept, when looking at lubrication, it is the steel that fails. What we want to do is to create something to avoid that failure. There are many elements in the atmosphere that can diffuse into the bearing and produce failure. Hydrogen is one of those. As for hydrogen, Harry mentioned that iron comes from the stars; if you look at that same theory of Fred Hoyle on the formation of the universe, everything starts from hydrogen. All the elements transmute from hydrogen into helium and then eventually you get to iron. When we face reality, we have a bearing like this. This is the bearing surface; it contains very fine grains, which are represented by these hexagonal bits. Those grains are usually about one hundredth of a millimetre. Then you have all sorts of features within those small crystals that we call microstructural features. For example, Harry mentioned this brittle phase called cementite, which is everywhere in the microstructure. Hydrogen is one of those elements that is everywhere. Hydrogen is in the atmosphere, but most importantly, hydrogen is in the lubricant. You will certainly have heard that oil companies refer to oil as hydrocarbon molecules, because you are combining hydrogen with carbon, forming long chains. What really happens is that when these bearings are subjected to high pressures, sometimes those molecules in the lubricant simply decompose and you get free hydrogen. Hydrogen is very tiny. Hydrogen is the first element in the periodic table that you saw today. It is very mobile, and it diffuses through the microstructure and produces damage.
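A feel for just how mobile hydrogen is can be had from the standard random-walk estimate x ≈ sqrt(D t). The diffusivity used below is an assumed order of magnitude for hydrogen in ferritic iron at room temperature; it is not a figure from the talk, and real effective values depend strongly on how much the microstructure traps the hydrogen.

```python
# Rough random-walk estimate of hydrogen penetration distance, x ~ sqrt(D*t).
import math

D_HYDROGEN = 1e-9  # m^2/s, assumed effective diffusivity (order of magnitude only)

def diffusion_distance_mm(seconds, diffusivity=D_HYDROGEN):
    return math.sqrt(diffusivity * seconds) * 1e3  # metres -> millimetres

for label, t in (("1 second", 1), ("1 minute", 60), ("1 hour", 3600), ("1 day", 86400)):
    print(f"{label:>8}: ~{diffusion_distance_mm(t):.2f} mm")
# Even allowing for heavy trapping, hydrogen crosses microstructural distances
# (microns) almost instantly, which is why pinning it on fine carbides is attractive.
```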
That damage causes the failures that we saw in the earlier image. One of the first projects that we started to work on in the University Technology Centre was to devise a solution to prevent this hydrogen damage, which you see here schematically represented and which leads to failure. Hydrogen is something real. It is not something theoretical. In the University Technology Centre we use state-of-the-art techniques to visualise the problem. For example, this is a three-dimensional atom probe. What this means is that what we are mapping here are atoms. Each of these small points is an atom. The D stands for deuterium, which is a form of hydrogen. This hydrogen is actually diffusing into the small crystals that comprise this bearing and producing damage. One way to prevent that damage is to immobilise those hydrogen atoms. From the literature, we know that when we add this compound, which is titanium carbide, these small dots here, we produce small particles that lock up the hydrogen, and this inhibits the damage that hydrogen can cause. One of the contributions from the University Technology Centre, devised by Blanca Szost, who is sitting at the end of the theatre, has been a real bearing steel that contains this type of particle. They are titanium-carbide-like, but in her case she devised a steel that is based on vanadium carbide. We have proven that vanadium carbide actually inhibits the damage from hydrogen. We have plenty of experimental evidence for that. This technology has been filed and a patent has been requested. That is one example from one of our two very first PhD students. Blanca is now considering whether she will join SKF, which is one of those long-term partnerships that Alan was mentioning. The second of our first PhD students was Hansen Huang, who is sitting there at the end of the theatre, who is also considering joining SKF, and who also worked with us in solving one of the seminal problems, which likewise led to a patent request. Here, as you have heard from Harry, you can see a bearing. These bearings contain these very complicated crystal arrangements inside them, and they are very carefully engineered. One millimetre is a unit of measure that you are very familiar with, and the micron, as Harry mentioned, is one thousandth of a millimetre. This is an optical image that you obtain by taking a portion of this bearing steel, cutting it and polishing the surface. This line is about one fiftieth of a millimetre. Here you see these very fine needles arranged here and there. These needles are the ones that result from a phase transformation. What Harry correctly mentioned is that one thing we would like to do is to create finer and finer crystal structures. For example, these are very long needles that form in this steel, whose sound you could hear as it was slowly transforming, and these needles actually give the strength to the steel. Here it is important that you realise what the size of this piece is. This is a good-looking fellow showing that a hair has a thickness of about one tenth of a millimetre, that's one hundred microns, or forty microns, sixty microns; people with very fine hair have a hair thickness in the range of fifty microns, which is about one twentieth of a millimetre.
When we design these new bearing steels that are very fine, Huang started by taking as a reference this actual, real bearing steel microstructure, which we call bainite, and this is what you see in some of these steels. I want you to look at the scale marker, because this is pretty important. This is 0.5 microns, which is half a micron, one two-thousandth of a millimetre. This is about one hundred times finer than your hair. These are the scales that we are aiming for and that we have actually realised; when you look at the thickness of these features, these small crystals, they are about one hundred times finer than in the usual bearing steel. Now, what is important here is that one of the short comp... So this is not new technology, as mentioned earlier. This is something that Harry had been working on since fifteen, twenty years ago at least, or at least this type of development is from one or two decades ago. But what Huang did, which is a breakthrough, is to devise a way to produce this crystal structure much faster and using less expensive elements. So, for example, one of the elements considered in the original invention of this steel was cobalt. Cobalt is rather expensive. What Huang has done is to demonstrate, through thermodynamic calculations, through computer simulations, that we can actually produce this steel cheaper and faster. So we don't need to wait one hundred years and go back to the museum to see whether we have formed the microstructure, and we don't need to wait ten days. We are talking about a few hours to produce the same very fine crystal structure, with cheaper elemental additions and possibly with better properties, which we hope to demonstrate soon. So, to summarise: at the centre of our research is our ability to produce bearings. We are doing research into fatigue, and this, in the case of Blanca, led to the conception of hydrogen-resistant steels, so that we can prevent the damaging effects of this very fine atom, hydrogen, which really accelerates damage. In doing so, Rashid Batti, who is sitting towards the end of the theatre, is using some of the finest characterisation techniques that are available on Earth. You will see some of the finest electron microscopes towards the end of this day in the former Maxwell laboratory, where we have our high-resolution microscopes. This new knowledge is leading to new materials that could be commercialised and, most importantly, to new understanding, so that we can ensure that your products are reliable for your customers. Thank you very much.
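The length-scale comparisons made in the last two talks can be put side by side. The feature sizes below are the ones quoted in the talks; the 50-micron hair is the speaker's typical figure for fine hair.

```python
# Comparing the length scales quoted in the talks.
MICRON = 1e-6  # metres

hair           = 50 * MICRON    # typical fine human hair
bearing_grain  = 10 * MICRON    # "about one hundredth of a millimetre" grains
fine_plate     = 0.5 * MICRON   # the 0.5-micron scale bar on the micrograph
nano_structure = 20e-9          # 20 billionths of a metre, the new structure

print(f"0.5-micron plates: {hair / fine_plate:.0f}x finer than a hair")        # ~100x
print(f"20-nm structure  : {hair / nano_structure:.0f}x finer than a hair")    # ~2500x
print(f"20-nm structure  : {bearing_grain / nano_structure:.0f}x finer than the grains")
```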
Lectures by Alan Begg, followed by Harry Bhadeshia and Pedro Rivera, at a meeting held in Cambridge University on the 8th of August.
10.5446/18594 (DOI)
Thank you very much for the very kind introduction. I'm very glad to be able to stand here and see all these very enthusiastic and promising students; there is also some faculty here. When Professor Sasaki asked me to give a seminar today, just about a month ago, I said yes, because, you see, I like to talk about whatever I like to talk about. So today I'm not going to talk about anything deeply technical. In fact, I don't know any technology in great depth, but I would like to talk about something I think is useful for you. I will very briefly cover some technological issues because, as you know, I've been involved in the upper part of the whole spectrum of the steel business, what we call chemical metallurgy, or steelmaking, that area, so my talk will probably be a little bit biased because of my primary area of interest, but I will try to be as general as possible. When we talk about technology development, there are probably many ways to classify technology developments, but this is one of them. Facility-limited means that if you have a better facility you can produce more of the products, in better quality, at lower cost. For instance, if I look at car racing, you can easily say which car will run faster than the others, because this facility, this car, must run faster than the other two. In this case the solution to this problem is, sorry, this is the solution: if you have money you can solve the problem, get the better facilities, and then you can produce more, in higher quality, at lower cost. Many of the underdeveloped countries, and some developing countries, if I apply this to the steel industry, are usually in this category. When I first joined the steel industry in Korea in the 1970s, all of the technology was already embedded in the facilities. When we got a better facility, the facility came with its own technology, so we just ran it and then we got the products. How to run and how to maintain the equipment, that was the best technology we could speak of. But there is another kind, knowledge-limited technology. That means everybody has the best equipment, the same kind of car, but who drives faster? It doesn't depend on the car itself; it depends on the driver. In this case the solution is not the facility, it is the human resource: how good you are, how good a driver you are. This is one way to classify technology development. Korea is now in this area, so most of the steel companies in Korea have the best equipment available in the market. But that doesn't mean they can produce the best quality of steel. They have to work hard to make the most of the facilities. Now, there are also a number of ways of classifying the patterns of technology development. This is one way. Suppose this is the size of the experimental work, because you have this size of material, or specimen, for research. Suppose the outcome of your research is very promising; then you can go for further development. You make more of those products. And then eventually, if fully successful, you can go to production, commercial production. In commercial production and in the research, the size or the amount of the work you do, the amount of material you deal with, is about the same; here you deal with only one piece of the material, but here there are many of them. But there is another pattern. This is the physical size when you do the research, and if it's promising you go one step further, for instance a pilot-plant type of development.
The scale is much bigger, but for commercial production it's much, much bigger again. So these are two different patterns of technology development. Steel falls in this second category. That's why, when you think you've developed a very nice technology, the steel company is often not very interested in your technology: because there is a very wide gap in between. So when we contrast science against technology: for pattern one, science and technology often overlap with each other, so that sometimes you cannot distinguish whether it's science or technology. But for pattern two, science and technology are sometimes widely separated. Scientifically your work may be very nice and promising, but technologically you still have a long way to go. This is my view: from the socio-economic point of view, steel is the backbone material of society. When you go out there, you can look at whatever you like to look at. Just imagine, if there were no steel available, what shape society would be in. So it has to be the backbone material. You don't see your backbone, your spine, but there is a backbone. From the technological point of view, it is an ever-evolving material. I run a 12-year-old car, and I don't know who runs a very new car. Suppose you run the new car. You do. If you look at the car itself you may say that this is a car body; if you peel off the paint, there must be steel in it. The 12-year-old car and the brand new car: both are steel, but it is very different steel. If you produced steel with 12-year-old technology and tried to sell it to the car maker, they wouldn't buy it. So it's gradual, you don't see very much, but it's evolving. And from the global environment point of view, iron, or steel, is the most abundantly available material in nature and also the most recyclable material. In this regard steel is a very environmentally friendly material. Now, some consideration of the future of steel; not the steel product, but the steel production process. If you look back from now, maybe 20, 30, 40 years, the blast furnace used to be quite small. The blast furnace is about 400 years old technology. Right? 400 years old technology. It has gradually developed and become bigger and bigger. But in blast furnace operation, a very important piece of technology is how the burden moves inside, because at the bottom it melts and sinks, and at the top you keep pouring the raw material in; inside the body, things are sinking, but not sinking very uniformly: there is sliding and gliding and all of those things happening inside. So control of the movement of the burden in the furnace is quite demanding, difficult. Because of that, the size of the blast furnace was limited, until people adopted some technology from civil engineering. In civil engineering they already had very well developed technology for landslides. So the metallurgical engineers adopted the civil engineering technology, the landslide technology, and applied it to the blast furnace so that they could increase the size of the blast furnace. The other one: up until the 1950s, most steel was still produced by the so-called open hearth furnace. It's quite nice, but very slow. But even more than 100 years ago, people already knew that if pure oxygen could be used for steelmaking, then steel could be produced in a much faster way. But oxygen was very expensive.
But in the 1950s someone developed a way to produce oxygen at a much cheaper price. This is the so-called tonnage oxygen. This tonnage oxygen was employed in steelmaking, combined with the steelmaking technology, so that the basic oxygen furnace was developed. Now, this is a triangle; not a phase diagram, but a triangle that looks like a phase diagram. Suppose this corner is the old-fashioned open hearth furnace, the basic oxygen furnace is here, and the electric arc furnace is there. All of you know how to read this. This is how steelmaking technology has been moving with time. For instance, a long time ago this was the main steelmaking technology, but if you look at this one, by the 1950s the BOF, basic oxygen steelmaking technology, had become the major technology. And by 2000 it is gradually moving towards the electric arc furnace, because iron and steel scrap has become more and more available, and steelmaking technology based on scrap has been developed a little further, so things are moving in this direction. Someone has predicted that by 2030 most steel will be produced via the electric arc furnace route, because more and more steel scrap will be available. If I take Japan as an example, one of my friends in Japan told me that in 2030 the steel scrap to be collected in Japan will be enough to cover the steel demand in Japan, if you process the steel scrap in the arc furnace and if you don't worry about the steel quality. But the problem is quality. If you use steel scrap, you will inevitably have some impurities which are very difficult to remove, so the quality of the steel is not comparable with steel coming from iron ore; the technology has to be developed further. Very recently a new technology has become available in the market, and I think some Indian steel companies have already adopted it. This is a basic oxygen furnace, this is an electric arc furnace, and they are combined together so that one single furnace can take two sources. By putting the oxygen lance in, it becomes a BOF; take it out and swing the electrodes in, and it becomes an arc furnace. So one furnace functions as two steelmaking processes. In this case you can use either molten hot metal or steel scrap, depending on the economics or on the quality requirements. So this is a very simplified iron and steel making process. Suppose we are going to make iron. If we move in this direction it is oxidation; if we move this way, reduction. Iron ore is fully oxidised, it is Fe2O3, so we have to make iron from Fe2O3, and we have to remove the oxygen from the iron oxide. Usually we use carbon. Carbon loves oxygen, so the carbon takes the oxygen out, leaving the iron behind, and we can produce iron. That is the ideal condition, but our technology, current technology, is not good enough to hit this point exactly. So we usually overshoot, like this. We produce iron, but it contains too much carbon. This is called ironmaking, mostly done in the blast furnace, and the product contains too much carbon. We have to remove it. To remove oxygen we supplied carbon; to remove carbon we again supply oxygen. We blow in oxygen and we can remove the carbon, but then the steel contains too much oxygen. Now we can't use carbon again, because if we did we would keep going back and forth forever. So we have to use something different, usually aluminium or silicon, or some other metals which are very good at reacting with oxygen.
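The "carbon takes the oxygen out" step can be put into numbers with simple stoichiometry. The sketch below gives only the theoretical minimum carbon; a real blast furnace uses far more, because the carbon must also supply heat and some of it dissolves into the hot metal.

```python
# Minimum carbon needed to reduce Fe2O3 to iron, by stoichiometry alone.
M_FE, M_C = 55.85, 12.01  # molar masses, g/mol

def kg_carbon_per_tonne_iron(mol_c, mol_fe):
    """kg of carbon per 1000 kg of iron for a given reaction stoichiometry."""
    return (mol_c * M_C) / (mol_fe * M_FE) * 1000

# 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2   (carbon fully burnt to CO2)
print(f"to CO2: {kg_carbon_per_tonne_iron(3, 4):.0f} kg C per tonne Fe")  # ~161 kg
#   Fe2O3 + 3 C -> 2 Fe + 3 CO    (reduction producing CO only)
print(f"to CO : {kg_carbon_per_tonne_iron(3, 2):.0f} kg C per tonne Fe")  # ~323 kg
# In practice a blast furnace needs roughly 500 kg of carbon (coke plus injected
# coal) per tonne of hot metal, well above either stoichiometric figure.
```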
So in many cases we use aluminium. Then it comes to this. This is called the refining process. That is the general process. And then, once you are happy with the liquid steel, it goes to casting to make a solid. Now, a couple of weeks ago Professor Chang-Wong Chang gave a very intensive lecture on this, so let me reiterate a little of it. Suppose the finished product is 100% in terms of production cost. To make the hot metal you have to spend 75% of the total cost. With further processing, through steelmaking and refining, once you have the steel cast you have already used up 90% of the total cost. Further processing adds about 10% more. Why does this happen? Previously this was not the case, but look at the raw material prices: about 10 years ago, in 2000, the price of iron ore was $18 per ton, but now it is $160 per ton. For coal, coking coal, in 2000 it was $40 per ton, but now it is very close to $300 per ton. So in 10 years' time this is how the raw material price has changed. That's why the ironmaking process is so important these days. Some people say that only those steel companies will survive which can develop the kind of technology that accepts low-quality raw materials, because low-quality raw materials are cheaper. So several processes have been developed as alternatives to blast furnace technology. For instance FASTMET; I don't want to cover too much detail, but it produces a sort of hot DRI. Then there is the rotary hearth furnace route, developed by the Japanese, which is also one of the promising alternatives to the blast furnace, but it produces an iron nugget, not liquid hot metal but a solid iron nugget. And then there is FINEX. POSCO is putting a lot of effort into developing and completing this process. It is intended simply to replace the blast furnace; Professor Chang O Kang has already explained it in some detail. And then for steel production, not iron production but steel production, the current technology is either the BOF or the electric arc furnace. As I told you, in the future these two furnaces will be combined, so that you can take metallic iron coming from iron ore and also the metallic iron coming from steel scrap, and then you can produce any kind of steel quality, long products or flat products or whatever. OK, this is another view of the whole steel production process. Raw materials come in at room temperature: iron ore, coal, limestone all come in at room temperature. The temperature is raised quite high, and then even higher for steelmaking, to 1600 degrees Celsius, and then you cool it down to make it solid and it comes back to room temperature; then you heat it up again for hot rolling, maybe to the 1100 or 1200 degrees Celsius level, and then cool it down again for cold rolling and further processing. So heating up, then cooling, then heating up and cooling again. Is there any way to produce steel without going to such high temperatures? This is what we do at CSL: the calcium ferrite route. When you look at the phase diagram of calcium ferrite, this is the phase diagram at 1300 degrees Celsius. We can see a very wide liquid area, and in that liquid area, if you control the oxygen potential in the right direction, you can produce metallic iron. And then, to reduce the processing temperature further still, we are working on the oxysulfide route.
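A brief aside on the cost figures just quoted: turned into ratios, they make the point about ironmaking rather starkly. The numbers below are the ones given in the talk.

```python
# Cost shares and raw-material price rises quoted in the lecture.
cost_share = {"hot metal (ironmaking)": 0.75,
              "through steelmaking, refining and casting": 0.90,
              "finished product": 1.00}
for stage, share in cost_share.items():
    print(f"{stage:<42} {share:.0%} of total cost")

# Raw-material prices quoted for 2000 versus "now" (USD per tonne):
print(f"iron ore:    x{160 / 18:.1f}")   # about 8.9 times higher
print(f"coking coal: x{300 / 40:.1f}")   # about 7.5 times higher
```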
For the oxysulfide route, when you look at the phase diagram of the oxysulfide system, a very brief one, with the iron side, the sulphur side and the oxygen side, there is a two-phase region, so if the composition is brought into this area you can have metallic iron and an oxysulfide melt together. In other words, an oxysulfide melt may be able to precipitate out metallic iron. I am particularly interested in this process because in the future the steel industry should use more steel scrap. Steel scrap inevitably contains copper. Copper likes sulphur more than oxygen, so when you have enough sulphur in the steel, the copper can easily combine with the sulphur to become a copper sulphide. So we remove the copper in this way; the steel may then contain an excessive amount of sulphur, but we have already developed good technology for desulphurisation, so we can remove the sulphur. So it is quite a future-oriented route, and we are working in this area as well. For continuous casting, at the moment we actually rely on the free fall of liquid steel. What I mean is that up there we have a ladle; the molten steel is poured into the tundish by opening the hole, and then the liquid steel is poured into the mould, also by gravity fall. Because of the free fall, the liquid steel is agitated; it is in a very turbulent condition. That turbulence creates lots of problems for the steel quality. Is there any way to replace this free-fall mechanism? At CSL we are developing this technique: the tundish is not at the top of the mould, the tundish is beside the mould, and it is connected like this. It is a siphon process, so the steel can flow from here to there in a much gentler way, and we have only minimal turbulence in the mould. Those are the technologies we are working on at CSL with a view to the future. Now I will change my talk. What are you here for at GIFT? Why are you here at GIFT? There are many other departments, even at POSTECH. Why did you choose to work at GIFT? Is it because of a very attractive scholarship package? Or do you expect that you can find a job more easily than from other departments? Or do you really have steel in your heart? Which one is the major reason? If the scholarship package or finding a job is the major reason, I wouldn't be very impressed. When you finish your study, either with a master's degree or a PhD degree, you will be branded with the mark of steel on your forehead, here. You must be happy with it. OK, the steel technologies: this is the case of Korea. I just take Korea as an example. The steel industry in Korea has been in the follower's position. In the 1960s, 1970s, 80s, 90s, even very close to this point, the Korean steel companies, the industry, were very faithful followers. They worked hard and they just benchmarked the advanced technology, and worked very hard to catch up. And the Korean steel companies were also very clever. There would be a number of different technologies available in the market: which one should I follow? If you follow the wrong one, you will be in big trouble in the future. The Korean steel companies, the industry, have been very clever in choosing which advanced technology to follow, so they have written many success stories. Look at POSCO, quite a good example of that. Now, suppose you have to tackle a kind of problem, I don't know what it is, that looks very difficult to solve, and this person doesn't know what to do. But you can think of several different ways. This is one way: oh, there is a solution.
So you go and hit the button and the solution comes. You don't care what it means; as long as you have the answer you are happy. This is one way you can choose. When I was a junior engineer in the steel company we had a lot of problems. When a problem came up, nobody knew how to tackle it, how to solve it. Then the boss of the company asked the Tokyo office of the company to find a retired foreman, well skilled, and he came and told us: oh, this is the way to go, this is the way to go. We just followed very faithfully. But we didn't care, we didn't actually know why it happens that way, what the reason is; don't worry. He just asked us to switch it this way and we did, and it worked. We were happy, and he went back. So, very quick and easy. But what if another problem comes up? No solution. We have to find another retired foreman out there. OK, another way: there is a help button. Just hit it, or just sneak a look at what the other person is doing, and you can get a solution. If you are not smart enough, even if you see it, you don't know what it is. So you have to have a little bit of knowledge. This is better than the previous one, but still, if you don't have this person available, you may not be able to solve the problem. So it is also easy and quick, but probably you will not be able to solve the next problem when it comes. OK, suppose there is still another way: you are thinking, focusing and concentrating, and eventually you can put forward a solution. That solution may not work, and then you come back, and you keep going back and forth, and eventually you get the answer. It's slow and steady, but if this is the case, when the next problem comes, you will probably be able to solve it for yourself. I hope all GIFT students will work and try to solve their problems in this way. The way you take now actually shapes the future you. Which path you take. So hopefully all of you take this path, and we call this kind of person a man of ability, an innovative and creative type of person. So now the steel industry in Korea has gradually moved. Now we are in the leader's position, in some sense. Leaders are usually very lonely. All of these aged people here, sitting in the front row, and in the second row and third row as well, these people are all standing in the front row in their chosen areas. They are lonely. They try to find a person to follow, but there is nobody; it's a wilderness out there. So welcome to the future, but you don't know which way to go unless you are very nicely and very well prepared. You know what the history is: suppose you are standing here, this is the history, and a very stupid person will project it this way: OK, the future will go this way. But that is not the case. Probably, the clever person will, I don't know, do you know what it is? So you have to be prepared for the future. So again, I'd like to see each one of you be a man of future steel: not like this, not like that, but like this. This is a very famous curve, the S-curve: incubation period, development period and maturity period. This is time or effort, and this is the level of development. If you are at this stage, this is the amount of effort you put in and this is the effect, the achievement: small input and large output. If you are on this side, you will be very happy, small input and large output.
If you apply this to your research: if you are at this stage, quite a lot of effort goes in but the outcome is small, very shallow. Let me use a rock climber, rock climbing, as an example. In this case, for a given length of time, you can climb up quite a distance, but if this is the case, with the same amount of effort you can progress only a little. This is steel. That's why you should work very, very hard, like this person climbing up the rock. Don't try to be like this. Steel is fighting an uphill battle. Suppose you are, how do you call it, fencing; in the old days in Europe, they fought with a very long, narrow sword, and if you are up here and the other person is down there, you are in a much better position. But if you are down here, looking up, you will usually be in big trouble. OK, so at least in the academic arena, steel is fighting an uphill battle. We should admit it. For instance, when I was a university student, the department was called the Department of Metallurgical Engineering, full stop. Then it changed to metallurgical engineering and materials science, and then materials science and metallurgical engineering, and then materials science and engineering only, and then advanced materials science and engineering. So this is the level at which steel is covered. And in some universities in North America, for instance, the department has even disappeared; some of them have joined chemical engineering, some others have joined mechanical engineering. In this case, steel is falling like this. This is the real situation at the university. But steel is not a material to be abandoned; it is a material to be reinforced, as I told you, for a number of reasons. That's why we established and founded GIFT, and all of you are here working for steel. Science and technology: when we apply science and technology to steel, these are the numbers of atoms you are dealing with. This is the number of atoms. When you deal with this number of atoms, there is a tangible amount of material. It's one mole of material. How heavy is one mole of iron? Can anybody say? Very precise: roughly 56 grams. You can touch it and feel it. So this is that one. And then the microstructure, and then the molecular level, and then ab initio, first-principles kind of work. If we could cover a very wide spectrum of technologies for the real material here, that would be very nice, and that is our hope. But in many cases it's discrete. We know something about this area, and this, and that, but we don't know much about some other areas. It's disconnected, like this, because we have only these types of knowledge for this area, very shallow knowledge, and a little bit deeper knowledge in this particular area, something like that. But people generally say that if your knowledge goes to that depth, then the application of your knowledge will cover only that width, and it will become deeper and deeper and deeper like this. So we can say this is the depth of knowledge and this is the breadth of applications. It would be very, very ideal if one person, one researcher, could cover all of this, the depth and the breadth. But unfortunately, that is not the case. Someone has very good knowledge in this area. Someone has good knowledge in that area.
Someone else has very nice knowledge in another area, but not all of it is held by one person. This is why we have to do some collaboration and group work. Some people work in this area and speak one language, and then there are people working in that area. Initially they are from different worlds, but they should understand each other and then combine together to develop some innovative and creative technology. That gives a technology breakthrough. This is the usual S-curve: if this is all there is, then this technology will saturate here. But if we have a technology breakthrough, then we can make another take-off like this, another S-curve; otherwise we would saturate like this. And then another one, and another one. So the technology continues to develop like this. For instance, if I apply this kind of concept to casting, the solidification of molten steel: this is ingot casting, then continuous casting, strip casting, and then some other casting technology. When ingot casting was the norm, most people were working on ingot casting: how to prevent segregation and those things, how to make the inclusions float up before the steel is fully solidified. That was the high technology at that time. So, ingenuity. To do this, you probably need to have some kind of ingenuity. Do you know who said this? Genius is 1% inspiration, 99% perspiration. Who said that? And what is perspiration? It's sweat, or effort: 99% effort. So do you think all of you are geniuses? You all have at least 1% inspiration. The remaining 99% is effort, and you put effort into your work every day. So all of you should be geniuses, but nobody calls you a genius. Why? There is another principle, called the Pareto principle. Sometimes it's also known as the 80-20 rule. This principle states that for many events, roughly 80% of the effects come from 20% of the causes. What does that mean? Suppose there is a person with 100% of time, energy and effort, and he puts in 100% of his time, energy and effort. 20% of his input, be it time, effort or energy, results in 80% of the achievement; 80% of his time, effort and energy gives only 20% of the achievement. So when you make an achievement, 80% of your achievement actually comes from 20% of your input. The remaining 80% of your input produces only 20% of the outcome. Why? Because of this: for 20% of your time you are doing very active, very concentrated and focused work, but for 80% of the time, this is what you do. You don't focus. You think: oh, I have to play a computer game first, and then I have to do some research, I have to read the paper, oh, it's a headache, I have to go to the toilet, and then one cup of coffee. So you don't concentrate. I don't know about you, but I know my children. OK, there is another case. How about this? 20% of the input produces 80% of the outcome. Another 20%, also 80%. Another 20%, 80%. And in the end he has put in 100%, but this person has achieved 400%. When Edison said 99% perspiration, effort or sweat, he probably meant this kind of effort, this kind of perspiration, not the kind of effort you are putting in now. I don't say this of all of you, but some of you must be geniuses, though I would say not all of you. So when Edison says 99% perspiration, he probably means that kind of effort. Everybody can do this. But 80% of people are actually in this category.
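The 80/20 arithmetic in this passage is easy to make explicit; the split itself is Pareto's rule of thumb, not a measured number.

```python
# Making the lecture's 80/20 arithmetic explicit.
def output_per_effort(output_share, input_share):
    return output_share / input_share

focused   = output_per_effort(0.80, 0.20)   # the concentrated 20% of your time
unfocused = output_per_effort(0.20, 0.80)   # the distracted 80%
print(f"focused hours  : {focused:.1f}x output per unit effort")
print(f"unfocused hours: {unfocused:.2f}x output per unit effort")
print(f"ratio          : {focused / unfocused:.0f}x")   # focused hours are 16x more productive

# The "400% person": every 20% slice of their time delivers 80% of a normal
# person's total achievement, so five such slices give 400%.
print(f"total achievement: {5 * 0.80:.0%}")
```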
This is what we say with a Chinese character, and we know what it means; is "ordinary person" the right English? An ordinary person. Not any GIFT member; you should be like this. This is a bean sprout, and this is a bean tree. Bean sprout and bean tree: both of them come from the same bean. Why does one become a bean sprout and the other a bean tree? It takes a different path, a different way of growing. So if you really want to be like this one, you have to choose this path, not that one. If you choose this one, you will become a bean tree rather than a bean sprout. I don't say which one is better than the other, but you have to be determined. Don't just move back and forth like this, because then you will probably produce some kind of hybrid sort of thing. Now, this may be the last one: the function and the roles of GIFT. GIFT is a very unique kind of unit at the university. You know that every department has its own graduate programme. The materials science and engineering department of this university has its own graduate programme. Mechanical engineering has its own graduate programme. But there are also some different types of graduate programme, like GIFT, and the information technology graduate programme, the wind energy graduate programme. I think there are about 10 different special programmes in this university which have only a graduate programme, not an undergraduate programme. But GIFT is still different from all, or most, of the other graduate schools, because this graduate school offers not only master's but also PhD degrees, specialising in steel. This kind of graduate school is called, I don't know quite how to say it in English, but in Korean we call it jeonmun daehakwon, a professional graduate school. What do you call it? Do you have that kind of graduate programme? We do, but we don't have a special name for it. OK. So, a medical school has only a graduate programme. A law school, only a graduate programme. A business school, only a graduate programme. GIFT is in the same kind of line in terms of organisation and level. So GIFT has its own unique mission and aim. The industry has its production capacity. Let's take POSCO as an example. POSCO produces steel, a number of different kinds of steel, and POSCO also has its own R&D capacity; they have their own R&D research centre, which provides applied technologies to them. They are probably not very much interested in fundamental research, because that is not the way they should go: what they do should find immediate application in steel production. Now GIFT is sitting here, so that it produces embedded technologies and supporting technologies for industry R&D, and it also produces the kind of breakthrough technologies. And, very importantly, GIFT should produce high-calibre, well trained manpower, at either master's or PhD level. So GIFT has to have a very intimate relationship with industry, like the medical school, the law school and the business school. For instance, the medical school has very close interaction with the hospital; in fact, most of the professors at a medical school are medical doctors at the hospital. In a law school, many professors have good experience in legal practice out there before becoming law school professors. And the business school as well. They all have very close interaction with the industry of their own area.
So GIFT should always keep in mind that our output should be useful for the steel industry; otherwise GIFT cannot find its footing, because out there at POSTECH there are many departments that have their own graduate schools. I am sorry for those who are not Korean here — this is addressed particularly to the Chinese students. This is China. These are the two Chinese characters by which Chinese people refer to themselves. This one means the centre, and this one — I don't know exactly how to interpret it — prosperity: the centre of the prosperity of everything. Culture, science, technology, power — China has been the centre of everything, and there are many, many peoples living around it. Unfortunately, they called all these people barbarians. This is the Chinese character for the barbarians living in the northern part of China; this one for the western part, the Mongolians; this one for the Tibetans; this one for the people of the Vietnam area, the southern part of China. Unfortunately, this character was used to indicate the Koreans. And this character is used in China for metal. When you combine the two together — metal and Korean — it means steel made in Korea. I am not joking: this really was a character for steel in China. They don't use this character these days, but if you look it up in the dictionary you can find it, and it will say that this is one of the old characters for steel. So this seems to indicate that our ancestors in Korea — that Korea — used to be quite good at steel making. So you Korean students should be proud of being students at GIFT, and all of you students who came from other countries should be proud of studying in a country which used to be very strong in steel. Thank you. Thank you very much. Thanks so much. I appreciate it very much — or rather, I am very moved by your passion for steel. Thank you so much. I think it was a very good chance to hear not only about research but also about life. If you want to say something to those at GIFT, any question or any comment is quite welcome. Oh, shall we start again? Okay, please. Awesome.
A elegant lecture on the creativity and thought processes needed to nurture the already vibrant steels research and development. A lecture by Professor Hae-Geon Lee of the Graduate Institute of Ferrous Technology, POSTECH, South Korea.
10.5446/21303 (DOI)
This paper is also co-authored by Evangelos Mirlis, who is with me here and who has done most of the work on the emulsions and also on the simulation of colour rendering and the other things I will refer to. To start with: yesterday evening, at the reception, I had a look at the holograms — several holograms by Iñaki. One of them in particular was a blue vase, or rather a white vase with blue decoration. That hologram, as you know, was created by a pseudocolour technique, but it represents what we need to achieve in holography by direct recording. The reason his hologram is so good and has such low noise is that he only used red light — red laser light — and then swelled and shrank the emulsion to create the shorter wavelengths. That creates an image with very low noise, because there has never been any scattered recording in it using blue light, for example. Of course, this process is very difficult, and it also doesn't really have any direct relation to the colours of the object: instead of the blue decoration he could just as well have made a green one, and it would look equally good. So in this paper I am only talking about colour holograms that are actually recorded from an object and reproduced with colour rendering that is as accurate as possible. The goal here is that the human eye should not see a difference between the object put behind a clean piece of glass and the hologram placed next to it; you then ask people to say which one is the object and which is the hologram, and when you get a 50-50 distribution you have reached the goal. OK, so these other holograms, which Iñaki gave a very good presentation on yesterday, are of course a very beautiful technique for artists to work with, but they are not really what we are looking at when we talk about recording holograms of objects. This category of course now also includes all the computer-generated holograms — for example, the ones where you record some type of photographic input, digital or not, into the hologram and then transfer it digitally to a print, like the Geola ones, or completely computer-generated full-colour holograms. Of course, in that case you can't say anything about colour rendering, because there is no original object to compare it with, so that is on the borderline; but it is of course an extremely important application and technique. If you look at the first holograms that were recorded in the 60s: the left one here is the first colour transmission hologram — it was actually a slide, a two-dimensional slide recorded in a hologram — and the one by Lin and his colleagues is one of the examples of colour reflection holograms recorded in the mid 60s, around 1966. These, of course, were just to show that there is a possibility; the right one is recorded with only two colours, red and green. In the beginning there was no material except dichromated gelatin that could record blue — and this is something we discussed before with multi-layer recording. So what Kubota did in 1986 was to combine an Agfa red-sensitive plate for the red record, while the green and blue were recorded in a DCG plate. Sandwiched together they give a fantastic high-quality hologram, and that actually proved that colour holography can really be achieved. But in this case it was a sandwich — again, a very complicated technique compared with doing it in a more direct way. This is probably one of the best colour holograms recorded as of that date.
Slavich then introduced a panchromatic emulsion. It was an experimental emulsion that I got from Sobolev and that Vukicevic and I were working on, and we did a lot of tests to see if we could record a high-quality hologram in a single layer. We used different test targets, but this was also the first object we recorded. As you see, we actually put in a blue background, which is very rare: people who make holograms to prove colour holography always use a red or green background. But we wanted to have blue, and we wanted white in the hologram, to show that we could record these types of images. Gentet in France has developed the Ultimate emulsion, which is also extremely fine-grained. This, though, is an emulsion that you can't buy: he certainly produces monochrome emulsions that he sells, but not really the high-quality colour emulsion, which he uses himself to record holograms like this — also a very high-quality recording. Unfortunately, he tends to avoid blue backgrounds or white backgrounds; he prefers primarily red and green, which of course makes a very nice-looking hologram. So let's now see what we need to obtain a colour hologram. First we need to decide which wavelengths to use, what type of set-up, what type of material, and also what light source is to be used for illumination. Those are the four factors, and all of them are extremely important. Yesterday and on Tuesday we discussed the recording materials a lot, so I am not going to go into that here; I refer to the previous paper by Evangelos. If you look at the laser wavelengths for holograms: most colour holograms, including the ones we make here, are recorded with three wavelengths. But what we did a few years ago was to create a computer program where we actually looked at what the wavelengths should be and how many we need; this was something that Evangelos Mirlis, my PhD student, did. We work with a test target. That is another thing that people in holography seem to avoid, perhaps because they are not sure whether their holograms are good or not; but a test target is definitely what anyone recording colour holograms should choose as the first object, to start with. Here you have saturated colours at the bottom of it, mixed colours, and black and white stripes there. We ran this program with between 3 and 300 wavelengths located somewhere between 400 and 700 nanometres, and what we wanted to see was how many light sources are needed to minimise the error we make for a given patch (a minimal sketch of this kind of calculation follows below). So, for example, if we take this colour here, we measure it in a certain type of light with a spectrophotometer and get the x, y coordinates in a chromaticity diagram; then we do the same thing with the hologram and see if we obtain exactly the same values. If we don't, there is an error, and the eye will see a slightly different colour. Of course, this is nothing peculiar to holography — it is the same in photography; even colour photographs are not at all identical to a test target. You always have problems with colour reproduction. If you look at this graph, which was a result of the simulation, you see that it is an exponentially declining curve: the more wavelengths we use, the smaller the error becomes, but between four and five or so seems to be the optimum for recording. So we need four or five laser wavelengths to minimise the colour rendering error.
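As an aside, here is a minimal sketch of the kind of wavelength-selection calculation described above — not the authors' actual program. It scores a candidate set of laser lines by comparing each test-target patch's chromaticity under the full spectrum with its chromaticity when lit only by those lines; the colour-matching functions and patch reflectances below are random placeholders standing in for measured data.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)              # nm, coarse spectral grid

def xy_chromaticity(spectrum, cmf):
    """CIE xy coordinates of a spectral power distribution."""
    X, Y, Z = (spectrum * cmf).sum(axis=1)
    return np.array([X, Y]) / (X + Y + Z)

def rendering_error(patches, cmf, laser_set):
    """Mean chromaticity error when each patch is seen only through the laser lines."""
    mask = np.isin(wavelengths, laser_set).astype(float)
    errs = []
    for reflectance in patches:
        reference = xy_chromaticity(reflectance, cmf)          # patch under the full spectrum
        reproduced = xy_chromaticity(reflectance * mask, cmf)  # patch lit by the lasers only
        errs.append(np.linalg.norm(reference - reproduced))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
cmf = rng.random((3, wavelengths.size))            # placeholder colour-matching functions
patches = rng.random((24, wavelengths.size))       # placeholder patch reflectance spectra

for lasers in ([460, 530, 650], [440, 490, 530, 650], [440, 480, 530, 590, 650]):
    print(len(lasers), "wavelengths ->", round(rendering_error(patches, cmf, lasers), 4))
```

A real version would of course use the CIE colour-matching functions, measured patch spectra, an illuminant weighting and the actual laser powers, and would search over many candidate wavelength sets rather than three hand-picked ones.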
So three laser wavelengths will never give a perfect recording. It is also interesting to see that the error is not generally the same for all patches. If you look at these two — these graphs here represent corresponding patches in the test target — you see that with three wavelengths we get a very large error for this patch and a very, very small one for this one, for example. The overall error was what we were looking at when we plotted that graph. If you add a fourth wavelength, you see that the overall error is much lower, but we could still have some particular patch that is very badly reproduced. If you go on like this, the error will eventually drop towards zero with an infinite number of wavelengths — or a hundred or so, probably. Here you can see another problem. The left one is a photo taken of the test target illuminated with RGB laser light, and the right one is the corresponding colour hologram. The left one, with these three wavelengths, actually represents what the hologram should really look like, because the hologram only sees these three wavelengths when you record it; but this is itself different from the target. If you just put the target in front of three laser wavelengths, it does not look identical to having, for example, normalised white light or daylight on it. So all these things are problems we need to face. Most of our tests use the Denisyuk recording technique. Of course, it doesn't have to be, because you can do a transfer — you can make a master and copy it into a second hologram — but most of ours use the Denisyuk geometry because, first of all, you get a 180-degree field of view in both dimensions and full parallax. If you want to make very realistic images, that is the way to go, because as soon as the image disappears when you don't see it from every direction, you know that it is not a realistic reproduction of an object. This is from our lab here at the Centre for Modern Optics. We have two rooms. One is the laser room, where we have the three lasers and combine the light from them with dichroic mirrors, as was shown in the other presentation. We have the krypton, the Nd:YAG and the argon lasers; they go through the beam combiner, so first the red goes through, then we add the green and we add the blue, to give white laser light that goes into another room where we do the recording. Here you see the white laser beam going through the hole in the wall, and here, on the other side, we have a huge table on which we record our holograms; you can actually see the elephant test object sitting there, with the white-painted plate above it, in white laser light. Then, of course, a little about computer-generated holograms. It is not possible to generate a Lippmann plate with a computer with the interference pattern that you need, so mostly this is connected either with holographic stereograms or with the technique of generating holograms pixel by pixel and printing them — examples are both what Zebra Imaging is doing and the Geola technique. There, again, you can choose between transmission and reflection, and transmission can also be done with these techniques; but as you know, as soon as you move up and down the colours change, which I don't think is of interest when you want absolutely perfect colour reproduction. That is why reflection holography is being used.
As I have already said — and I am not going to talk about emulsions now, because that was the topic of the previous paper and of other papers at this conference — these are the three potential materials for colour holography: ultra-fine-grain silver halide, photopolymer and dichromated gelatin. Kubota, for example, has done some excellent work, also on panchromatic dichromated gelatin that has been made red-sensitive, and so has Jeff Blyth. Now we come to the light source needed to illuminate the holograms, and this is very critical for a colour hologram, because you need to decide, when you record the hologram, what is going to be used to view it. The colour temperature of the source has an influence on the colour rendering, and the source size of course determines the image resolution, which is not unique to colour holograms — that applies to all holograms. But there is another interesting thing here, the reference angle, which is not so important for a monochrome hologram: we need to hit exactly the right angle that was used for recording the hologram, assuming there is no shrinkage or swelling of the emulsion, to get the right colour reproduction. For example, if you take a colour hologram and replay it at varying angles like this, it goes from orange or red through white and then to green, so you need to be at exactly the angle you used (a small numerical illustration of this angle dependence follows below). What we are looking at now — we talked about it yesterday, and Craig Newswanger also mentioned LEDs and OLEDs and other things — is investigating these types of sources for large-format reflection holograms as well. All of you know what improvements we can get with this: long lifetime, small size, high durability, very little energy used, no infrared or UV output in the beam, no printout effect or heat that will destroy the hologram. And also, if we can actually match the three or four wavelengths — whatever we will be using for colour — exactly in the LED source, we just hit what has been recorded and we reduce all the extra noise that is created with a white halogen light, where all the other parts of the spectrum hit the plate and create noise. So a hologram illuminated with LED wavelengths perfectly matched to the recording will be an extremely nice-looking hologram. As for the people who work on this today — these are just the most important ones; there are, I am sure, others not on this list who are also recording colour holograms more or less consistently. We ourselves, of course, have been doing it for a long time; Colour Holographic in the United Kingdom; Dai Nippon in Japan has done a lot with photopolymer; Geola, as you will hear in the next paper, is enormously involved. We also have a paper from Lund Institute of Technology and Sven-Göran Pettersson a little later today. And we have Ultimate Holography — Gentet — recording holograms in France. Zebra Imaging in the USA has been involved for a long time, producing colour holograms on photopolymer material. And, connected to Geola, XYZ Imaging in Canada is part of the development of all the silver halide printing machines and processing. If we now look at Zebra, they have been working with DuPont photopolymers and probably still are, I think, but they may also start to look at silver halide for some applications. It is based on the work of Michael Klug, who developed this technique some time ago.
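As an aside to the reference-angle discussion above, here is a small numerical illustration (not from the paper itself) using the simplified Bragg condition for a thick reflection hologram. It assumes no emulsion shrinkage or swelling and a nominal gelatin refractive index of 1.63 — both assumptions of mine for the example.

```python
import math

def replay_wavelength(lambda_rec_nm, angle_rec_deg, angle_replay_deg, n=1.63):
    """Peak replay wavelength of a reflection hologram when the lamp angle differs
    from the recording reference angle (angles in air, measured from the normal)."""
    t_rec = math.asin(math.sin(math.radians(angle_rec_deg)) / n)   # refract into the emulsion
    t_rep = math.asin(math.sin(math.radians(angle_replay_deg)) / n)
    return lambda_rec_nm * math.cos(t_rep) / math.cos(t_rec)

# A green component recorded at 532 nm with the reference beam at 45 degrees:
for angle in (30, 45, 56):
    print(f"illuminated at {angle} deg -> peak near {replay_wavelength(532, 45, angle):.0f} nm")
```

With these assumptions the 532 nm component replays near 562 nm when lit at 30 degrees and near 508 nm at 56 degrees — exactly the colour walk-off described above; the red and blue components shift in the same proportion.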
You have seen this in other presentations in San Jose, and papers have been presented on this technique. They normally make tiles of 60 x 60 cm and then put them together to make larger holograms. I don't know whether that has happened, whether that is what they do, or whether they print them directly — it depends on how large a format of photopolymer DuPont can produce. Here are some examples. You have seen that Craig has brought some of these maps, which are extremely good at showing a city view from above in full colour, where you actually have full parallax so that you can move around and look about; you can even see cars of different colours in the car park. Another example was a portrait of Steve Benton which was on display in memory of Steve Benton a couple of years ago in the USA. Colour Holographic in London also make primarily transmission holograms, where they combine the colours. They have made posters for films and so on. This one, I believe, is from a film — I don't know which one — but they have had them in cinemas to promote films, and in many other applications besides. Dai Nippon in Japan has worked exclusively on producing colour holograms on DuPont's panchromatic emulsion. They have two different products: one is called True Image, which is more for decorative holograms, and one is a security image — they are in the security business, making full-colour security holograms on photopolymer. One example of a decorative hologram was this watch from the Franklin Mint, where they incorporated a full-colour hologram of a dragon for a small dragon-collectors' edition. Geola — you will hear about this technique in the next paper. I don't know if you have seen the film here; we also have a room here at the venue where you can come and see it, and you have seen the hologram at the entrance and many of the holograms here, so I am not going to talk much about that. All I want to say is that, again, the material is extremely important, so Geola had to develop a new film with fine grain; it is called Sfera-S and has been developed and is produced in Moscow. That film is the one used for the holograms you see here — all of them are made on this type of film. Now we can talk about the future of colour holography. This is how I see it, and I would also like your input on whether you think I am right or wrong. First of all, we have to find out how many wavelengths we really need. Unfortunately, I have not seen anyone who has made tests with four wavelengths — I have never seen such a hologram, it has only been a simulation — so we may have to go over to four wavelengths if we want better colour reproduction, and eventually five. We also know that if we are using a silver halide, the grains must be of the order of 10 nanometres, so that is something we must continue to work on. The virtual colour image behind these holograms represents the most realistic image of an object that can be recorded today. The only things are that you can get distortions, and blur from the spotlight, which make them not quite right; you may not have perfect colour rendition; and in some cases you can have other problems that make the image not look identical to the object. One thing I have not mentioned, but which was put forward in a paper in Strasbourg, is a computer simulation of adding one wavelength after another to see what effect it has on the speckle. Naturally, the more recordings you make — every wavelength you add gives a slightly different speckle pattern — the more these patterns overlap in different ways, and eventually the speckle is averaged out.
So the large speckle features that you may have seen created on the plate in monochrome holograms you do not see in the colour holograms; the more wavelengths we add, the cleaner the plate stays, free from Newton rings or the other things that often appear in a monochrome, single-wavelength recorded hologram. And of course, if you stick to colour holograms for recording objects, then the external field you see gives the illusion of looking at the real object, because the wavefront actually recreates the light that was scattered from the object during the recording of the colour hologram — when everything works well: the spotlight, the material, the number of wavelengths just right. So, in conclusion, this is a technique being perfected. Directly recorded colour holograms have many applications in display: unique and expensive artefacts, of course, but also commercial advertising, point-of-purchase displays, and jewellery shop windows — where they remove everything in the evening, it would of course be possible to show all the products continuously in the window instead. Also in airports and everywhere else: you don't need to put any expensive items out there, just the holograms, because you are not allowed to touch anything anyhow. So why do you need any objects there when you can give exactly the same illusion to the viewer outside the shop window? And colour computer-generated holograms will of course also have a very large impact on what is going to happen, particularly in the fields of rapid product prototyping, computer art and 3D visualisation, since we can generate images of objects that have not yet been produced but may be developed in the future; you can look at them in full 3D and full colour long before you have produced them, and see whether they look attractive or not. And thank you for your attention. Any questions? Craig. It's a comment, really — a little anecdote from when we first saw Kubota's work in Montreal. A group of us were looking at it in wonder; we were all familiar with the situation, and we were all looking at the hologram. And we noticed that most of the lay persons walking through the exhibit walked by Kubota's hologram without a second thought. They looked at the rainbow holograms and said, oh, this is wonderful, look how strange it is over here — they were enamoured with the artefacts that we try to get rid of. And Kubota's hologram was a doll; it wasn't a hologram of a doll, it was just a doll. That was kind of enlightening at the time. So to say: the better we get at it, the less people will notice the process. Yes — but I mean, that's what you are after, really. I think so. That is what you are after, really. I mean, if you look at acoustic reproduction, nobody will say that the sound reproduction of a concert that you listen to at home is too good, that it sounds too realistic, as if we were sitting in the concert hall, and that it needs to sound like an Edison wax reproduction instead. I don't believe that. Making it identical — that is absolutely the goal of all imaging reproduction techniques, from the beginning of photography onwards; you went on to colour photography and so on. So I think it is just wrong to say that people will not like holograms that are identical to the object. No, no — but I mean, I know people sometimes ask what the purpose is of making a hologram if it is identical to the object. Then you can ask what the purpose is of listening to music if it is identical to sitting in a concert hall and listening to it. It is exactly the same argument.
Anyhow, there was a mark of how the question. Hans, you didn't mention the earlier work done on the selection of the lines. There was a work done by Boris Rufan and 72 when he first used Xenon. Yes, yes. But Paul really looked and very carefully. Both when he was in the Cambridge or Oxford and then with MIT. What lines should be used? And it was a great work done. Yes, I did not. What I did here was pick a few things from the history. If you read my other where I have the history of in other papers, I have all these in the list of references. So they are in other of my papers. Kevin. Just a comment on wavelength selection. I like to suggest that if you pick the right wavelength, I think that three wavelengths might be sufficient and even better than more wavelengths. If you look at the what Thornton, Thornton's paper you will be familiar with of the 70s where he took what's now called the three prime colors, the 455, 146, 10. He found that the color rendering from those three was better than any other set of wavelengths even if you have a continuous spectrum. And sometimes if you add an extra, for example 488, that actually reduces. So it may be if you pick just the right three wavelengths. Yes, but that assuming that you have more broadband wavelengths. Heselink, no, no, wait, Heselink showed that you could with three wavelengths get sometimes it was, and you see from our simulation with three we could get one of these patches completely wrong. And that to reduce all of them, I don't see, we can ask Evangelos if you have any comment on that because you did all the simulations. But anyhow I think we may discuss this later because we have to, I think did you have a question or no, Michel. No, Evangelos you can. I think so. The color is in the mind because when you are looking at things during the day, when the sun is low in the morning on the afternoon, it usually looks much better than when it's high. So I mean the problem of rendering the true color is very complex because you take an object and you don't see him the same in a room with a lamp outside with the sun when it's so cloudy. So it's very complex. So I think you gave a very good abstract book. We could discuss this much longer. Yes of course. Yeah. Problem is the coherence length of each of them is maybe not the same as the next one. So this was the case with Xenon laser of Turcano. The coherence length was not very deep. No, you need also long coherence of course. Yeah, I would think the most obvious killer application is portraiture. So the question is what, how many wavelengths does it take to do good skin? That we have not really looked at. You have seen some, I mean what has been done far like by Sebra, for example, portrait, our portraits, that's what we get. But if you look at a Lipman photograph like my portrait there is a huge difference in color rendering when you have infinite amount of wavelengths like the full spectrum. So you, to rendering the color of the human flesh or the skin is very, very difficult. It's very complex. I think we can only take one more question from. And then we, I want to go over to you. I was very impressed by the excellent result of Seidrkollar Reflection holograms. But I believe we should, in the future work, we should think about the combination of Seidrkollar holograms and the holograms with different lasers. Yes, yes, of course. Anyhow, I think we have to proceed now and there will be more question after David Radcliffe's paper which is related to this. 
So we can take more questions later. Please, David.
The state of the art of colour holography is presented. The laser wavelength selection issue is investigated through computer simulation, showing that more than three wavelengths are needed for accurate colour rendition in holograms. In addition, the recording material is very important for creating high-quality colour holograms. The demands on the material and suitable products currently on the market are covered. The future of colour holography is highly dependent on the availability of improved panchromatic recording materials. Recording colour holograms, either directly or by computer, as well as digital printing of such holograms are mentioned. The light sources for displaying the holograms are important. Small laser diodes as well as powerful white LEDs and OLEDs with very limited source diameters are important for colour holography to improve image quality over today's commonly used halogen lights.
10.5446/21305 (DOI)
Ladies and gentlemen, thank you, Hans. My name is Sven-Göran Pettersson. I am from Lund in Sweden — if you don't know where it is, it is rather close to Copenhagen, in the south of Sweden. I work on holography both as a researcher and as a lecturer; unfortunately, most of the time now goes to lecturing on optics and such things. The outline of my talk is the following. I will say something about history — well, Hans has already said several things, but I will just take a few points. I will say something about the courses we have in Lund, and I will also go into the equipment we use for colour holograms and how we control the exposure. I will show some pictures of the holograms — I also have here some holograms that the students have made. I will go into some teaching of holography too, so there will also be a bit of lecturing, and I will say something about the pseudo-colour holography we do in Lund. Already in 1964, Leith and Upatnieks proposed a way to make colour holograms, but the problem in that case was that they would use three different beams because of the problem of crosstalk, which means a rather complicated set-up. So in 1964 Pennington and Lin instead used the small spectral bandwidth of the material, and in that way they could produce a transmission hologram in colour. Lin also used Denisyuk holograms in 1966 to make colour holograms, and in that case no lasers are needed for the reconstruction. Hans also told us about Kubota, who used dichromated gelatin and Agfa plates — the dichromate for the blue and green and the Agfa plate for the red — because the materials available at that time were not so good for blue light. It was not mentioned that you can also do spatial multiplexing or use a coded reference beam to obtain a colour hologram. And, as some in the audience noted, there was a rather important step forward when Hubel and Solymar used a new material from Ilford and could produce good colour holograms. We also know that Bjelkhagen and Vukicevic used single-beam reflection holograms, Western processing techniques and a Russian emulsion — a very successful combination. Then, in 2000, Gentet presented a new emulsion, a French emulsion, Ultimate, which was very good, with very small crystals. Maybe I can say something about my own experience. My first experiments on colour holograms were in the beginning of the 70s, where we combined red-sensitive 8E75 plates with blue-sensitive 8E56 plates for a transmission hologram. At that time we used a helium-cadmium laser and a helium-neon laser, and of course we had the problem of scattering of the blue light, but the hologram was quite good anyhow. Later, in the 80s and 90s, we also tried to make three colours, using blue and green lasers as well, but it wasn't until we could use the Russian emulsion that we took a very good step forward. One important thing in this context is of course the CIE chromaticity diagram. Hubel and Solymar found that the three points here — 458, 528 and 647 — would be better than the combination mostly used at that time, 488 or 514 together with 633 (a rough numerical comparison of the two gamuts follows below). We used, as I said, helium-cadmium, so we were a bit too far down in the blue. I don't think it is really necessary to cover the whole blue gamut. The courses we have in Lund are for students from nanotechnology, electrical engineering, mechanical engineering and industrial economics; they do a four-hour lab in holography.
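As a brief aside on the wavelength triple mentioned above, the following sketch compares the chromaticity gamut (triangle area in the CIE 1931 xy diagram) spanned by the two laser sets. The xy coordinates are approximate spectral-locus values filled in here for illustration only, and gamut area is only one crude figure of merit — Hubel and Solymar's actual criterion was colour rendering, not area.

```python
def gamut_area(primaries):
    """Area of the triangle spanned by three (x, y) chromaticity points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Approximate CIE 1931 xy coordinates of the laser lines (illustrative values only).
proposed = [(0.146, 0.027), (0.138, 0.811), (0.724, 0.276)]   # ~458, 528, 647 nm
older    = [(0.055, 0.260), (0.038, 0.784), (0.713, 0.287)]   # ~488, 514, 633 nm

print("458/528/647 nm gamut area:", round(gamut_area(proposed), 3))
print("488/514/633 nm gamut area:", round(gamut_area(older), 3))
```

With these rough coordinates the 458/528/647 set spans roughly a 30% larger triangle, mostly because the deeper blue primary pulls the gamut down toward the violet corner of the diagram.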
We also have two separate courses where students can go more deeply into holography, and I can say that last year we made about five to six hundred colour holograms in the set-up we use. So let me say something about the equipment. In fact, I presented it at the 2000 conference in St. Pölten, but there are some modifications. Instead of the helium-cadmium laser we now have, as you can see, a Cobolt blue laser running at 473 nm, and we have a helium-neon laser and also a normal diode-pumped YAG laser. As you can see in this system, to be able to adjust the beams correctly we have two mirrors each for the blue and the red laser, so we can overlap all the beams exactly in the spatial filter. You can also see in the picture that there is a diode laser here as well, and we can bring that diode laser in to test what happens if we use it. You can also see that it is possible to expose larger plates in a separate set-up. Here you can see a picture of the system: to the left the new blue laser, which is now this size instead of something like three metres long; the output is 50 milliwatts instead of 125, but we can cope with that. On the right we have the other part. The blue laser doesn't have to be off the table — it is just that we have not yet had time to move it onto the table. This blue laser is in fact produced in Sweden, by Cobolt, and it is a diode-pumped laser which uses a different crystal from the normal diode-pumped YAG laser. One important part of this set-up is the plate holder we use, because we want all the students to have their own hologram, so we need to make eight holograms in a four-hour session — and of course we cannot concentrate only on colour holography, we also have to discuss interferometry and such things. So we have a special plate holder where we can put in four plates and expose them sequentially: just turning here, you can rotate the system. This is the electronics, and here you can see the plate holder; you load it from above and then rotate it, so you have four different positions. We also have a computer-based exposure control. This is a typical example of the exposure times we use. The plates we use — well, now we use Gentet plates, which are very clear and good — are, as you see, not so sensitive to blue, and that is why the blue gets 42% of the exposure. We use a total exposure of 1500 microjoules per square centimetre, and you see the exposure time is 39 seconds; because we start the different lasers at different times, the total time is still just 39 seconds (a back-of-the-envelope sketch of this bookkeeping follows below). But there is also waiting time, because we rotate the cylinder with the plates, and the waiting time plus the exposure time gives a total time for exposing the plates of about 10 minutes. Then a new group comes in, and we develop all eight plates in the same way. I have tried to use a diode laser instead of the helium-neon laser, because I find that the coherence length is not quite good enough for this depth of object, but we have had several problems with mode hops and contour fringes in the holograms — and this is just one colour. I would be pleased if you could tell me of a good diode laser, not too expensive, that could be used. This is the plate holder for making larger, 20 by 25 cm holograms; I have some holograms over there made by the students, including 20 by 25 centimetre ones. Now I will just say something about teaching holography.
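Before moving on, here is a back-of-the-envelope sketch of the exposure bookkeeping just described: split the total exposure between the lasers according to the plate's relative sensitivity, then turn each share into a shutter time using the irradiance measured at the plate. The 1500 µJ/cm² total and the 42% blue share come from the talk; the red/green split and all irradiance figures are placeholder assumptions of mine, not Lund's measured values.

```python
total_exposure_uJ_cm2 = 1500.0
share = {"red": 0.28, "green": 0.30, "blue": 0.42}              # blue boosted: plate less blue-sensitive
irradiance_uW_cm2 = {"red": 18.0, "green": 25.0, "blue": 20.0}  # assumed irradiance at the plate

for colour, fraction in share.items():
    energy = total_exposure_uJ_cm2 * fraction        # uJ/cm^2 allotted to this laser
    seconds = energy / irradiance_uW_cm2[colour]     # (uJ/cm^2) / (uW/cm^2) = s
    print(f"{colour:5s}: {energy:6.0f} uJ/cm^2 -> {seconds:5.1f} s shutter time")
```

Starting the lasers at different times so that the shorter exposures nest inside the longest one is presumably what lets the whole sequence finish within the 39 seconds quoted above.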
I mean, when you see a hologram, of course you immediately understand this is a three-dimensional image. But why? Well, that is a difficult task to, if you have a student that has no background in wave optics or such things. So then we use different models, a simple model and more and more difficult models. We have a mirror model with moiré pattern, momentary electric field gratings and for some students complex description of wave fronts. Complex description of wave fronts and the reconstruction of that. Well, there is one very simple way to create a three-dimensional image with a depth of, in this case, some tens of meters. It's in full color and it's also in a way a video. So why are we on the conference then? Well, all of you know that it's the recording and the reconstruction of the image. That is, I mean, like a photo you reconstruct the image so you can have something stored. Well, we can use this mirror as a mirror model. We can say that the hologram can be seen as a number of mirrors and at the top you know that an object is, if you have an object in front of mirror you have a three-dimensional image behind and for a distant point you, like A prime, like A here you get a distant image point and for a nearby you get a nearby image point. And so you can say that the hologram acts as a, that you have a lot of small tiny mirrors inside the layer and for instance you can create the waves or the beams from A prime by just having some positions for the mirrors and some other positions or angles for the mirrors. Yeah, you will have a more divergent beam and that gives you the three-dimensional information. Well, we have in fact a real model of this. You can see here is a model hologram with 625 mirrors and there are three screws on all mirrors and so you see on the screen here in front the image created by light source on top of the mirrors and so you can, we can say that in one position we have one image like here and when we change the position of the screen we have another image. All right, we heard in the first day something about Moiré Préton and we know that if we have two waves with a velocity of the light even though they are moving very fast there is a stationary pattern and this is what we call, that is what we record in the hologram. If you have another angle between the beams you get a finer system and well let's now have light on just this hologram. Well, I don't know if you can see but produced from the, I'll put some there but you can see there is a system coming out from the Apollo here. You see it? So this means that if you illuminate the stationary pattern you can get a new wave and of course this was very simple and normally we have spherical waves so if we have a spherical wave and a plane wave we also get some stationary system I mean if it's not coherent this will not be stationary but if they are coupled in a phase you get something like this. So this, if you record that system or the hologram there now I cannot help you really but maybe you then can see that the stationary pattern and the plane wave creates a new wave, a spherical wave coming out from the pattern or from the hologram and of course if we do it the opposite way you have a convergent beam. If you didn't see that clearly I think you see it more easy this way. You can also use momentary electric fields so momentary we have also a program on that. If we have one set up like what you have there to the right from, okay now it doesn't work. Alright then we can go back here. 
Then from one source you get this and from the second you get the other B here and C, you will see a fluctuating electric field but there is always a constant pattern behind and that is what you have at D and which is the hologram. Okay I think we can also of course use the grating model and calculate the gratings and then in this case you can say that the hologram is a grating where the grating constant isn't constant but changing and so you can show that you really get the three dimensional image from that. If we have students that are well aware of wave optics and if we for instance do some experiments in digital holography then of course we need the complex formula of wave construction. I will just say a few words about pseudo-color holograms that we have on some of our courses too and the system that is to the left was introduced and used at the course together with Marley-André Cossette. I don't remember exactly what year it was but we use glass objects in front of the film and this is very good, I mean this was very popular for the people interested in art and you can to the right you can see the slit system and the plate holder where we use film and we just use vacuum to hold it and to the left we have some typical objects placed on a glass plate and I had a visit by Melinda Menning and she made this beautiful hologram on this set up and to the left we have another from an artist Monica Bergre and she also she exposed once with no vacuum and so you have some fringes so she liked to have that in the picture and to the right we have also tried to combine color separated images to reproduce an art piece of Kerinsky and you can see quite well the good color rendering but of course then the viewer can change his position to get his favorite colors. I hope we will improve in our equipment to get even better holograms we think we the students learns a lot on this hologram works and so on and I have you can see some holograms I have some bigger holograms and the holograms there are typical for what the students can make. Right, thank you very much. Senor, we may have some questions. My time was out. Yeah but we have one or two questions. We will ask you a few questions about diet lasers. Maybe being involved for more than 10 years with Professor Pierre Boone well known here we firstly record reflection hologram using diet lasers in 670 tonometers it's experimental production of Philips you are asking about the price it was free. But now we are using in red region of course but in 630 tonometers wavelength it's the same almost as here in Nual laser it's very difficult to have single frequency. Usually in this wavelength you have two well stabilized temperature, but 658 nanometers and 672 nanometers is very cheap. We can turn to tall up the price of 50 milliwatts lasers but you need temperature controller and driver the price of temperature controller is about 1000 error with laser head and temperature stabilization is about 400 degree centigrade. I will give you detailed information because we check all lasers. Yes I think that is interesting for everybody so if you could have something maybe later we can mention that. I would like to suggest you to insert in your very interesting file about history because one period after invasion in Afghanistan is missing and papers about multicolor of Hariharan, Sebo, Serov and Sobolev, Serov, I am signed up and bellhagen. 
I mean, we have around 100 papers on colour holography; if you read one of my papers, every single one really is listed, but in this review of mine and his we just highlight a few. Thank you very much for the presentation — we have to move on.
Several physics courses are given at Lund Institute of Technology. In the basic courses on ray and wave optics, holography has played an important role, both in stimulating the students to curiously investigate the optical world and as an important visualisation of some obvious and interesting parts of the courses. To meet the need for basic laboratory experience, a holographic laboratory has been built based on three holographic tables. One large set-up is designed for making full colour holograms. Normally the students make holograms in the size 4" x 5", but it is also possible to make holograms up to 30 x 40 cm. For students interested in a deeper understanding of holography we have another large table where they can make one-step rainbow holograms on film in the size 30 x 40 cm.
10.5446/21307 (DOI)
Okay, I'm ready. So I'm going to explain some of my work through my home pages. This is Holo Center things. So to articulate what we see, I have some project exhibition at Seoul, Korea last year. So what we call a collection of landscape. So I'm using photography, basically my background is photography. So sometimes I'm using installation with some photographs. Also I'm using hologram somewhere. So those are what I work at last year. And those are photographs on the acrylic box is put together on the wall. And those holograms I made, Danish type 30 by 40, as flip it over, it makes a pseudo-scopic colors. So those are part of work. So it's framed on stainless steels. So. Okay. Okay. No, it's Danish type. So those are all pseudo images. And I done work a couple years ago, so exhibition at Samsung Art Center. Those are 24, it's about 30 by 30 square circle holograms. It's a transmission. It's hold in the hands like a magnifier. So this is some details what I have in here. Also I'm using photography as been for 10 years ago, the memory of the times. This is about one horse, perfect one horse. It's arranged that are gained by myself and then make photographs and print it. Okay. Sorry. So, for, I'm collecting, when I travel, I collect some items or some objects, get together and bring all items in my studio and I got some memory from each object, its rearrangement for myself, so and then make a photograph. So I took about, you know, 4x5 and 8x10 large formats, very detailed. So I have, I'm using some video medium, so this is lecturing point, this is about 80 monitors in the underground and put together, all these waters is rolling. Well this exhibition was, what with Namjoon Paek and Bill Bielar, it's got together, so I was very honored to be exhibit with those artists. And also I have trimming history, so this hologram I've done, it's about 10 years ago, it's a 60x80 Perth hologram, I work with David and then, this model is I found at the Tuania, she's, this model is, she has a long hair, so I explained to what I wanted to make a hologram, she's, she decided, okay I'm gonna make a bold hair, that she go to Babashop and then shaved all of her hair right away, so I really appreciate it. So I did some therapy work with this kind of work, so I print this person's in the box, it's clear, so I have a lot of books, so I surprised the shell is first images, that's the canyon, it's a light impression, that's where I did it, at the Antelope Canyon, this I've done is 20 years ago, when I was a student at the center of California, so I travel this canyon, I take pictures, lots of this, take pictures of this beautiful lights and shapes, so when I was 25 years ago, I travel here, take picture with the large formats, and the magazine of, I don't remember the name of the photography, but it's famous, a popular photography magazine, they published my work with those images, so. I took all 4x5 format with printed on Sivakrom, so I, okay, and then imagination, I've done this hologram at 1993 at Tejon Expo, the scale is 1.1 by 2 meters, 3 channels, rainbow holograms, I work with David, David is here, oh, it's over there, so it was very large scale of a hologram what I've done, so and then it's too slow to loading up. Wow. This was setting for those all things. So 3 channels for each one holograms, I made 3 different masters, all 3 different exposures, and final H2, and then I like to introduce a little bit school labs, so. So, we have, yeah, we have lab in Seoul, little bit. 
Okay, I think this is too slow, so I'm worried about to time, so I have 4 minutes, thank you. And also I did it for project with Korea broadcasting systems, so this, this, we have very popular historical drama, so this is hero of the drama, so I made a hologram of them from the KBS, so I brought some big large format of hologram of this actors, but I will show somewhere. Yeah, so display, so also I have some other project to producing hologram, but you know, as you can take a look at the Holocenter or KR or Holocenter, so you can see some KL. So I have some photographs, I took this photographer at 1991 at Lake Forest College with TJ Chung, so that's what I like to show. This is most beautiful shot with the benton and poladauson, and I have 3 historical persons here. And still Ishii is here, and Yuri Denishu, and then Silver Costains, Silver Costains, and Melisa is here, and also, I'll go a little faster. This one was the presentation of Yuri Denishu at 1991, but she's not in here, but she's at the step and she's staying at the Center for California. I study at Center for California, so I visit to her house and work some, yeah, it was a long time ago. Also John Ferry, so Dr. Markov here. I visited Dr. Markov at 1997, his institute, he offered me to stay one year's work together, very thanks to him. And Mary Herman, she's not in here. Also we have, okay, we had a very good time with the TJ Chung's night boat. And then Amy Litz and his wife. Okay, this is the last one, this is my grand-grandfathers. Thank you. Thank you. Thank you. Especially.
Holography Art: Art gallery, Collection of Landscape, Photo collection, Holography Lab, Light Impression, etc.
10.5446/21309 (DOI)
[The opening of this talk was not intelligibly transcribed.] ...and a few words about why these materials are needed at all. Fine grain, as was already said here, is very much needed for colour holography, because otherwise you cannot record the blue colours. The emulsion is sensitised at 440 nm, 532 nm and 660 nm — the emission wavelengths of the pulsed lasers used in the holographic printers. [A further portion describing the emulsion and the test procedure was not intelligibly transcribed.] For development we use SM-6, because this material was needed mainly for our pulse machines, and as you know only SM-6 works for pulsed lasers — if somebody knows other developers, please tell me, we will try them. After that comes drying: because we are using machine printing we need quick drying, and that is why we used alcohol drying. And what we measured was quite interesting. It appears that this material has very good sensitivity for pulses and rather poor sensitivity for continuous-wave radiation — I don't know why. The diffraction efficiency, on the other hand, is a bit higher for continuous-wave radiation. These are diffuse mirrors, so if we speak about the white-light diffraction efficiency we actually need to add all three values; then for pulse radiation it will be more than 45%, as usual, and for continuous wave it will be even more.
So, as I said, we got some very strange facts, and these facts cannot be explained by the usual photochemistry theory; probably we need to go to quantum theory — when you don't know what to say, you say that it is quantum. But maybe somebody will create a theory for this. I personally think it is very similar to semiconductors that have traps inside, trapping the electrons that are emitted into the conduction band. But of course this needs separate investigation, and really, when the grain size is so small, anything may happen. Now, coming to the conclusions: there is a new material that is available. The material is suitable for pulsed and continuous-wave radiation. The material is produced on an industrial scale as film, not plates — and probably there will be no plates, only film. And this material has good contrast, a low fog level and good sensitivity. So probably it is the best material in the world right now. Thank you for your attention. Questions? [Most of the question-and-answer session was not intelligibly transcribed; only fragments could be recovered.] ...I mean, it is maybe within a minute — a dramatic exponential thing. What is the general nature of it? On one occasion I was attracted to a calculation based upon the silver halide of the emulsion being in small spheres. [The remainder of the discussion was not intelligibly transcribed.]
A new ultra-fine grain silver halide photofilm for pulsed colour holography has been developed. The film is now being manufactured on an industrial basis and is commercially available. The photo-emulsion used has an average silver grain size of 10 nm and is sensitive to emissions at 440 nm, 532 nm and 660 nm – the emission wavelengths of the pulsed lasers commonly used in modern digital holographic printers. In the article we present the basic characteristics of the new material.
10.5446/21315 (DOI)
Good morning, everybody. In a few words, the Swissgram is a combination of different technologies which can be put into one hologram, under one roof, at 3D AG. The question is that in many cases, when you want to combine several holograms, problems appear as to how to combine them correctly — it is not simply an arithmetical sum of different kinds of holograms. The Swissgram is based on the main technologies: classical holography, 2D and 3D holograms, the well-known dot matrix, e-beam, combinations of them, plus it is possible to add different kinds of optical effects and features. At present, 3D AG is able to produce holographic diffraction gratings with spatial frequencies from 100 up to 2500 lines per millimetre with 5% precision (a short numerical illustration of what this range means follows below), and sizes up to 100 by 120 millimetres on a nickel shim; nickel shim thicknesses from 20 microns upwards are possible. The diffraction efficiency is quite high — for example, 26% at 1300 lines per millimetre — with different kinds of profile. Normally it is a symmetrical sinusoidal profile with different depths, and we have an atomic force microscope which allows us to control the depth and to make the depth according to the wishes of our customers. Some customers ask for a special kind of grating with a specified frequency and depth, and then it is possible to control and check the depth of our gratings with the atomic force microscope and to measure the diffraction efficiency. From the measurements we can say that there is very good stability of the spatial frequency and groove depth, with a good sinusoidal profile. The optimal depth of the grating for the next step — I mean, after origination, when we make a recombined grating and, later, after embossing — is between 0.20 and 0.25 microns for frequencies between 500 and 800 lines per millimetre, and we determined that the optimal depth, and hence the optimal diffraction efficiency, depends on the duration of the chemical development; according to the curve, there is a maximum. The 2D hologram: as you know, a 2D hologram is a combination of holographically recorded gratings. On the basis of the 2D hologram we can make two kinds of flip effect, north-south and west-east. We have also developed a technology, based on the 2D hologram, by which fine lines can be extracted from the holographic grating; it allows the creation of a kinematic sequence of guilloche patterns, which can be placed in different parts of the hologram, and the guilloche lines can be obtained in different forms and sizes. The 3D hologram we can produce either from a real object or from a computer-synthesised object. Different depths of the 3D hologram are possible, but normally it is better to combine it with a 2D hologram: the 2D hologram then works as a reference level for the observer's eyes. We manufacture the 3D hologram in two steps: the first is recording on a photographic plate, the H1 step, and then exposure of the hologram reconstructed from H1 onto the photoresist. In this case, for the 3D hologram it is possible to obtain west, zero-degree and east flip effects on the basis of this technology. This is one sample of our 3D holograms — one of the Swiss Omega watches. The well-known dot-matrix hologram: on the basis of a helium-cadmium laser there are two possibilities, with 15 and 25 micron dot diameters, and normally we combine dot-matrix images with the 2D and 3D holograms.
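As a quick sanity check on what that spatial-frequency range means optically, the sketch below evaluates the grating equation at normal incidence, sin(theta) = m·λ·f, for the first order (m = 1) at 633 nm. The wavelength is my choice for illustration; the calculation says nothing about the quoted 26% efficiency, which depends on groove depth and profile rather than on frequency alone.

```python
import math

def first_order_angle_deg(lines_per_mm, wavelength_nm):
    """First-order diffraction angle at normal incidence; NaN if the order is evanescent."""
    s = (wavelength_nm * 1e-9) * (lines_per_mm * 1e3)   # m = 1: sin(theta) = lambda * f
    return math.degrees(math.asin(s)) if s <= 1.0 else float("nan")

for f in (100, 800, 1300, 1500, 2500):
    print(f"{f:4d} lines/mm -> {first_order_angle_deg(f, 633):5.1f} deg")
```

At 2500 lines per millimetre the first order at 633 nm no longer propagates at normal incidence; gratings that fine are used with oblique illumination.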
We use AFM measurements, and we studied the dot matrix images, I mean how the depth of the grating inside a dot depends on the size of the dot, with the atomic force microscope. We determined that in some cases, when the exposure is very long, the photoresist is totally removed because of this overexposure, and it creates something like a hole in the photoresist. Here is a sample of one dot which was measured by the atomic force microscope in this direction, and it is clearly visible that the depth rises from one edge to the other edge. Stereograms: we produce two kinds of stereogram, one of which is a black-and-white stereogram, or a two-colour hologram. When we need to make the stereogram from, let us say, a real object, we take a number of pictures, normally with an angle of two or three degrees between them, and then we put them all together. When we need to make a black-and-white stereogram, we expose the set at a single spatial frequency; in the case when it should be a colour stereogram, we expose at three different spatial frequencies. The holographic Fresnel lens was obtained as a hologram of a microscope objective with a pinhole; it is a very nice effect, customers like the combination with a Fresnel lens, and it is possible to create different kinds of structures with it; here is just one sample of such a structure. The security level of Swissgram can be improved by a hidden feature with different observation angles; normally we have a specified angle, but it depends on the wishes of the customer. We elaborated a technology of animated hidden features, a symmetrical animated feature. We can make coded and uncoded microtext, which means microtext with the grating inside the letters and none outside, or vice versa, and now we can make nanotext with a 5 micron size, also coded and uncoded, with and without grating. We also elaborated a key-and-lock system which allows the hidden feature to be seen using an authentication key.
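For the stereogram capture described above (a sequence of photographs of a real object, roughly two to three degrees apart), the number of views follows directly from the total look-around angle wanted in the finished hologram. A small sketch; the 30 degree total angle is an illustrative assumption, only the 2 to 3 degree step comes from the talk:

```python
import math

total_view_angle_deg = 30.0            # desired look-around range (assumption)
for step_deg in (2.0, 3.0):            # angle between neighbouring shots (from the talk)
    n_views = math.ceil(total_view_angle_deg / step_deg) + 1
    print(f"{step_deg:.0f} degree step -> {n_views} photographs")
```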
At 3D AG we also elaborated a high-speed technology which allows a matte structure to be created on the surface of the hologram; the maximal size is 350 by 500 square millimetres. What is very interesting in this technology is that you can have a library of different kinds of holograms and then place different text and different images according to the graphic proposal. In our company we have the whole chain for the production of holograms, from graphic design and origination through electroforming, with shim sizes up to 1.2 by 1.2 square metres, to recombination with a step-and-repeat machine with sizes up to 700 by 800 square millimetres. Some kinds of holograms are very difficult to recombine: for example, when you have a 3D background it is very difficult to stitch this kind of structure well, and we elaborated a special stitching technology for such wallpaper-type images. Finally, we have embossing machines, narrow web, 180 millimetres, and we can emboss and produce different kinds of stickers and hot-stamping foil. In many cases it depends on the wishes of the customer, and we can interrupt the process at different stages; for example, some customers want to have just the original shim, some of them need the recombined shim, and some want the Swissgram as a sticker application. At the moment we have made holograms for approximately 60 different banknotes, and you can see these banknotes on our website. One of the latest achievements is the Canadian dollars: we have already made the 50, 20, 10 and 5 dollar notes with a holographic stripe. I know that the 50 and 20 are already used in Canada, and I think the 10 and 5 dollar notes with the hologram will appear this year or next year. Another example is the Swiss driver's licence, which has a hologram from 3D AG inside. We also use holograms for cheques, for promotion, and for protection of high-value products and brands, and 3D AG has a large library of more than 300 different patterns for packaging. That's all, thank you for your attention. Thank you very much, Yuri, there are some questions. Can you explain how the key-and-lock system works? I can show you, I have a sample of it, I just forgot the key. It is one image, and then you have a key on a transparency, and when you put them together at a specified position you can see the hidden feature which is in the image. Thank you very much.
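The step-and-repeat recombination stage mentioned here can be illustrated with simple arithmetic: the 100 by 120 mm maximum origination size quoted earlier, tiled across the 700 by 800 mm step-and-repeat area. A rough sketch that ignores margins and tile orientation, both of which would matter in practice:

```python
tile_w, tile_h = 100, 120      # max origination size, mm (from the talk)
shim_w, shim_h = 700, 800      # step-and-repeat working area, mm (from the talk)

across = shim_w // tile_w      # tiles fitting across the shim
down = shim_h // tile_h        # tiles fitting down the shim
print(f"{across} x {down} = {across * down} tiles per recombined shim (no margins)")
```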
3D AG has elaborated a new high level optical security element, which is easily recognizable and identifiable due to the combination of different origination technologies: mostly classical holography, dot matrix, e-beam, combination of these and various optical effects and features. SWISSGRAM™ is not just a simple arithmetical sum of different technologies and effects. It is a combination of new sophisticated methods, which qualitatively improves the security level of holograms. SWISSGRAM™ is based on the following main technologies: 2D, 3D, dot matrix and e-beam.
10.5446/21317 (DOI)
Okay, well, that work, that mirror project, will be one where I collaborate with just one person and make a hologram mirror just for them. Each mirror will be entirely different, and they won't ever be seen in public; they are strictly to be seen by one person, over and over again, throughout their life. But this project that I'm going to talk about now, which is in progress, is, I suppose, unique compared to anything I've done, because it's dedicated, as you can see, to all those who share the holographic faith. This is a pre-visualization of one little tiny part of it. I'll start by qualifying that statement with a little anecdote before I read my paper. When I was at MIT, and Steve Benton was my co-PhD supervisor, I wanted to make a hologram similar to one I'd seen in the MIT Museum, made by Charles Vest, of the heat plumes rising up from a wire, and I wanted to know how to do it. So Steve loaned me a book out of his library by Charles Vest on interferometry. It had apparently been a book that Charles had given to Steve, and when I opened the front cover, inside it said, to Steve, for keeping the faith. So that qualifies what I mean by faith. My interest in developing a hologram along the lines of what most people expect as the subject of a hologram, a little princess, began in 2001 when I painted a self-portrait for the Portia Geach Memorial Award in Sydney entitled Dr Dawson and Daughter after Forbidden Planet. The painting, oil on canvas and retroreflective glass beads, depicts me in the pose of the Magdalen by Piero della Francesca at the Duomo in Arezzo, but painted over black so as to be a silhouette, somewhat like the famous Caspar David Friedrich Wanderer over the Mist. Instead of the view beyond the wanderer being of interest, the cape forms the background to a light-emitting painted image of a hologram; it's like a painting of a hologram, a painting of the hologram I'm making now. And in front of my cape you see an image of my own daughter playing with her puppy. The painting refers directly to the moment in the science-fiction film Forbidden Planet where Dr. Edward Morbius demonstrates, to the amazement of the onlooking space voyagers, his ability to project an image of his daughter Altara from his mind, and I quote from the film, because my daughter is alive in my brain from microsecond to microsecond, while I manipulate it. And this is the Friedrich painting. I won't be showing any other original works, for copyright reasons, but this just gives you the idea of the underpinning light arrangement. So in the intervening years the idea changed its form from painting to hologram; however, my daughter has remained the subject. The Legend of the True Hologram has now taken the form of an artist's book, which is not like an ordinary book at all. The book is about the intertwining of the filmic representations of holograms, the development of actual hologram types such as the embossed rainbow hologram and holographic video, and the long history of the cult of lighting art, which enables a particular kind of legend to emerge for each beholder. The pictorial styles characteristic of special effects seen in films such as Star Wars, I Robot, Total Recall, Minority Report, Logan's Run, Star Trek and Forbidden Planet are combined with specific mosaic gilding and jewelling effects from Santa Prassede in Rome, Santa Maria Maggiore in Rome, the Baptistry of St.
John in Florence, Piero della Francesca's Madonna del Parto at Monterchi, Simone Martini's Annunciation in the Uffizi Gallery in Florence, and so on. The central figure is a classically draped figure who appears standing in a geometric interior. The visual arrangement of the work draws on two historic artworks: Piero della Francesca's fresco cycle in Arezzo, The Legend of the True Cross (I'm sure most of you have seen that magnificent fresco series), and also the jewelled and gold enamel work of the Pala d'Oro at San Marco in Venice. So the book cover of my book will be a gigantic metal shim, about this big by this big, which is the usual size of my previous interior holograms. The subject of this paper is really the ideas which underpin this work, so it isn't a technical talk; it's more a talk about what ideas are contributing to the work, because I wanted to speak to you, and in a way the work is dedicated to you, to let you know that this is what I'm working on now and probably will be for the next few years, until we meet up again. It might be finished by then, I'm not exactly sure. So this is how I imagine it. On the cover there are going to be 15 major square images, each surrounded by a crochet border. This is St John's Baptistry in Florence, which is one of the major pictorial inspirations for this work. Each of the little pictures in my holographic work is going to be surrounded by a crochet frame. This is a picture of the computer graphic room which I've made; you will notice that the mosaic you saw earlier, which I made over a couple of years, was scanned and put on the walls of this room. I made the frame by drawing the geometry of the floor pattern of the building onto linen and then improvising a series of crochet stitches, with increases and decreases, around the concave and convex shapes. In each scene the crochet frame is going to be rotated to display a plan view matched to the orientation of the room in which the figure is standing, with respect to the holographic plate for that particular scene. This aspect of the work is derived from the pictorial style frequently employed in action video games, where the picture space displays both a first-person point of view and various inserts which add additional spatial information and viewpoints. So you can see that frame there: if we're looking at the figure from this orientation, then the frame will be that way, and if we're looking from this way, the floor plan will be rotated. Onto the crocheted frame will go several hundred jewels, all made from holograms donated by companies and individuals from around the world. In the pre-visualization, which I'm working on now, the pearls are cut from optical holograms of a plaster bust of Mozart's smooth wavy hair, as this classical subject, the plaster bust, being highly reflective and highly stable, remains a mainstay of optical holograms. This homage has a personal resonance, as the world's largest hologram at the time, of the marble statue of the Venus de Milo, made at the Laboratoire de Physique Optique in Besançon in 1976, was the catalyst to my becoming artist in residence there. The jewels are made from combinations of digital patterns and analogue holograms of the sections of frosted Coke bottles that come close to the surface, and of the headlights and air guards of Harley-Davidsons. Others of the jewels have random star patterns mounted on security hexagons.
The extreme precision and geometry of the security holograms, which include texts such as original, genuine and authentic, is broken down by collaging them in a rather rough manner, and this integrates their exquisite precision with the hand-made crochet, which is full of flaws. The brightness of these holograms, from which the jewels of the frame are made, is also much greater than that of the scenes. Cinematic conventions. The format of The Legend of the True Hologram is very much like most historic narrative cycles, where we see the same characters struggle through a series of events, each of which takes place in a different location and time. However, to indicate that this is a narrative legend of light in a three-dimensional place, every one of these scenes is actually set in exactly the same location, and the character is fixed in one pose. The narrative stems primarily from the use of cinematic conventions of camera framing to create a story for this one static pose in a static space. The Hollywood conventions of placing the figure or item of interest one third or two thirds into the frame, and the use of uneven close-up and open space on either side of the main character, designed to make the main character seem alternately downhearted and trapped or free and open, operate exactly as they do in static shots in films. In addition, stock film shots such as close-ups, cropping and aerial views are used. Such framing is unfamiliar in horizontal-parallax-only hologram subjects, which generally occupy the centre of the frame. Both the location and the character of my hologram are composites. The central figure in all the panels of this narrative cycle is a composite of two people, made from three laser scans: the body of a life-drawing model (the person pictured at the moment is actually my daughter) and the head of my daughter, dressed in classical drapery. This is the life-drawing model, with a hairstyle and pose reminiscent both of classical sculpture and of the princess from Star Wars, and this is the costume I made, here on a dummy at home. The shape of the building in which the figure is standing draws on the geometric composition of the dome of Sant'Ivo alla Sapienza by Borromini in Rome, which enables the reflection of light and imagery from all possible combinations of planar, concave and convex surfaces. The floor plan is derived from the diagrammatic representation of the interference of two spherical wavefronts, and the mosaic on the wall I made from pastel and aluminium tesserae, which was scanned and applied to the walls of the virtual room. The origins of the image in science fiction. My approach to the pictorial style of the character begins with the best-known use of a hologram in a film, the image of Princess Leia in Star Wars Episode IV, A New Hope. This film is probably responsible for generating the idea of what a hologram is in the mind of the general public more than any real hologram or any other film, due to its widespread availability and the sheer number of people who have seen it. The hologram effect is blue monochrome and shows a miniaturized, blue, translucent, classically draped young female with a light ray attached to her; so too is the central character of my story. The role of the hologram in this film is also quite specific: the message is a read-only memory, not a real-time means of delivering a message, as the holograms are in subsequent episodes of Star Wars.
Specifically, it is a person-to-person message from Princess Leia Organa, which she addresses to the keeper of the Force through her body language and the accompanying sound message, help me, Obi-Wan Kenobi, you're my only hope. Well, my character also carries a message for the viewer. The antecedent of this scene in Star Wars can be traced to a scene in Forbidden Planet: Dr. Edward Morbius projects an image of his daughter, who is miniature, her figure predominantly white, standing on a small disc of light. Significantly, the image is a projection from Dr. Morbius's brain, in real time, with no need for any kind of physical device, whereas in Star Wars the message is a read-only memory originating from some kind of mechanical device and projected by an expanding light ray. I'm now going to show images of seven of my holograms from the book cover, three of which are on exhibition as pre-visualizations at the castle. So you'll be seeing only seven of the 15 pre-visualized images, as I'm going to speak about some of the ideas underpinning the other images, which have not yet been made. In Minority Report, directed by Steven Spielberg, there is a combination of both the idea of a mental projection and a mechanical capturing of an image. In the temple, the real-time visualizations of the precogs, who lie in warm water while electrodes transmit real-time images from their minds regarding their visions of the future, are somehow captured and moved around by Tom Cruise with gloves tipped with little blue lights. He sorts through these mental images in order to track down what is going to happen in the future. But also in Minority Report there is a kind of read-only-memory version, a kind of holographic home movie, where you see Tom Cruise look at old movies that look like holograms of his wife and his missing son. These projections have overtones of earlier visualizations of holograms: in Minority Report there seems to be a screen behind, with a cut-out hole where the figure seems to burst through the picture plane, and there is a kind of rippling back as the image speaks, which seems to generate a type of excitement, so that there is a very active, flickering part of the image. One thing common to almost all representations of holograms in films is a comparison of the virtual representation of a person with the real representation, or the real experience, of a real person. This is the aspect of holograms which was picked up by theorists such as Umberto Eco and Jean Baudrillard, and the notions of simulacra and the hyperreal are in turn reiterated in films such as Logan's Run, which is probably the only film that uses real holograms. At the conclusion of this film the hero's hologram is interrogated rather than the hero himself, and the film's punchline is that there is no sanctuary, no kind of escape from the world and the laws of Logan's Run. The film shows the hologram made by the Multiplex Company in 1976 rotated in reverse, producing real-time special effects, since the face is distorted as it is mirrored in time. In a similar way, duplication and replication surround the holographic character of the Doctor in the Star Trek Voyager episode 36, Signs of Life, where the Doctor attempts to save a woman's life by making a hologram of her and treating her independently of her holographic image.
Developing this theme further, of the hologram as a separate being from the real being: in Total Recall, Arnold Schwarzenegger calls upon his hologram, which appears as a kind of mirror image of himself and which enables him to escape. The hologram, which cannot be harmed or killed, has all the visual qualities of Schwarzenegger but can appear in various locations in three dimensions apart from himself. In Total Recall, just as in many films with holograms, there is an important read-only-memory message left behind, and a character who acts upon it. In the case of Total Recall, the scene with the hologram is prefaced by a conversation which Schwarzenegger has with a pre-recorded image of himself on a laptop. Do you remember, he has the towel wrapped around his head and he's looking at his own image, which says: if you're looking at this, you're not who you think you are. So this idea of a message which is left and which will have an impact sometime in the future is a theme running through films with holograms, and it is probably shown to its fullest extent in the film I Robot, in which the actions of the hero, Will Smith, are determined almost entirely by his conversation with a read-only-memory message of a hologram, a hologram which is left just prior to the death of his dear friend. And like many short and mysterious messages, this one leaves a margin for interpretation by Smith, yet is quite impenetrable to other people. Again it is a single person-to-person read-only-memory message, in which the character of the holographic figure and the nature of the dialogue are not simply the text, but the way in which the text is delivered, with all the nuances that can be generated only through the experience of the presence of the person and the shared friendship, and this is what enables the main character to act. So you can see I'm leading to an argument here about the kind of presence that a person has in a hologram, the particular kind of presence that can enable a kind of continuing dialogue between real and virtual people. The visual representations of holograms are diverse: for example, in Total Recall the hologram is just slightly transparent but in full colour, whereas in I Robot it is completely flat yet appears out in space, like a speaking, larger-than-life, two-dimensional screen, but also in full colour. Some of these holograms are mute, others speak, and some of them demonstrate that they are different from reality by having some kind of optical glitch, shimmer or sparkle, or by having missing parts of their image, so that they are characterized by a lack of perfection and a lack of smoothness in their rendering; they often have scan lines running through them, and in general they have a kind of transparency. The creation of a holographic character can be a way of projecting ourselves towards a person remote from us in time and space, in order to call upon them to act in a particular way, and what is common to these representations is the sense that there is an empathy with the person represented in the image, which is reinforced by the energy of light. The engaging quality of light itself is frequently referred to by artists who produce holograms, and in the section on lasers and holographic art in Art of the Electronic Age, Frank Popper comments: in order to build an historically legitimate aesthetic of holography, one has to detach oneself from dependence upon the photographic paradigm so important in understanding computer art.
The persistence of this paradigm reveals itself especially in the overemphasized third dimension of holography. Taking a different viewpoint, one can postulate the self-creating power of light as the creative foundation of the holographic medium. Concepts from science fiction novels. Many of the concepts underpinning the depiction of holographic characters in films are derived from science fiction novels. The classic 1968 novel Do Androids Dream of Electric Sheep? by Philip K. Dick, an absolute must by the way, which is the basis of Blade Runner, describes acts of empathy with a virtual character image in a compelling way, and I quote: He crossed the living room to the black empathy box. When he turned it on, the usual faint smell of negative ions surged from the power supply. He breathed it eagerly, already buoyed up. The cathode-ray tube glowed like an imitation, feeble TV image; a collage formed, made of apparently random colours, trails and configurations which, until the handles were grasped, amounted to nothing. So, taking a deep breath to steady himself, he grasped the twin handles. The visual image congealed; he saw at once a famous landscape, the old brown barren ascent, with tufts of dried-out bone-like weeds poking slanted into a dim and sunless sky. One single figure, more or less human in form, toiled its way up the hillside: an elderly man wearing a dull featureless robe, covering as meagre as if it had been snatched from the sky. His own feet now scraped, sought purchase, among the familiar loose stones; he had crossed over in the usual perplexing fashion; physical merging, accompanied by mental and spiritual identification, with Wilbur Mercer had reoccurred. End of quote. In Neuromancer, 1984, William Gibson furthers the potential of the relationship between the read-only memory known as the Flatline and a living person. The hero, Case, is guided by the ROM of his deceased friend, and I quote: He turned on the tensor beside the Hosaka; the crisp circle of light fell directly on the Flatline's construct. He slotted some ice, connected the construct and jacked in. It was exactly the sensation of someone reading over his shoulder. He coughed. Dix? McCoy? That you, man? His throat was tight. Hey, bro, said a directionless voice. It's Case, man. Remember? Miami, joeboy, quick study. What's the last thing you remember before I spoke to you, Dix? Nothing. Hang on, he disconnected the construct. The presence was gone. He reconnected it. Dix? Who am I? You got me hung, Jack, who the fuck are you? Case, your buddy, partner. What's happening, man? Good question. Remember being here a second ago? No. Know how a ROM personality matrix works? Sure, bro, it's firmware, a firmware construct. So I jack it into the bank I'm using, I can give it sequential, real-time memory? Guess so, said the construct. There is a parallel between these fictional examples of the interaction of the living and the read-only-memory remains of people, and the appreciation of historic artworks which depict figures and stories long past to contemporary viewers through artworks which spatially encompass and interact with the environment in a direct way. The element which enables the transference of the immediate circumstance of viewing into the static composition is reflective media, which is either used in totality, as in gilded mosaics, or embedded within matte surfaces, as in the gilded halos in frescoes.
Though each of these techniques gives rise to a highly intriguing arrangement with the viewer as a direct experience, some of these interlacings of reflective with matte imagery do not really transpose well to a hologram. The walls themselves, in demonstrating the effect of multiple reflections of the figure and light ray within the space, as though the room were completely enclosed rather than being intersected by the hologram plane, void the effect of the reflection of the viewer's space, which is then reintroduced through the highly reflective metallic surface of the entire hologram in nickel. And intentionally there is the use in these images of the full dynamic range, which of necessity diminishes the diffraction efficiency of the image. The dark and mysterious zones of the interior and the bright glints of the gilding and particles are integral to creating an atmosphere of legend. These less bright major hologram panels are framed by brilliant embossed holograms, in the way that the Pala d'Oro narrative scenes are dull but framed by exquisite jewels. In the case of the Pala d'Oro, the size of the jewels is the same as the size of the people's heads in the enamel work, so at this minute scale something quite incredible can happen: there is an analogy that the person is in fact a jewel, a shining light. In each of these images there is a reminiscence of mosaics, discrete image units, usually particles within the three-dimensional spatial volume, which vary in scale relative to the overall image size and image resolution. In the aerial-view hologram there is a light ray which is solid, and in some cases the particles are horizontal, rotational or vertical in orientation. There are two examples which focus on the mandorla drapery, the clothing being a kind of light field around the figure. In the first example the particles are very fine and vertical in orientation and the density of the gilding is like rain (that is one of the pre-visualizations in the exhibition), and they are more densely clustered around the figure. The second example has a point of view at the edge of the light ray attached to the figure, and the size of the units approaches that of a wall mosaic, alluding to the preference at various periods in art for light to illuminate a figure from a particular distance and angle. Conclusion. Since the scene in Forbidden Planet where Dr. Morbius projected a holographic image of his daughter from his mind, special-effects representations of holographic images in film have established a genre of their own, quite independent of actual types of holograms. This genre of fictional holograms holds two important strands: firstly, the pictorial qualities, which have often been compared with the real qualities of holograms; but secondly, and much more interestingly I think, the roles and characteristics which are now routinely associated with holographic characters, and the empathy that they have with real people. Thank you. Paula has got some of the biggest ideas in art holography, hasn't she? Does anyone have any questions? Okay. Okay, now I have the booming voice. Yeah, the Logan's Run hologram was not the only hologram in a movie; there was The Man Who Fell to Earth; there was a hologram of his family on another planet, and it was a multiplex hologram, and at one point he rolls it out and says, this is where I was. That is fantastic. Part of the reason for me giving this paper now, before the work is finished, is that I really want to gather all that information, so thank you so much.
If anyone knows of anything related to this topic, I would be so grateful if you would email it to me. That would be just fantastic. Great question. In case I forget to email: The Time Machine, the recent version, has got a hologram that is found in a cave and is a walking, talking person with a message about what had happened, and it is exactly an example of the empathetic friend and of a message that has to be represented. Thank you. That's great. Also, what is the very first one? I think, from the research we did when we were working on those holograms at Multiplex, we found that probably one of the first film pieces of the modern era was THX 1138, and that was probably the first of the contemporary films to have had a hologram character, who drove a car and did everything. Fantastic. The other thing I would like to do eventually, when I show this book, is to organize an exhibition that has those holograms, or the ones associated with the films. So if anyone has information about where those holograms are, or who made them, or where I could even see them or record them on film, I'd be very grateful to find out about that as well. Yes, David. I don't know why I didn't think of this earlier, but in terms of the empathetic friend: Quantum Leap, the hologram that continually follows him back in time but can't interact, yet has some piece of information that is going to change the story. So he's a plot-device hologram, but he fulfils the same role of the friend with a message that is going to be pivotal in some way. The old guy who's the hologram in Quantum Leap. So good, right? Yes. Okay, so I guess we should move on. So, ladies and gentlemen, Paula Dawson. Thank you.
My interest in developing a hologram along the lines of what most people expect as the subject of a hologram… a little Princess … began in 2001 when I painted a self-portrait for the Portia Geach Memorial Award, S.H. Irvin Gallery, Sydney, entitled Dr Dawson and Daughter after Forbidden Planet. The painting (oil on canvas and retroreflective glass beads), depicts me in a pose of the Magdalen by Piero della Francesca (fresco; Duomo, Arezzo) but painted over black so as to be a silhouette somewhat like Casper David Friedrich’s Wanderer over the Mist. Instead of the view beyond the wanderer being of interest, the cape forms the background to a light-emitting painted image of a hologram showing my daughter playing with her puppy. The painting refers directly to the moment in Forbidden Planet when Dr. Edward Morbius demonstrates to the amazement of the onlooking space voyagers his ability to project an image of his daughter Altara from his mind "… because my daughter is alive in my brain from microsecond to microsecond, while I manipulate it…"
10.5446/21318 (DOI)
Looks like I'm too often on the podium. This time I will not speed up, and I will try to make it clear what we do. There is a growing demand, with all understanding and respect for holography, a growing demand for real-time systems, systems where an operator or a viewer can see a 3D image immediately, not necessarily in a full volume, but at least in a restricted volume, and that is what I'm going to talk about. It sounds a bit strange to me that after almost 30 years in holography I'm shifting away. I must say it is a recognition of reality and of a problem: not everything can be done in holography, hopefully one day it will, but there is a strong demand and a lot of requirements, and everybody who saw Star Trek wants to have a holodeck. Almost impossible, but the dream still exists; I would say we are trying to bring it one step closer. I will introduce the system; there is not much theory, just a description of what the system does and what we do in that respect. There is a variety of methods for displaying 3D images without goggles, and that is an essential part. You are all familiar with goggle-based systems; unfortunately it is not always useful or possible for an operator to wear goggles or a helmet-mounted or head-mounted display. It restricts motion, and specifically in entertainment, when you think about thousands of people wearing goggles, with all the problems of symmetry and so on, it is not really convenient, although such systems exist and people like them, the gaming industry for instance, so there are applications. All of these systems project images and have to multiplex in time to show a sequence of images, and that has traditional limitations which we are trying to overcome. What is needed? As I said before, there are glasses-based systems, head-mounted displays, shutter glasses, single-user systems and autostereoscopic systems, and we recently had this discussion about what autostereoscopic actually means. It is becoming clear that whatever is not stereoscopic, even holography, belongs to the autostereoscopic category; everything stereoscopic is goggle-based or a system that you mount on your head, and the next generation of displays should avoid this. So what kinds of displays do we currently have? We have holographic displays; that is the most advanced approach, and unfortunately the most difficult to realize in real time. A practical real-time solution is unlikely to be developed soon because of the processing, simply the amount of information you have to process and transfer; reducing the information and the redundancy of the system immediately introduces additional problems related to noise, flickering effects and so on. So I have a hard time seeing it in my lifetime, although my life might be short, who knows. Volumetric displays are very effective and efficient, but they have deficiencies, latency effects and problems that are difficult to solve. I saw these barrier-type autostereoscopic systems at a demo; they are really good, but you still see strips of light in the images, and it gives you the feeling that it is unrealistic. So we come to autostereoscopic displays, and they have been demonstrated very well. They have their own limitations, like the viewing zone, but they do produce a realistically volumetric image; it is like looking at a glasses-based system, but without any goggles. That is why we looked at that and went in this direction. What we currently have in the state of the art of 3D systems: stereographic systems with a lenticular barrier, which allow you to look around and are compact.
They have limited interaction, so you can't really work with the image, which is very often necessary, and that is the main area for a real-time 3D display. Toshiba produced a modulated-grating LCD with about 30 viewing zones, which seems to be large enough; however, it also has limited interaction and a very high cost. Actuality Systems, I just told you about their limitation: although very good, their display has to spin, and once you go to high speed you can't really make a big system, so it has a very limited-size viewing area. So we came to our autostereoscopic display system, which currently produces six viewing zones. We are looking at 16 at the end of phase 2 of this development, and it should give you the possibility to look around the object, a large head box, and scalability. The problem is that it is a bit voluminous, I would say, in the sense of its physical dimensions. What we use is temporal multiplexing. It is a fairly simple technique developed at Cambridge by Adrian Travis, one of the co-authors. What it does: it has an LCD, or any kind of shutter, that produces a sequence of images for the left and right eye at such a speed that you just don't resolve them. We can resolve something like 30 Hz; above that, we don't. This one runs at about 1000 Hz, so you really don't see it; it scans very fast. That has an enormous advantage, because it can produce a high-resolution image at high speed. The disadvantage is the high frame rate and the corresponding bandwidth, but that is solvable with upcoming technology; we will talk about that a little later. Principle of operation: as I said before, it produces a multiplicity of slits that scan in front of you in the viewing zone. Let me see if I can show it. So that is what was established using an LCD. What we are trying to do at this moment is to use a better technology: a DMD with a high pixel count and a high frame rate, graphics GPUs, whose rendering rate and ability have recently increased, and high-bandwidth computers; that is the new element that came along recently. How it works: here is, I hope it will show something, an optical train; at this point we look at the slit at the bottom viewing position, and that is the slit for the red light. Now what you need to do is place red, green and blue sequentially, or better simultaneously (for demo purposes they go sequentially), and scan them so that you see the sequence of lateral positions. That is where your head box is. That is what the system demonstrates: as you can see, it scans from position to position, and that is how it works. It is a fairly straightforward system. As a result, you will see the sequence of these images, and it produces a good effect. At the moment we are using a shutter-based approach in which an LCD from Cambridge University is used as the high-speed shutter, an LCD shutter with high speed and high throughput, meaning that you don't have many losses in the system. What we have in phase one, which is pretty short and should be very effective: a DMD projector that goes through the optical train and a high-speed shutter in this area, and then through a Fresnel lens creates the image. Basically you will see the viewing zones here, coloured differently, and the real image of the object behind the viewing lens. Let me see if I can show what that sequence looks like. It is fairly simple: you have the left-eye position periodically, and then a jump to the right-eye position, which you don't resolve in time.
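The reason the scanning slit is not seen to flicker follows from the numbers given: a shutter running near 1000 Hz shared among a handful of viewing zones still refreshes each zone well above the roughly 30 Hz flicker-fusion figure mentioned. A small sketch of that budget, with the zone counts taken from the talk:

```python
SHUTTER_RATE_HZ = 1000.0       # slit/shutter rate quoted in the talk
FLICKER_FUSION_HZ = 30.0       # rough perceptual threshold mentioned in the talk

for zones in (4, 6, 16):       # zone counts discussed in the talk
    per_zone_hz = SHUTTER_RATE_HZ / zones
    verdict = "no visible flicker" if per_zone_hz >= FLICKER_FUSION_HZ else "flicker likely"
    print(f"{zones:2d} zones -> {per_zone_hz:6.1f} Hz per zone ({verdict})")
```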
The left-eye and right-eye images seem to come simultaneously; however, because the eye perceives them in a flash-like way at very high speed, you just don't resolve them. You see the two images coming to your eyes and you can move your head to see slightly around the object. That is with, say, three viewing zones. You can go to a higher level; you can have more zones, like three or four images, and they go sequentially, obviously. Actually, the advantage is that you may use it not just for a single operator; you may have three, four or five operators, depending on the number of slits you have. In addition, you can also build a pie-like system, where you have a sequence of monitors and every operator has their own system. Now, I would like to admit it is not a touch-like system; you can't really go and touch it. You will see the image projected in front of your eye, so you don't need any goggles; the system produces this kind of image in front of you, but you can't touch it. It is not a touch screen so far, which is also under development in Japan, I believe. It produces good look-around capability with no pseudoscopic effect, because here you have to be careful: the image is produced, as in holography, both real and pseudoscopic, so if you overlay the two of them simultaneously, confusion arises about what the reality is when you see the image. Shutter synchronization: that is where the core of the technology is. You need to synchronize the operation of the shutter with the operation of the DMD and produce these images sequentially. It takes a high computation rate, it takes synchronization, and that is the core of the system. There are several issues, and shutter synchronization is one of the most important. And then, of course, there is how the system looks. What we are currently using in the design is an LED projector; you may use LEDs of different colours, and I'll talk about that a little later. It produces the illumination that goes through the optical train and a Fresnel lens, synthesizing the image somewhere in this area. Where it comes from: the LED illuminates the DMD through the optical train, the LED is controlled synchronously with the DMD, and this then controls the shutter system; it is a loop established inside, in the heart of the system, which produces these sequences of shuttered images in the observation plane. The DMD is the central core of the system. Currently existing devices run at about 10 kHz; the new DMD 3000 allows about 16 kHz, and that should be enough to produce 16 viewing zones at high speed and in full colour. Just don't forget that you produce these images sequentially, so you really need high speed and full synchronization. Very fortunately, such devices are on the market; we can get them and buy them, and they produce very good contrast. Modulated light source: you need to flash this light at high speed, and again, very fortunately, LEDs of high brightness with a 5 ns rise time, which is very fast, can simply be obtained from the market. The LEDs currently come in three different lines, red, blue and green, producing a very good white in the centre, so colour balance is available; that is not a big issue, it is just a matter of illumination, and the scheme in the bottom right corner shows that with the right sequence of pulse durations you can get the right colour.
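The claim that a roughly 16 kHz DMD is enough for 16 viewing zones in full colour can be sanity-checked with a frame budget. A minimal sketch; the 60 Hz per-zone refresh and the idea that grey levels come from binary bit-planes are assumptions about how such colour-sequential DMD systems are commonly driven, not statements from the talk:

```python
def bitplanes_per_subframe(binary_rate_hz, zones, colours=3, refresh_hz=60):
    # Binary DMD frames left for grey levels once the rate is shared among
    # viewing zones, colour fields and the per-zone refresh.
    return binary_rate_hz / (zones * colours * refresh_hz)

for rate_hz, zones in ((10_000, 6), (16_000, 16), (16_000, 32)):
    planes = bitplanes_per_subframe(rate_hz, zones)
    print(f"{rate_hz / 1000:.0f} kHz DMD, {zones:2d} zones -> "
          f"{planes:.1f} binary planes per colour sub-frame")
```

On these assumptions, 32 zones leave fewer than three binary planes per colour sub-frame, which matches the bandwidth limitation raised later in the question session.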
The brightness is good enough to produce about 150 nits with an NTSC colour gamut, so that is also available. We did come up with an optical design, in the third generation, where we used just simple, commercially available optical lenses, not a special design, and we could produce a very good field of view, with some aberrations at the edges, but that is just the first step. And here is an image rendering in the display: you see the sequence of images, which changes both in time and in perspective. I understand it is difficult to look at this image, but that is a rotation of a satellite, and you see slightly different views in the sequence. It might be better here. That is the first generation of the design, the first step, when we just took the commercial lenses and the display and put them together; the Fresnel lens sits here, and the viewer should be here, so you will see this. It is a tabletop, small, compact system that produces about four to six viewing zones, depending on the mode of operation. And here you see the realization: the real image at the top, then the viewing zones, and the image at a different location and position; that is basically how you would see the image in a full-colour system. And this is a sample of these images: you see that they are slightly shifted for the left and right eye, producing the volumetric effect. That is basically the breadboard, and here is the performance: the resolution is about 800 by 600, with an image size of roughly 10 centimetres by 10 centimetres and an optimal viewing distance of about 84 centimetres, so it has parameters typical for this kind of system, a VGA video connector and synchronization through VESA control. And that is a perspective view: basically what you see here is the perspective design of the system, where you have a light engine here that illuminates the DMD-based spatial light modulator, goes through the relay lens and the rear projection lens to project the image into the viewing zone; that is the field lens here, and you see the field-of-view strips through which you will observe this field-lens image. As I said, we have two different approaches, and that is a bit unfortunate in one respect. The point display: it is a bit ironic, and I should step back a little. In 1989 we had a big display conference, when Steve came and reported on his electro-holography. As many of you know, electro-holography became for Steve the second big thing after the rainbow hologram, and he put a lot of effort into promoting and developing it. Not everything worked as he wanted, but it was big progress. There was some disagreement, and then in 1995, when Denisyuk visited me, at a time when I was setting up a group in Bogotá in Colombia, we discussed display issues. I said that I had some problems with Steve's approach to electro-holography, and I was very open with Steve about that, and I said that I had this idea for a point-like imaging system. Denisyuk jumped at me and said, no, no, no, that's my idea. So we had a little fight, and then we wrote a paper together. It is a bit ironic that now I work with the Air Force, and at this stage I am not allowed to describe this system completely. So the system was born when I was not yet in the US, but now there are some restrictions.
But in perspective, the main difference is that we will have a screen in this system, and the role of the screen is to combine these aspectograms, or aspect points, special points that exist in different areas, in different domains of the pseudo-image, and to transfer this image to the viewer. This small demo should tell you how it works: they scan through this screen, producing the images both in time and in space, and every point, or every strip, of the screen produces slightly different images, and that is how it should come into reality. We are currently working on a demo of this system; hopefully the demo should be accomplished in half a year, so we will see what happens, and I might be able to report more later. I know that Yuri was working on this with his group independently; there was a point where we couldn't exchange much information, but at least at the early stage we put a paper together. In conclusion, we built a DMD-based 3D vision system, a 3D display with full-colour imaging, and the DMD definitely has advantages over other SLMs and projection systems. We did the optical design, we built a tabletop system, and we are currently working both on phase 2 of this DMD-based projection system and on the aspect- and point-hologram-based system too. So that's basically it. If you have any questions. APPLAUSE Maybe we have a few questions. At the back. Yes, so I can disconnect. If you want to make more viewing zones, what about the brightness of the image? That's a good question. We are looking at 16; that is the maximum reasonably possible from the point of view of angle, because you are restricted by two factors: the angle of view, how far you can go, and the brightness. Estimates show that we can go to 16 reliably, and to 32 with some decline; I didn't put the figures and numbers here, because everybody complains that too much maths is not what people like at conferences now. But 32 is still valuable. The problem is that when you go to 32 you are really limited by the bandwidth of the system currently, because even with a 16 kHz DMD there are still limitations on the number of viewing zones you can produce. But as for LED brightness, don't forget that you have a fairly narrow angle; well, not very narrow, it's enough to see, but the diffuser that you have in the system is a narrow-band diffuser, so it's fine. Any further questions? Do you need so many slits in the complete system? Isn't it possible to reproduce this with, what do you have here, one slit, and the slit runs across quickly? Yeah, but what about, could it be four, four and four? You can do that, but that is your fourth viewing zone. If you would like to produce 16 viewing zones, this slit should occupy 32 positions, because you need to produce the left and right eye sequentially, and you should repeat it for 16 viewers. Yeah, but the viewers are always stationary, and it's not necessary to have this large angle there. It depends on what you would like to do, that is very true, but again it depends on what you would like to do. You may have 16 viewing zones from the same position, but if you would really like to have realism, then you need the possibility to look around; that means there is a head box and you have to look at it from a different position, which tells you that the slit should move to a new position. Yeah, but if you move, can you see any... Flicker? Yeah, flicker or... You shouldn't. It's a fast system.
You have very high speed, so that is really good; at least for four zones I didn't see any flicker at all. I think it's important to put it in context: this display over here, which is a different type of display, obviously a holographic display, has 640 effective light boxes. Correct. So you're talking about a real-time display which at the moment, using current technology, could give 16 light boxes. Correct. But what do you think, perhaps, of making 640? Well, yes, let me put it this way; it's a very relevant and very good question, because we discussed that aspect: what is needed for industry, and for a variety of industries, I'm not talking just about the military and defence systems, but for defence systems in particular. We know the situation now, we know how many errors have been made; you know the fact that Canadian soldiers were injured by accident, and this is because of the limitations of 3D vision. You need real time; within the operating time of this kind of system there is virtually no other possibility to produce that kind of realism. That is why. Now, you may look at it differently, too. You may say, okay, I have a second-generation hologram, not a dot-recorded one but a real one, and that has a pretty broad angle of view, if properly done of course, and it is bidirectional, not just a single-direction perspective; yes, it gives a stereoscopic effect in both directions. However, that one also has a limitation in brightness. This one brings a lot of brightness, because in every slit you have a narrow direction. So it is a kind of compromise: whenever you need something in real time, then the bandwidth... it's a product, it's always a product of the bandwidth and the resolution. Absolutely. Thank you very much, Vladimir. We'll go to the next presentation now.
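The closing remark that performance is always a product of bandwidth and resolution can be made concrete by comparing raw pixel rates for the 16 zones discussed here and the 640 light boxes of the static holographic display mentioned in the question. A rough sketch; the 800 by 600 resolution is the breadboard figure from the talk, while the 60 Hz refresh and 24-bit colour are assumptions:

```python
def raw_rate_gbit_s(zones, width=800, height=600, refresh_hz=60, bits_per_px=24):
    # Raw (uncompressed) pixel rate scales linearly with the zone count.
    return zones * width * height * refresh_hz * bits_per_px / 1e9

for zones in (16, 640):
    print(f"{zones:3d} zones -> {raw_rate_gbit_s(zones):7.1f} Gbit/s raw")
```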
A variety of applications, such as entertainment, medical, design and engineering, can significantly benefit from the development of a goggles-free 3D imaging system. Although a number of such systems (spatially multiplexed, volumetric, autostereoscopic and electro-holography) have been suggested, practically examined and modeled, their practical implementation is far from complete, mostly due to unsatisfactory imaging quality. Recent technological advances, both with hardware and data processing software, open a new perspective in the development of advanced non-goggles based 3D displays. Approaches of special interest include temporal multiplexing and point-aspects, as they offer a goggles-free solution to 3D imaging. In this paper, we discuss the initial results in the development of 3D displays with an improved image quality and increased refresh rate based on the new concepts, namely, temporal multiplexing and point-aspect. The paper presents a thorough review of the current state-of-the-art of these two techniques and a perspective in their further practical engineering and realization.
10.5446/21276 (DOI)
All right, the first thing I'd like to do is give you an invitation. The Butler Institute of American Art is in the Midwest of America, halfway between Chicago and New York City. It's a rather august organization, a well thought of museum, and it has a collection from the colonial days of America up to the very present. They put in a seven-million-dollar high-tech wing a few years ago, and since they've opened it we've had exhibitions from Patrick Boyd, Eduardo Kac, Shu-Min Lin, Andy Pepper, Ikuo has shown there, Ana Maria Nicholson, Sally Weber is preparing a show there, and George Dyens is thinking of having a show there. So you are invited; it's a very nice space, and you'll be looking at the space in just a few minutes. My show was put up in 2004, and the impetus for the show was to clear the deck of a lot of holograms that I had sitting around for a number of years, take a look at them, get them out in the air, get them framed and shown, as well as to use a lot of the commercial equipment that I knew was not going to be available to me for very long. Chromagem was a company I started in '82; we had a very nice run of doing commercial works for a lot of major companies, and I knew we were going to be folding, and there were just phenomenal capabilities that weren't being used. The show was called Presence, and this was the opening piece, a four-by-six-foot digital print. To enter the gallery, which had a light trap, you walked right up to a wall, so you had to get close to this digital print. I liked this image because it had a presence about it, but it still had a mystery about it: you weren't sure if this was an insect, an animal, a plant, an alien. So it had that mystery, and it also washed you with some very pleasant and uplifting colors. If you turned in one direction, you saw this reading from the Books of Bokonon, which goes something like: a drunkard sleeping in Central Park, a lion hunter in the jungle dark, and a British queen (that line was cut off at the top there), all fit together in the same machine; nice, nice, very nice, nice, nice, very nice, so many different people in the same device. And that comes from, what's his name, I just forgot his name, Cat's Cradle by Kurt Vonnegut. Within that novel he has made up a fictitious religion that two guys invent when they get stuck on an island and the people there need a little uplifting, so they invent this religion, and this is one of its sacred texts, so you can imagine it being said in a Caribbean accent. If you go the other direction, you run into another of the sayings of Bokonon: lion got to hunt, bird got to fly, man got to ask himself, why, why, why? Lion got to sleep, bird got to land, man got to tell himself, I understand. And I commissioned a local calligrapher, my 10-year-old daughter, to illustrate it for me. As you come into the space, this might freeze, it has frozen every time. Oh, it's okay. Here's the space at large, a quick profile of the space. No, it's not. At any rate, you see the space behind it. This is the first piece in the show. I don't know why this is; we had it showing full size. All right. This is the first piece that you come to if you go to the right, and it's a Hindu blessing, Sarvesham Svasthir Bhavatu, Sarvesham Mangalam Bhavatu, so it's a blessing to the people who come into the space: may all beings dwell in happiness, may all beings dwell in peace, may all beings attain oneness, may all beings attain auspiciousness. So you notice it's not human-centric, it's being-centric.
And this particular hologram is a combination of techniques; that's terrible with that framing up there. The text is in dot matrix, so that's the tool that I won't have readily available to me anymore. And the colors, the sort of aurora borealis colors that appear behind it, are a second-generation hologram of the dot matrix. People don't do that: a dot matrix hologram is a two-dimensional picture, just animated, but when you make it the object of another hologram, you get these washes of colors that go back and forth. So the show had various groupings. When I went through my holographic collection and saw what was there, well, there were a whole bunch of nature studies. Here are four of those nature studies; they're called Life Sprites, and you're seeing a metal shim curved in the background, and it's a stereogram, and above it is a UV-curable ink pool of a dot matrix pattern that I designed, so the two of them are over top of each other. It looks a little bit like that. I think you can see the leaves in the background; the stereogram was a very dense foliage of day lilies in the woods, so the camera was panned right in front of them, and they looked very solid and real, and you felt like you were in the world of the plants and the animals, it was so close. You couldn't see the top or the end, so it implied a space that went off in both directions. And this is the one. Oops. Wow, I'm having a hard time getting this particular video to show, unfortunately; we'll go on to the next one. The video is much better than that. This is one of the other nature studies; it's called Beetle Buddy. I was making a video of a Japanese beetle, which is only a couple of centimeters across, and there you can see the animal's arms. While I looked in the camera, there was this little creature that popped its head up, wiggled a little bit, and then popped its head back down. So there he is; he's got those little cootie antennas that came up. And I showed this large, about two-and-a-half-by-four-foot digital print off to the side, and you can see that, as well as the not very good color on that particular stereogram. This is a portrait of a wasp, in color, and that came out really well, and there was a portrait of a beetle in there. To do it, I taped a macro lens on top of a home video camera, so it had an amazingly horrible, really tiny depth of field, but it gave that lush blurring of colors that was pretty good. This nature study is called Duet. I was struck by the similarity of these two forms, a dead leaf and a dead fish, and how they look like two dancers leaping up. They're embedded in resin, so you can actually record some of the light rays: when we usually take a hologram of an object on a table, the air is gone, there's no remembrance of the air, the temperature, the little bit of fog in the air, but when it is embedded in resin you can actually see, at the top of the piece here, some of the rays of light. In some of the other resin-embedded pieces, the objects are more like ball bearings and things embedded in resin, and you can get a whole lot of play of light traveling and dispersing through the piece. And it looked like that. It's all put on a piece of cloth with bronzing powder sprinkled on the cloth. This is a two-color reflection hologram; it was done in about '86 on a wooden table in my basement. I don't recommend it.
So it's the triethanolamine technique that we were talking about earlier. Rauschenberg is a famous, now there's a few portraits. Rauschenberg is a famous American painter and he was at the Butler Institute of Art. He worked with Merce Cunningham and John Cage in the 50s. One of his first works was this goat, a live goat that was painted and put between a tire and his collaging. So he created quite an uproar when he first started in the art world. And the director of the museum, so you know we're getting people like Rauschenberg to the museum, so that'll give you an idea of what the museum, the quality of it. When he first, he came to the museum and I was asked to do something for him, but we didn't want to tell him what. So he was signing posters. So I knelt down in front of him and videotaped it and he, you know, it wasn't right, it wasn't right. You talk about getting the queen to do the right thing. You're making a hologram of somebody who doesn't know you're doing that. But at one particular moment he did smile and he said, oh you're getting close enough to see all my wrinkles. And that was the two or three seconds that went into the hologram. Now around him, all this mishmash of stuff that you're seeing are some of the various pieces that he has made. So it becomes kind of like an eye spy book. I'm trying to get the offs button for that. Where you can look in and see. So for instance right here you see the goat piece right here. And over here during the course of the video this element rotates and over here another element rotates. He made large plexiglass circles that were motorized back in the 60s. And he did plexiglass drawings and collages and one would turn against another. They were about four feet in diameter. So here's one of those pieces and it turns. And there's a little video footage of him in one of the early 1950s performance pieces in which he's holding a woman horizontally while somebody else holds her by her feet and they're on roller skates with parachutes going around in circles. That's what they did on Saturday night in New York City. Paul Jenkins is another famous artist. He's American but he's been living in France and actually has been acclaimed a French national treasure. He did the very, very large canvases of color washes. They're very beautiful. And he was given a lifetime achievement award and this was it. He made a collage, a photographic collage. Old man thinks of young man, young man thinks of old man. And he did the photography for it. It was him and in the insert right in the forehead was him as a young child and vice versa. So I simply took those two photographs and collaged them together holographically one in front of the other. You could just see the young man face here. It doesn't come out too well. And they're separated by about an inch or two and we gave this to him at a dinner. There's a huge exhibition of his work. The Butler actually has three buildings. The one building is brand new and has a very large sculpture garden. And the whole entire place was filled with Paul Jenkins' work. This is a triptych of my daughter when she was very young. This is again UV curables. The last one or two were metal shims. This here is a very young child, an older child. The one in the middle there came from this digital footage. I was capturing the footage and the camera was going. The heads were getting dirty. I thought, oh gosh, I really need this. But then when I looked at the computer, it had so beautifully framed her eyes and mouth. 
And these little squares appeared and disappeared as you go back and forth. I don't have a really good footage of the hologram itself, but it was such a lovely effect. I kept it. That's just a couple of motions of her as a tiny child with these yellow toys reflecting light in the bright sunlight. So it's very nice. This is the one that you're seeing here was from three digital photographs that were morphed in between it against a UV pool. This is actually a commercial tool. It's a UV curable ink gang up. That is to say it was a small dot matrix hologram and a special, very expensive machine. It took the six inch blocks and repeated them to a four by four foot square. This is the pattern that's used on Colgate toothpaste, a scot up perfume, chic shaving creams, as well as some other products. So it's a very long and laborious process that involves everybody at the ChromaGem Labs. And this piece was literally pulled out of the garbage. Because speaking of things having to be perfect, if it's on a box of Colgate, if you could see any dot on it, they don't like it. So, you know, most of it's going to be covered by ink, but it had to be perfect and it was checked by a number of companies. And if anything was wrong with it, it could be 10, 20, $50,000 worth of reshooting. So before it left our lab, it had to be perfect and they were being thrown away one after another. And I just couldn't stand it anymore. They were in fact being rolled in the garbage pan and the light was hitting it. You know, I go, oh, God. And the next day it would be in the trash. And so I got permission to use some of these in an art show. And that sort of looks like close up. And I call them convolutions. And... Oops, oops, oops. Is that any better? Yeah. There you go. So it was motorized in a motor that took about a minute and a half to turn around. I later learned that the average time a museum goer looks at a work of art is 30 seconds. So it was much too slow. Nobody knew it was even moving. Three seconds. Three seconds. It's gone down, see? The tension thing. But, you know, it's three seconds, but every once in a while you get somebody who just stops. Who just stops and just looks. So that's the moment you're waiting for. Oh, don't do this. There we go. This is called tabletop thing. It's another one of the abstract pieces. It's just Mylar from one of the mass produced things for packaging we had. And I had extras of this. The thing is, is that there's a glass ball in it. It's set on a nice table with the final version velvet underneath it. And there's a metal ball on it. So this is a prototype of some pieces where the holograms are embossed foil and they're set low. And then the metal balls are animated with magnets underneath. So they crease in and change what's going on. So some of the magnets are rotating and there's little planets with one magnet rotating around another magnet. So these ball bearings are pushing into the holographic grating. This piece is one of the other, it's called Life Fountains. It's again a metal shim and in front of it is a UV curable pool of the same dot matrix pattern. And that's what you live for, huh? And it goes like this. So when it was right, I also put a mirror underneath it to get as much light going on it as possible. There were two lights on it, the more the merrier with this piece. You've got an effect like that. You've got, you had a very interesting depth and jewel like quality. Here we go. This is another one of the convolutions. 
This is the very same pattern as you saw before, but two of them were put together and then they were twisted and bent. I have a video of that, but we'll skip that. Then we come to the series called Gilding the Lilies, which is a bunch of religious imagery that I revisited holographically. This is the dancing Shiva, Nataraj, which represents both the creation and simultaneous destruction of the universe. I won't go into the symbology. You can read about it. It's a very fearsome image. It's both friendly and fierce. The hand pointing down means do not fear, but at the same time everything is being destroyed as well as created. No two things brought together in time and space in the manifest universe will remain together in time and space. It had 18 different slits. It was actually shot quite a while ago. There are 18 different rainbow slits. It's silver halide. This is the museum version as opposed to the kind that was published on Holosphere once upon a time. You get two complete color palettes. One of the color palettes is very warm and friendly. One is much more fearsome, reddish, and glowing. There's a far-field diffraction pattern that pops in front of the image. Many of the rainbow slits are split in half, so you get one color in one eye and one color in the other eye. You get retinal rivalry and you get color mixing in the brain. It's a very dense hologram. This is a passage from the Qur'an and it's this lovely calligraphy of that passage. The calligraphy nests the whole sentence together in one form. I should know the name of the person who did the calligraphy. I don't have it with me right now. The passage means to give without expecting a return. It was simply set in front of an HOE. That's the effect you got, similar to what many other artists have done with holographic optical elements. You get the lovely color combinations going on. The Star of David is also the Star of David and the symbol for the Hindu chakra of the heart. I didn't know it, but when I showed this to somebody, they said that the intersecting lines make it yet another chakra symbol. This Buddha comes from not too far away from here. It doesn't come from Japan or Korea. It comes from the British Museum where they allow you to bring cameras in. There's the hologram. The statue was generally of a matte finish, but the eyes were polished and shiny. The curators at the British Museum shone lights from the bottom so that when you see the eyes, these lights walk across. That little slit of the 3 quarter closed eyes was lit. It was very beautiful. I did one hologram of the whole statue and another hologram of just the eyes. There's the video. This is a UV curable. In this particular case, it's exposed and hung with just monofilament. It's not much more than a piece of paper. If you touched it, you would doll it and the hologram would disappear a little bit because there was no protection. As a matter of fact, you can see some of that there. I looked through this hologram to see a large photo mural which was just on the other side. This gives you an idea of what the other side of the room looked like. You can see there are standing pieces and hanging pieces and the mobiles. It also gives you an idea of what the butler space is like. It's perfect for holography. It has no ambient light coming in. It has a light lock as you come in. It has a drop ceiling where you can put diode lasers in at any point. Most of all, it has a staff that's very eager to help you and do whatever you want. 
They build walls and all kinds of things. This is the photo mural of the deep space scene from the Hubble telescope which is captured the first light of the universe. When the universe lit up, the universe did not emit light in the visible spectrum after the Big Bang. It took some time. These are the first stars that ever lit the universe. These are the proto-galaxies, three-quarters of the lifetime of the universe. It took the Hubble telescope, I don't know what it was, 300 photographs over a course of months aimed at the blackest part of space. They thought there would be nothing so they would get the stars the farthest away. That's what it looked like. That's what the most emptiest part of space looked like. These are the very first galaxies to light up. That photo mural was about five by six and it played a part in three holograms. You looked through the Buddha's eyes to see that mural. You looked through the hand. This is a shadowgram of a hand. Again, UV curable ink. The hand moved a bit. You would just see the fingers crossing over each other. You would be looking back through to deep space. This piece is in the center of the room. It is a... One more minute? I can't do it. That's it in white light. It's 24 inches. This is it with laser light. It's dimmed. It looked like below sea. The sea was above and it looked like you were under the sea. This was the version you saw the first and then you saw the other version later. That's a movie I've got. We won't look at. What is that? Okay. This is another piece, a laser viewable version of it. It's called Star Slush. We'll skip that. Come on. This is the last one I wanted to show you. This is Krishna on the papal leaf. Here we see the baby Krishna sucking his toe while floating on a leaf in the cosmic sea. In Hindu mythology, the image represents the moment when the universe dissolves back upon itself. The feet representing movement either into manifestation or back from manifestation or back into the unmanifest. It is reminiscent of the western symbol of the snake eating its tail. In the myth, it is the cosmic night. When the hologram is properly lit with green-yag laser, the effect is like moonlight upon still water. In one version, the hologram is lit with a combination of white light and yag light so that the Krishna itself is white and then the green of it all shows around. Actually, that particular version got wrecked in electroplating as it's want to happen. This version has the Krishna sticking out about 18 inches. Thank you very much. Thank you.
The following is a review of holographic art works spanning several different themes. Each theme could be an exhibition in its own right. In 2004 I staged an exhibit at the Butler Institute of American Art which included several pieces from each group. It was my intention to get some of these works out and into the public and see what they looked like before expanding on a given theme. Many of them had remained unframed and unviewed for years. Some were new. The themes addressed were; Nature Studies, Portraits, Abstracts and Gilded Lilies-religious art holographically visited.
10.5446/21278 (DOI)
Good morning, ladies and gentlemen. I would like to talk to you about advances in holographic replication with the structure I've called Aztec. I realize some of you here have had some acquaintance with Aztec, but this talk is directed to those who have never ever heard of it. So what do we have out there today? Basically two types: surface relief structures that are replicated mechanically, and volume reflective structures that are replicated optically. Let's look at the larger group, which as you already heard is the predominant method of replicating holograms. Surface relief structures can be divided into rainbow holograms and diffraction grating structures. Rainbow holograms: rainbow colors, no single colors, three-dimensional, horizontal parallax only, viewable with lighting in one direction. And these we know from the familiar Visa dove and the Mastercard globe. Diffraction grating patterns: also rainbow colors, no single colors, mostly 2D, also 2D and 3D. Very complex patterns viewable from many angles. And here are some examples taken from a recent book called Optical Document Security. How are the surface relief structures made? Well, I think we're all pretty much familiar with the general process, but let's just review it. The off-axis Leith-Upatnieks configuration: reference and object light are on the same side of the recording surface. Made like this, object and reference, interference fringes perpendicular to the surface. The recording medium is photoresist. So when the development takes place, you're left with a surface that looks like a sine wave. And to replicate it, you form a nickel master from the photoresist. From the nickel, you can press the pattern into plastic. And then the plastic is metallized. And that's what we normally see as an embossed hologram. What about volume structures? Behavior is totally different. Full parallax with a single color uses a Denisyuk configuration. Object and reference beams are on the opposite sides. It's optical instead of mechanical replication. So here's how they're made. Reference and object on opposite sides. Interference fringes parallel to the surface. And reconstruction: reference light reflects off of the semi-transparent layers, usually photographic emulsion, dichromated gelatin or photopolymer. You get coherent reconstruction. No surface relief here whatsoever. The only way this can be replicated is optically. And generally, it's like you have photopolymer reeled off across a cylinder, which has a master hologram on it. And you actually make holograms on the production line. This is a process that is inherently more expensive than embossing. And here's an example of what you see. Full parallax, single color. And we're looking above and below and to the right and to the left of this particular image. How do we improve upon conventional holograms? We have a very strong hint in nature's surface relief volume reflection hologram, the Morpho butterfly. Here's the Morpho rhetenor. Beautiful blue iridescent color. There is no color pigment in this wing. Here's another species, the Morpho absoloni. A slightly different color. Also, no color pigment here. How does that color arise? Here, by the way, is just a display case of Morphos. It's ubiquitous in the Caribbean. This is from the Caribbean island of Aruba. We did micrographs of the wing structure. And as you can see, at low magnification, it looks like a linear grating. But as the magnification increases, you see there's a complexity to the wing structure.
And if we look more closely at the wing structure, we can see that it has this Christmas tree-like structure with little parallel veins extending out on either side. These veins are spaced a halfway-length apart for the light that is seen in reflection. And white light comes in from the top surface. White light comes in here, reflects off of each of these veins coherently, and all the other colors are absorbed. Can this be duplicated in photoresist? Well, the answer is yes, with the following modifications. Replace the undercut structure with a step structure. Achieve brightness by coating the step structure with a highly-reflecting metal like aluminum. How do we make it? We combine the off-axis and volume-recording geometries. By the way, the name Aztec is an acronym for diazo-footer-resist technology. But it also refers to the final structure that we get when we do this recording. And the way we make it is to combine the two methods that we just saw. We make in the same photoresist medium the volume-grading, which gives parallel fringes. And we have the so-called opening-grading. I'll describe that term later, which is off-axis grading, which gives fringes that are perpendicular to the surface. So we get two sets of interference fringes perpendicular to each other. Sorry. I seem to be missing a slide. Well, I'm sorry. So the opening structure opens up the surface to developer edgint. And one way of doing this is with three coherent aims, producing a honeycomb structure. And then if we put the steps into the structure, we get what I'm calling a single-color photonic crystal. We get single colors in the zero order, and we get diffractive colors dispersed off at angles away from the normal. If we record in narrow millimeter, centimeter-wide strips with slightly different step heights, we get what I call a color stripe grading. And here's a spectral scan of several of those stripes. It's also possible to not only use three beams to record the opening-grading, but to use five beams. And we get a structure kind of like this. And if we magnify it at normal incidence, we see many parallel planes. And looking at it at 60 degrees, we see very well-defined terraces. And again, many well-defined terraces here as well. And we can also use seven beams. Incidentally, the use of even beams leads to objectionable moray patterns. The contrast ratio from the high to the low exposure area is proportional to the square of the number of beams. So in this case, 49 to 1. In the previous case, 25 to 1 with three beams, it's 9 to 1. And here's the one with seven beams. We can also have a cell with polydispersed liquid crystal introduced. And we can change the voltage with a polydispersed liquid crystal. The index of refraction changes linearly. And therefore, the color that's seen in reflection is changed. And we can also isolate certain regions of the top surface to be tuned to red, blue, or green. What about the Aztec hologram? In this case, we may find that the off-axis color spectra is objectionable. So we want to reduce that. So we have an object beam and two reference beams. One reference is on the same side. The second reference is on the opposite side. So with a diffraction efficiency, then the off-axis minimizes and the volume maximizes like shown here. The surface diffraction efficiency varies as a Bessel function. The volume efficiency varies as a hyperbolic function. But the steps have to be extremely well defined in order for that efficiency to be high. 
If the steps are only moderately defined, it drops. If the steps are barely defined, it drops even more. When we make the hologram, we do it as a so-called symmetric construction, where the interference fringes are actually curved, and the development leads to slight curves in the structure. But the structure still shows the characteristic steps. And here are some single color images made with varying step heights. The blue image has a step height of 121 nanometers. The image on the right, with a wavelength of 600 nanometers, has a step height of 182 nanometers. And generally speaking, the steps here are anywhere from about five to ten steps. So this is at least an order of magnitude larger than your typical embossed hologram. We can also make an asymmetric profile, where the so-called opening fringes come in at a very large angle. And we're left with an asymmetric configuration which actually leads to very high efficiencies. Here's a typical set of efficiency curves for a five-level structure. By tilting this from plus one to minus one, we can actually get different colors, as shown here. We can also, if we go to as many as ten levels, get an envelope with many different merit curves underneath. If many narrow resonances are closely spaced in wavelength, a single-level blaze forms an envelope, and the result is a kind of rainbow full parallax hologram. And this curve is based on scalar theory, but for an overall pitch of, in this case, three microns, it follows the full vector theory very closely. And here is an example of a blazed hologram embossed into plastic, seen from the top. We can even see there's a degree of undercutting here. And here's a side view. And again, I'm saying that your typical embossed hologram has a depth about equal to one of these steps. So you can see we've got anywhere from 10 to 14 steps here. The reason this looks somewhat random is because this is a diffuse image, and the overall pattern would have a random pattern for diffuse images. But because the steps are very well defined, the color is quite uniform. And this is the object from which those micrographs were made. And here we're looking above and below and to the right and to the left. So there is full parallax. The full parallax is maintained. The color, the single color, is maintained. And these little circles here, these little circles and this cross, are focused in the plane, but this larger cross in the background is a full two inches behind the surface. And we see that's still in sharp focus, and this is rather extraordinary, I think, for an embossed hologram. It's also possible to replace the aluminum with other materials, other dielectric materials of high index that are actually transparent in the visible. For example, titanium dioxide and zinc sulfide have very high indices of refraction and actually form a very visible image, but they are transparent. So as a hologram replica device, the Aztec structure first allows for full parallax single colors that can be mechanically replicated. Second, it allows for easily viewed and simpler images in full color. And third, it's more difficult to replicate because of the complexity of the profile. And finally, a challenge for the future: we should be able to find some way of embossing undercut structures. In that case, there would be no metal or dielectric coating required. The holographic method is ideal for recording undercut structures. The real challenge would lie in somehow finding a way to replicate that.
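One hedged way to read the two step-height and colour pairs quoted above (121 nm for the blue image, 182 nm for the 600 nm image) is that each step corresponds to half a wavelength inside a medium of refractive index n, that is h = lambda / (2n). Neither that relation nor the index value is stated in the talk; it is offered here only because a single index close to that of a typical photoresist or embossing lacquer happens to reproduce both quoted numbers.

```python
# Sketch under the stated assumption h = lambda / (2 * n); not from the talk.
step_600, wavelength_600 = 182.0, 600.0   # nm, the quoted pair
step_blue = 121.0                          # nm, quoted only as "blue"

n_implied = wavelength_600 / (2.0 * step_600)   # index that fits the 600 nm pair (~1.65)
predicted_blue = 2.0 * n_implied * step_blue    # replay wavelength implied for 121 nm steps

print(f"implied refractive index n ~ {n_implied:.2f}")
print(f"predicted replay wavelength for 121 nm steps ~ {predicted_blue:.0f} nm")
```

With the 600 nm pair implying n of roughly 1.65, the 121 nm steps come out near 400 nm, at the blue-violet end of the spectrum, which is at least consistent with that image being described as blue.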
And I know from experience that trying to replicate undercut structures is extremely difficult, but if we had, and I'm assuming we have great advances in embossing materials, we had some embossing materials with high elasticity, a lot of nice features, but if we could do that, then we could indeed replicate the butterfly, the original butterfly structure, and we wouldn't need these metal coatings. So, so far we're doing okay with the metal coating as is. It's very similar to standard embossed holography. The main difference is it's a deeper structure. That's it. Thank you. Thank you, James. In this short time to bring so much information, and I have to thank you in first another question. Have you ever tried tried metalizing an actual morpho butterfly wing and trying embossing that? That's an interesting thought, Jeff, but the actual morpho butterfly wing is extremely fragile. If you just touch it, it just rubs off. You really can't even touch it. It's extremely fragile, yes. Is it known how the butterfly grows its wings? I mean, the biological process is known or not yet? Well, I've had some people ask me, why has it taken you so long to develop this? And I said, well, think how long it took God to evolve this magnificent structure. I'm assuming a couple of million years. Yeah. Have you ever looked into the rubber inner masters? There's a bunch of literature about these sort of rubber soft inner masters to create from the photo resist, and I wonder if you've ever looked into any of that. I've looked at many, many, many, maybe some of the later ones I haven't seen, but certainly that's anything like that. I'll be very happy to look at. Thank you very much. And again, I think that it's still a very hopeful technology.
Holograms that are predominantly in use today as replicable devices for display, security, or packaging can generally be divided into two categories: either surface relief rainbow holograms, which include three dimensional images and intricate grating patterns, or reflection type volume holograms. The Aztec structure is a special surface relief device that combines aspects of both of these types. Its fabrication by holographic means requires techniques of both surface and volume holograms, and thus it is technically more difficult to make than either separately. The structure is deeper than the standard surface relief hologram, and its profile has the characteristic of several well defined steps, such that, when viewed on edge, resemble a stepped pyramid. Thus, replication of the Aztec structure requires special high resolution techniques to faithfully record the submicron features of the stepped profile, and thus is more difficult to manufacture. The visual characteristics of the Aztec structure are similar to the volume hologram, in that single colors, rather than rainbow colors, can be viewed. Also, a combination of single colors can be encoded into a single master, yielding unique visual effects.
10.5446/21284 (DOI)
It's basically a paper where I've tried to put down everything I can remember from the notes I took at the time of what we did at C3. Nigel Abraham was a very important part of that; I thought he might actually be here, but I don't think he has turned up. ... SEE 3 Holograms, whether we should get the cat out of the bag. ... And while I was making that arrangement, one of our other clients who were buying our little silver halide holograms, Nigel at C3, said, oh, if you are making a rainbow hologram, you'll need me on board, and I can help you, because you can make the laser transmission with me. ... But I came up with things like this silver halide of a ship, which was one of the subjects we used in our trials to make silver halide masters. We gave up, raised some money and then made a lab in London, and these stereo photos of Jonathan's record the day that the table came in. ... This was from a pulse master, the H1; the model was too fragile, a beautiful delicate model they provided us with, and we couldn't shoot it with a continuous wave, so it was done with a pulse laser with John Webster. And I think that was probably one of the first, if not the first, holograms to be mastered using a pulse, apart from, was it, the light bulb.
We saw 2D3D and we very quickly did our own. We saw ET, and within weeks if not days we were churning out five-colour 2D3Ds or multi-level. The 25 there is actually a 3D hologram with stacked moiré in the background. Our biggest commission was without doubt American Greetings; it was a series of 58 different images. Unfortunately for Graham, the person commissioning it was Graham Ridout, which led to a litigation that's all part of the history. We did 3D dual channels and we even tried optical recombining by using coins and bits of shim to align the plate. As time progressed we were stuck into doing security work, and C3 was probably doing more security work than anything else by the time that Nigel left to join Applied. One of the things that we did that was a complete first was to get silver records in vinyl. It wasn't the first time that it had been done in a record in black vinyl. Mike Foster had put grating images in, but by putting a label on the press at the moment of impact, or just before the moment of impact, we were able to get the music and the hologram in the same pressing. Basically all the details are in the paper, so I won't go into the minutiae of the size of plates and the beam ratios and things, but I put it all in there. The main innovation was the chemistry. I still had a library card to Cambridge University library, so I could look at all the Russian papers, which the Americans had slavishly translated, and Nigel had independently done a lot of research. I think probably the chemistry was as good as it could be. We didn't just keep it locked down, we were continually trying to get it bigger and brighter. These are the things we used most of the time that we were in business. All the details are in the paper. The key was making an H3, because we found by trying with the ship hologram that if you tried to metalize, vacuum metalize, the emulsion, it just sort of orange peeled and went incredibly dim. You had to make an intermediate, and we used a visible light curing resin rather than a two-part resin. The problem with two-part resins is as you mix them together you get air bubbles. It's almost inevitable. Some of those air bubbles are so small you don't notice them, but they'll end up being pinholes in the final shim, little black voids which, when you see them in the shim, you would certainly notice. We were keen to use something that was a single solution, keen not to flood the place with UV. We eventually came across a solution which was provided by Scott Bader, which was a visible light cure single resin and a bank of spotlights, basically, illuminating a disc which was a 14-inch disc, the standard size used in the record industry, but as a perspex disc instead of glass. It was my job to cast these things and expose them and clean them up before we sent them to INCO for electroforming, which gave us a huge 14-inch size metal master. We were keen to keep what we were doing secret, so most of the time we kept ourselves to 6x6, because we figured it would be obvious if we used the full 10 inches that we weren't using photoresist. Although the brightness was good and the clarity was good, it probably would never be exactly as good as the saturation and quality that you can get from photoresist. Which is why I say it's good for artists who might not have a choice but to use low cost materials. It does give you an avenue for using a red laser instead of expensive blue or green ones, and it gives you the chance to either make limited editions of castings or limited editions of shims.
The longevity of a nickel plate is much greater than any emulsion based plate, and one could make a single complicated piece which you then get an electroformer to run 50 copies of. I think this is an opportunity which, as far as I know (with the exception of Harriet, who did a shim edition, and I hear Paula is thinking of doing a shim edition), hasn't been greatly used, and I think it could in the future be much better used: to use silver halide relief holograms to make limited editions. That's the conclusion, really. We were in business through to 1989. I did in the end turn to photoresist and gave up on silver halide, and I think Nigel, when he was at Applied, struggled for a bit to continue using the system we were using before deciding that photoresist was the way to go. But these days people who are shooting photoresist are doing smaller and smaller images for security. The six by six standard has now come down to three by three as a large image these days. So it's a very different world, but this could be revived again by using the Fuji materials. I think it could again have a new promise, and that's why I'm boring you with this paper today, because I could see no point in providing this information when there wasn't an emulsion to use, when Agfa wasn't supplying. Thank you. Hi, Jeff Oedner. Did you ever measure how small the structures could be, compared to photoresist, or just how small the structures were, or what they looked like, whether they were sinusoidal or had sharp sides, the aspect angles, or put them under a SEM or something like that? Yes, it's quite expensive to do SEMs of work, but we did get some done, especially in the early days, to see which direction we should be going with the chemistry. I think our feeling was that the result by eye in the embossed piece was what counted, that it was our goal to produce something of the commercial standard that was as close to photoresist or better. I mean, there are some bad holograms shot in photoresist. We were continually trying to get up to the bar. The standard was coming out of Light Impressions from California at the time. So, if you like, what our studies were concerned with was what it looked like in the embossed version, as opposed to a scientific analysis. We did some of that, but it was less interesting to know what shape it was if it was dim. I think it was bright. No, it was a cast resin, and the Scott Bader product that we were using probably isn't available these days, but I was speaking to Craig and he was assuring me that there are a lot of UV resins; in the years since we sold C3 that side of the technology has probably improved, and I haven't even looked at it. But you just need something which will do two things: one, it will intimately take up the profile of the hologram, and two, it will survive being electroformed. So as long as it meets those two criteria you can use any. Did you ever consider making your H2 directly in photoresist? Yes, that's what we then turned our lives to doing. But only once you've got a budget to buy a much more expensive lab than perhaps most artists could afford. How many strikes can be taken from the plastic skin? Can you say that again? You know, your plastic skin or your cast resin skin?
With the process that we were using, I had to clean them up at the edges; if I wanted to make an edition and then get them sort of vacuum formed, vacuum coated with aluminium, I think that would have been perfectly doable. The ones that I've still got haven't degraded as far as I can tell. There would be an upper limit to how many pulls you could get, but probably it would be because you broke the plate or scratched it or something, as opposed to... A silicone mould, is that...? Would that survive electroforming? No, but if you want to... Is this working? Not that I can help. Anyway, the silicone mould making material, that works too, if you want to pull multiples off the gelatin. Would that survive vacuum coating with aluminium? You would then use that to pull the resin copies off. I've used a number of them that work just fine. A number of mould-making materials. There's a popular one in the States, there's McMaster-Carr. They have an unbranded material that's kind of blue in colour and it's relatively inexpensive. Is that liquid when it's... It's liquid. You need a vacuum pump and a chamber to pull it. A vacuum frame. Right. I think the beauty... If I was going to research this further I'd be looking for a resin which you could pour on under glass and then expose, ideally in sunlight. Is there a vibration system? No. Scott Bader told us that there was a catalyst solution we could use to accelerate the process, but because we didn't want a two-part solution we never used it. I just leave the things to cure with white light for an hour. Very quickly, a few words. If you want to pull multiple plastic copies, resin and polymer, you can put a gel in the full of multiple plastic copies of the product. But that would be a white relief holder. You can put a little bit of addition in the white in the solution. Right. You can make an edition in silicone. In silicone. Yes. I'm throwing this to you to then come back next time and tell me what's good with the Fuji emulsion and what UV curing or other resins or compounds are good to make the H3. So I want feedback. Thank you.
It was June 1981, when I was a Director of Hollusions Ltd, that I saw my first embossed hologram, “Skull & Rocks” by John Caufman.1 Shortly afterwards, I set up a meeting with technicians at the PAT Centre in Royston, near Cambridge, to make “Pierrot with Ball”, one of the first photo-resist rainbow holograms ever made in Europe. Nigel Abraham, of SEE 3 Holograms Ltd assisted me in making the laser-transmission H1 used for the “Pierrot with Ball” and he subsequently joined me to make the H2 Rainbow hologram master at the PAT Centre. Throughout the rest of 1981 he and I were both working independently to research surface-relief holography, with a view to setting up a facility in the UK. In 1982 I was invited by Nigel and Jonathan Ross to join SEE 3 Holograms, so that we could combine our findings and establish a system for making embossing masters.
10.5446/21289 (DOI)
Good morning everybody. Wait a minute, I will open this. Okay. Thanks for coming to my presentation. My name is Rocio García Robles. I come from the University of Seville, in the south of Spain. I'm currently working as an assistant teacher at the university in computer science, but I also studied fine arts, and I'm currently doing my PhD thesis on the history of holography art. So that's the reason why I'm here. I hope you will find interest in my research. So my research... Okay. In my presentation, I will explain some of the issues that I have addressed in my paper, but because I have only 20 minutes, and also, you know, there is a very, very famous proverb that says that the part reflects the whole, I have decided to select only some topics and I will go into detail on this part. I have already made the presentation slides for the whole paper, although I will only focus on the first part. What we will see is... I will try to explain to you some of the similarities that I found between the works of art and art projects undertaken by artists using holography and the main features of some relevant contemporary trends. I want to clarify that I'm not trying to classify anybody or any work; my aim is to find some clues in order to contextualize your work in a contemporary art framework. Okay. I will analyze those similarities from four different approaches: lexical, syntactic, semantic, and pragmatic. Lexical is related to the art techniques and also to the information channel, while syntactic is more related to the network of relationships between the lexical elements. The semantic is related to the possible meanings and interpretations. Finally, pragmatic is related to the influence of art on the social context. So the first two approaches, lexical and syntactic, are more related to the morphological issues, I mean to the visual appearance of the artworks. Sorry. I have selected some contemporary art tendencies, the ones I consider the most relevant in relation to the work made by artists using holography. And in fact, there are three groups of tendencies that I will analyze. Neofigurative tendencies, which include pop art and super-realism or photorealism. Neoconcrete and technological tendencies, such as op art, kinetic, and light art. And also conceptual art tendencies, from the point of view of the postmodernist aesthetic and also taking into account two tendencies in the conceptual art trend, the linguistic and the empirical media ones. So the first tendency I analyze is the pop art tendency. And I have selected these pictures to try to explain to you that there are some syntactic and lexical similarities between the visual aesthetic of both works. You can recognize this is a very famous work by Andy Warhol, and a work, a holography work, by John Kaufman. And in both of them the use of high hue values in terms of the colors makes them close from the visual aesthetic point of view. But in the case of holography, we also have the potential of exploring the dynamic interaction of colors. From the semantic point of view, it is more difficult to find coincidences between pop art and the work I have been studying. Because I haven't found, maybe you can help me, I haven't found any hologram of an object that has been made with the purpose of trying to transform it into a social icon, as was the purpose and the goal of many pop artists such as Andy Warhol. But it is easier to find artists using holography who have used objects for conceptual purposes, as we will see later on.
So the second tendency I will explain to you is super-realism, or photorealism. On the one hand, from a lexical and syntactic perspective, holography, as with photographic media, has often been used to reproduce the actual appearance of the recorded scene. But on the other hand, most of the works of art of super-realism share with holographic images the paradox of looking so real although they are not. So they play tricks on the human sensory capacity, concerning sight and the tactile sense mainly. From the semantic point of view, it is also easy to find a coincidence, because in super-realism it is very usual to find social themes represented in the works. And there are many examples of holographic works with feminist, homosexual, ecological and many other social perspectives. So I have included two images, one of the Margaret Benyon cosmetic series of the female series, and also an installation related to the ecological theme by Philippe Boissonnet; it is called Gaia. The third group of tendencies I will explain to you is op art, kinetic and light art. First of all I want to say that, from an historical point of view, it is possible to find references in the literature which connect holographic works with those tendencies. And in fact I will tell you just three of them. Marchán Fiz is a professor, very famous in the Spanish literature, and he categorized holography as a type of lighting environment in his book Del arte objetual al arte de concepto. Another example is found in the book The Art of the Electronic Age, written by Frank Popper. The second chapter is devoted to what is called laser and holographic art. Finally, the most recent literature I have found connecting these tendencies with holographic work is Holographic Network, a book edited by Dieter Jung, and in this book this synergy is also analyzed. So I also want to say that, in fact, it is not easy to find literature on the history of art that takes holographic works into account, so these three are quite relevant ones. So on one hand, immateriality can be considered as a linking feature between holographic works and some well-known artworks of these tendencies. But indeed this characteristic of immateriality explains why some works are described as holographic while they are not. And we have two examples here. Paul Friedlander: this picture is of a kinetic sculpture. He uses a rotating string and projected images, and he gets an aesthetic similar to holographic images. And Hiro Yamagata: this is a picture of one of his installations. Hiro Yamagata sometimes uses holographic surfaces, but he normally doesn't use holography in the sense we are talking about here. This reflection has an important consequence, because from the pragmatic point of view, in society, and we can also see it in science fiction literature, everything that looks evanescent and three-dimensional is called holographic. So this is also a contribution of this medium, of this aesthetic, to the history of art. Okay. Moreover, there are artists of op art, kinetic and light art, such as Lubith, Frank Malina, Bruce Nauman and Dan Flavin, that can be considered as predecessors who have inspired artists using holographic media. And some of them, for example Bruce Nauman (we have a picture in this slide), had even been using holographic media for experimental purposes.
Those works of art, I mean the works of art of this tendency, make us think about another common syntactic issue, which is the need for a good lighting configuration as a key factor for getting the right visualization. Also, an example of an explicit link between op art, kinetic art and some work made using holography is a work by Yaacov Agam, a kinetic sculpture with a hologram at the top. In this case, Agam has been serving the aesthetic exploitation of what Frank Popper calls the perceptual essence of the image. So another interesting commonality is the preference for geometric subjects by artists of these tendencies and in holographic work. Some of the most outstanding artists who have been using holography for this type of research are, for example, Rudie Berkhout, Dieter Jung, or Marianne Decozette; we have a picture also. There are also some syntactic similarities between works of this tendency and holographic works. In fact, in the literature, these works of art are described according to four types of movement: virtual chromatic movement, virtual interference movement, virtual temporal movement and real temporal movement. So let's analyze all of them. The virtual chromatic movement is a characteristic of the chromatic and illustration tendency of the USA op art trend. One example is Josef Albers' work on the interaction of color. So the same type of chromatic dynamics can be explored using holography. In fact, one example is, for example, Dieter Jung's work; you can also see it in the slides. Individual perception is conditioned, in the case of holographic work, by so many factors, such as distance, height, lighting angle, that it could be said that this virtual chromatic movement turns into a completely subjective personal experience. And this is also a contribution of the holographic media. The virtual interference movement, the second type of movement, is representative of the European op art tendency. And a good example of this type of movement can be found in Victor Vasarely's or Jean-Pierre Yvaral's work. In those types of work, virtual movement is devoted to the perception of patterns in which the smallest single elements produce interference perception phenomena, depending on the density and proximity of the elements. So we can find some experimental work undertaken by artists such as Sinyaki Megidistain. Yesterday I discovered one good example of this in his work, in the exhibition. And also, for example, Dieter Jung is a good example. Dieter Jung also uses patterns as a conceptual expression of wholeness and networking. The third type of movement is the virtual temporal movement. And this characteristic of human visual perception is related to the retinal persistence principle, and it is the basis for cinematographic effects. In the case of holograms, all the frames are normally saved in the same plate by virtue of the multichannel property, as you already know. So this aesthetic resource has been widely used by the artist community using holography. Some of the stereograms we saw yesterday are a good example. I included a video, but I decided to leave it out. And the last movement is the real temporal movement, which is a main feature of the kinetic and light art tendencies. It is produced by the real movement of physical objects or light bulbs or lasers. And in the case of holography, it can be produced by the movement of the holograms. A good example is this mobile. It's a work by Setsuko Ishii.
Although it is a resource that has been used, it is more common to find holography works in which the artists let spectators move themselves around the works, this concept being then more closely related to the previous type of movement, the virtual temporal movement. Okay. So in these tendencies, many artists have been working with the space domain and the time domain. And this is something that is also explored using the holographic medium. So we can find some examples, for example, in Paula Dawson's, Benyon and Gamboza's, or Gustav Hamos' works. And we will discuss it later on. After reviewing all these artworks, it is possible to discern two main tendencies inside holography: abstract and realistic imaging holography. Therefore, from the pragmatic point of view, we can conclude that holography is the only light medium by means of which figurative imaging based purely on light is being explored. And this fact could be considered an original contribution of holography to the history of visual art. One proof of that is the fact I was explaining before: the term holography is used for any kind of vanishing three-dimensional imaging. And also, well, that's the conclusion. The abstract holography tendency is more closely related to the morphological features of the representative artworks of op art, kinetic and light art. I have included in the slide one example of the realistic trend and one of the abstract trend, one by Martin Richardson and the other by Rudie Berkhout. I have more information in my paper about that. Okay. So let's continue with the last group of tendencies, the conceptual art tendency. An interesting link between conceptual art features and the holographic aesthetic is, again, immateriality. Because, in fact, some theoreticians like Vicente Carreton or Peter Zec have used this argument to justify the potential of the holographic medium in relation to the post-modern aesthetic. Nevertheless, there are some issues that must be considered concerning the two main tendencies inside conceptual art. There are two main tendencies, the linguistic tendency and the empirical media tendency. So from the linguistic tendency perspective, the hologram couldn't be described as purely immaterial, due to the material essence of the hologram, the plate or the film; but even in the case of the most radical linguistic tendency, dematerialization is impossible, because even written or spoken words are cultural objects, are perceptible and have an imputed meaning. So that's the main reason for arguing for the convenience of describing holographic conceptual works as closely related to the empirical media tendency, far from the tautological reflection of the linguistic tendency. The empirical media tendency does not advocate the complete dematerialization of the work, but replaces it by refusing the traditional physicality of the object in favor of a more immaterialized form of energy. And probably you will agree with me that there is no more dematerialized energy than light itself, which is the essence of the holographic image. So, to conclude, materiality is reduced to the support in the case of the holographic medium, and it is the self-reference of light per se that makes holography an especially suitable medium for conceptual artworks. Also, the empirical media tendency is characterized by interdisciplinarity and intermediality. And this is something that is very usual also in holographic works.
We have some examples here, by Doris Vila, Pascal Guchette, Sally Weber and Melissa Crenshaw. Conceptual works are also characterized by demanding new means of elaboration. Photography, cinema, video and the computer are the media most often used, but holography is a medium as good as any other for serving the spatial and temporal domains and conceptual research. Okay, so. In this slide I have included some examples of holographic conceptual works, but after what I said before, I think the strongest connection between conceptual art and the work of some artists who have been using holography is the art project itself. We have good examples in these works. I will not explain them in detail, but in my paper I explain something about Kac's work on holopoetry and Bayon and Gamboza's works on Vivio Manzi. It is also remarkable the use of daily-life objects in their work, as you can see here, in a decontextualization exercise which seems to be very contemporary but is at the same time historically connected to the Duchamp ready-made or even to Andy Warhol's Brillo Boxes, for example. Concerning the postmodern aesthetic, it is possible to find examples of the use of appropriation, fragmentation and deconstruction — which are characteristic of the postmodern aesthetic — in holographic work. In my paper I compare, for example, the decontextualization exercise made by Troy Brauntuch in his pieces on the Third Reich, or Men in the Cities by Robert Longo, and the appropriation exercise made by Richard Prince in his Marlboro series, with Patrick Boyd's and Gustav Hamos' and also David Pizzanelli's work, because they use real-life images and they decontextualize them to impute new meaning. Finally, as the last example of appropriation and deconstruction, I compare, for example, Louise Lawler's arrangement artworks with Paula Dawson's work To Absent Friends. Lawler made pictures of artworks placed in the interiors of museums and other less glamorous locations, involving the spectator in a reflection on the museum strategies which enable or disable those artworks to be idolized by the general public. In Dawson's case, the sense of memory, virtually lodged in a specific architecture, appears very frequently in her work. The bar is in some ways an attempt to visualize the relationship between memory, object, present and past in a visual way — those are the artist's own words. Dawson and Lawler are both trying to confront the spectator with familiar situations, looking for the spectator's further reflection. Dawson uses appropriation, fragmentation and simulation to enhance her conceptual purposes. Finally, what I have presented is a short review of some outstanding features of the most closely related contemporary art tendencies. In the rest of my paper I also compare the holographic medium with other media, and I contextualize holography in the digital-age framework. So my purpose has been to document the artists' aesthetic reflections about their artwork, as well as to contextualize the possibilities of holography in relation to the dynamic nature of contemporary art. Okay, thanks a lot. Yes, one more thing. I want to tell you that you can't imagine how much I enjoyed yesterday — even though I didn't eat anything — because I had the chance to see all these holograms that I had only ever seen on the internet. So I want to thank Jonathan Ross, Martin Richardson, and all the organizers and the artists for giving me this opportunity.
And I also want to take the chance to ask the artists for further collaboration, in order to get a better picture of your actual motivations and artistic approaches. I don't want to disturb you too much, so I will give you my card, and if you allow me, I will write you an email and ask you further about your work, because I'm very interested. I also have to thank publicly Dr. Benjoran, because he gave me the chance to do practical work at the University of Lund last year. So thanks a lot. Any questions? Okay, that was fantastic. Thank you so much, Rockett.
Art movements such as Op-Art, Kinetic Art, Light Art and Conceptual Art have been predecessors as well as contemporary tendencies to the use of holography as an art media. Apart from the obvious historical nexus, there are relevant technical, formal and significant similarities and differences in relation to some of the works created by relevant artists as well as to the main features of those art tendencies. This paper explains the conclusions on a study from a triple semiotic approach: lexical, semantic and pragmatic. On the other hand, holography is an art media, such as painting, sculpture, printmaking, photography, cinema, etc. Like all of these, holography offers some essential features that are unique and representative of the possibilities of the media itself. In this paper we explore those features as well as some key issues in relation to the aesthetics of holography media. An ontological review is undertaken for contextualizing holography according to the Perception Aesthetic, the Participative Aesthetic and the Generative Aesthetic related concepts. Finally, in order to contextualize holography in relation to the most contemporary art tendencies, we explore the relationship between holography and digital media. As photography forced painting to redefine itself in the nineteenth century, nowadays computers have a similar effect on both traditional and other emerging artistic media such as holography. According to experts in the field, some of the most outstanding characteristics of digital media are: Immateriality, Reproducibility, Time Essence, Interactivity and Non-Linearity. This paper explores the possibilities offered by holography in that emerging digital framework.
10.5446/21290 (DOI)
Alright, I just put this paper in so people know I'm not just a microphone guy here. So I hope there's something of interest for you. My name's Kaveh Bazargan. Oh. It's okay. See, on the video, they don't know. I've spoiled it now, haven't I? So I might do that again. That felt quite good. Okay, sit down now. Holograph.org is my site. My day job is River Valley Technologies, and we're in publishing and typesetting. A quick plug for Holograph.org: it's non-profit, it's just a place for holographers to put any contributions, any papers, that you think will be of interest to other holographers. So whatever you have, if you think it's of interest — it doesn't have to be a paper, it can be anything that other people here might use. If it is useful, I'll make a decision, we will clean it up, we will edit it, we'll typeset it and put it online, like for this conference. So we'd like to have any contributions. Incidentally, I think you all know Pearl is doing a fantastic blog of this conference; it's online, everyone's following it, and if you go to Holograph.org there's a link straight to that. Yeah, well done, Pearl. Okay, HoloPov is a program I've been talking about for predicting how an image looks. That's just a joke. I'm sorry, he's upstaging me. I should have changed that, shouldn't I? What's the problem? The problem I'm looking at is this: if you make a hologram with one wavelength and one set of geometrical parameters, and then you replay it — say you made it with green here and replay it with red — the image is in a different position, it's distorted, etc. Now we all know that, but I didn't find a good, easy program to show me, to predict, to pre-visualize what I'm going to get. Suppose I want to have a rectangle at a particular position: what do I have to do at the recording stage to get what I want in the reconstruction? There are three important things to realize when we make a hologram and then reconstruct it with different parameters. One is what I call the moving pupil. Now, this is very unique to holography. This is a hologram we've made with green light, and we are reconstructing now with a different wavelength. I made an animation here, going from blue to red, and as you can see, obviously the image changes position. You can see that, okay, there. As we know, there's geometrical distortion. The thing to notice is that the eye, for any wavelength, is looking through a different part of the hologram — it's not through the same part. Now, generally in optical systems, the pupil, or the center of where the eye looks through the optical system, is fixed. So if you have a ray-tracing program, say for lens design, it assumes that your eye is in this position looking through the center, and then there are all kinds of equations. In holography it's more difficult, because the pupil is moving depending on where you are — you are free to move. That's one thing. The second thing is that, in general, the image is distorted; the point is, it's not at the same position as it was before. So there's distortion, there's the moving pupil, and there's aberration. You've probably heard of aberration, but really all aberration means is that if you look at an image from different positions, it doesn't seem to be in the same position from different viewing angles. This is when you have the sort of usual swing that you have in an image — that is because of aberrations, okay.
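A compact way to see both effects — the image moving and changing shape as the replay wavelength or geometry changes — is through the standard paraxial hologram-imaging relations. For a point object at $(x_o, R_o)$, a reference source at $(x_r, R_r)$, a reconstruction source at $(x_c, R_c)$ and a wavelength ratio $\mu = \lambda_{\text{replay}}/\lambda_{\text{record}}$, the primary image of an unscaled hologram appears at $(x_i, R_i)$ with

$$\frac{1}{R_i} \;=\; \frac{1}{R_c} + \mu\Big(\frac{1}{R_o} - \frac{1}{R_r}\Big), \qquad \frac{x_i}{R_i} \;=\; \frac{x_c}{R_c} + \mu\Big(\frac{x_o}{R_o} - \frac{x_r}{R_r}\Big),$$

the conjugate image taking the opposite sign on the $\mu$ terms. This is general textbook material stated in my own notation, not a formula shown in the talk; the full treatment, including the aberration terms, is exactly what the paper discussed next supplies.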
So this man called Champagne, in 1967, wrote a paper, and it was the first paper that gave a set of equations that allowed you to work out, from the recording parameters — giving it, say, the wavelength, the distance of the reference beam, of the object, etc. — and then the reconstruction parameters, exactly where the image would be and what the aberration would be at that point, so how much swing there would be around that point as you move your head around. Fundamentally, these are very, very simple, elegant equations. So it actually predicts the position and the aberrations, but it assumes that the pupil, just like in any optical system — it assumes that, well, you're looking through the center of the plate. This is the problem. So that's a sort of fundamental problem. What we have to do — I won't go into the details — but you know that you won't always be looking through the center of the hologram. So what you have to do is make a guess of where that pupil might be and use Newton's approximation to get to the actual position. That works quite well: about three or four iterations, and you know exactly where you're looking through. So we need a program to plug in these equations and then show us what we get. Which program do we use? The things we have to do are, one, calculate the image position from all the parameters, and secondly, display it graphically. So I spent a long time thinking about which program to use — I'm not really a programmer — and I looked at the different programs, and of course you get the output as a set of numbers, and you put it somewhere else to get the graphics. I really want to get an intuitive feel of what the image is going to look like before I make it. I found a really fantastic program called POV-Ray. That really deserves applause. POV-Ray — if you haven't heard of POV-Ray, please go to povray.org. Anyone interested in graphics, in playing around — if you want to learn programming, I believe this is probably the best program to learn with. So I'll give you a very quick look. It's a 3D rendering program. The difference is, normally you have things like 3D Studio Max and what have you — I don't really know these — where the modeling is interactive: you draw a sphere by hand, you draw a rectangle, you look from different places, you put the light here, etc., interactively, and then you get an image. The difference with POV-Ray is that you actually have to write the text. It's a programming language: you write the program, press a button, and it gives you the output. Now most computer 3D people just ignore that; they think, oh, this is some geeky thing, because they want to work interactively. But for certain things you can't work interactively, and here I find that it's really a perfect fit for my application. And it's a full programming language — it's a renderer, but it's a full programming language too, with a very nice clean syntax. How does a renderer work? Basically, you have a scene. You have to define a scene: you say, I have a sphere here, I have a cone here, etc., the surface color is this or that. You have to have a light source, otherwise you won't see anything. And you need to tell it where your camera is. These are the three things you need to create a scene — in a physical sense, create a scene, not in a bar or anything like that. I'll give you a very quick one-minute demo. Don't be afraid of this code. If you're not programmers, just look at what's happening.
We are declaring a camera location. It's an XYZ system: 0, 0, minus 20. I don't remember which axis is which, but just see how it works. We declare a sphere; its location is at the origin, 0, 0, 0, and the sphere radius is 0.5. The texture has a pigment, which is whatever color you want; the color is red, green, blue, each going from 0 to 1 — you've got red and green, so that's going to be yellow. The light source is at this point, and the color of the light source is 1, 1, 1, which is white. The camera location is here, etc., and the angle of the camera — is it zooming or not? So that's the camera, that's the light source, and we have a sphere — a very simple scene. We render this, and that's what we get. Okay? I can go back here now. I want to change the color: instead of red and green, just make it red — so that becomes 0 — and it becomes red. I don't like the camera; I want to enlarge it a bit, make it 10 degrees, so it's a narrow angle. Okay? There's lots you can do. You can add a light on the other side — you see, at the moment the light is coming from here. I can declare a sphere texture, copy that, so I have another texture. I don't want to go too far with this, because I'll get myself into trouble. Say this one is yellow. The light source is that, in the same position. Let's have another sphere, this time with the second texture — not the sphere location we had before, but a location where you just add, say, 3 in the x direction, 0, 0. I hope this works. Things are going to go wrong here. It has worked, but you can't see it, because I'm not sure what's going on. Believe me, I'm not... Stop, stop. Yeah, I've got myself into trouble already, right? Why? Zoom out. I thought you said it's enough. I saw why. Why is she telling me why? Yeah, well, you know, and you can see I've rehearsed this, okay? But let's forget that. The sphere radius is now, instead of 0.5, we'll make it 2, so that should become bigger... Okay? You can see it works. Cylinders: you say from this point to that point, radius this, color that, etc. Now, just to show you what amazing things people are doing — in case you think I'm clever, we'll look at some serious ones. This is what people are doing with POV-Ray, line by line; you get incredible results. Actually, this one I did. I wrote some subroutines to draw optics. You can see that's a lens made from two spheres: you cut two spheres, put them together, you say it's made of glass, refractive index this; that's a mirror, reflectivity one; one of them is a concave mirror, etc., etc. And the light — you can say it's sort of translucent, but not quite. And once you've done that, you just write a line saying, draw my laser from here to there. But this is what the clever guys are doing. That is text, right? It really is photorealistic. This is by a very, very famous guy, Gilles Tran. Same guy. Sorry? Oh, you're not... That was written in POV-Ray? Yeah, this is POV-Ray. So, I just wanted to show that quickly. If you go to povray.org and you want animation, you can see a list of animations. This is text — there's no interactivity. Now, the beauty of text is, if you want to change something, you just change one parameter and it changes everything, right? And you can automate generation, etc. Okay? So, that's POV-Ray; that's how great POV-Ray is. So I started with this, and in the end it started working quite well. I've called it HoloPov, for obvious reasons. The other thing I've done, actually, is put a sort of front end on it. This is a thing called Revolution.
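Before moving on to the front end, it may help to see the scene from the demo as actual POV-Ray text. This is a minimal sketch reconstructed from the narration above — the numbers follow what was typed on stage, but the file itself and its layout are mine, not the speaker's:

```pov
// Camera 20 units in front of the origin, with a narrow (zoomed-in) view.
camera {
  location <0, 0, -20>
  look_at  <0, 0, 0>
  angle 10
}

// A single white light source.
light_source { <10, 10, -20>, color rgb <1, 1, 1> }

// Yellow sphere of radius 0.5 at the origin.
sphere {
  <0, 0, 0>, 0.5
  texture { pigment { color rgb <1, 1, 0> } }
}

// Second sphere, shifted 3 units along x, with a red pigment.
sphere {
  <3, 0, 0>, 0.5
  texture { pigment { color rgb <1, 0, 0> } }
}
```

Rendering this file, changing one number — the camera angle, a radius, a pigment component — and rendering again is exactly the edit-and-press-a-button workflow shown on stage; it is also what makes the text form easy to generate from another program, which is where HoloPov comes in.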
If anyone's familiar with HyperCard — in the early days there was a wonderful program called HyperCard, I'm sure you know it. The worst thing Apple ever did was to kill HyperCard; they should not be forgiven for it. It really was the first scripting language anywhere. I'd been looking for a HyperCard replacement for 20 years, and a few years ago this came out. It's called Revolution. It's cross-platform, etc. So, what I've done — I'll have a quick look at the main file. You can see that this is, you know, declare object grid true, image grid true... it's a huge program calling in things. But you don't want to edit the text file. So what I've done is put this sort of graphical front end on it, which writes a configuration file with all your object distance, object angle, etc., so you don't have to look at the code. So I'll give you a demo. Right. Take a breath. Okay, let's run this and see what we get. This is running — it's a batch Unix process in the background, but you just see it doesn't do anything visible. What have we got here? We have an object. Let's look at the object. Now, I've divided these up, so you can click on this. So we have an object here that is two units away — this is our object, this green point, yeah? Two units away, angle above horizon, etc. We just leave it as that. The size of the object is 0, 0, 0, so it's just a point. Recording geometry: the reference beam is 100 units away, 45 degrees above the horizon, and the wavelength is 550. Reconstruction: 135 degrees this time, and it's 600. So that's what we've got. Now, if we change our object to something more interesting — if I put one here, one and one — we get a cube. Every time I change something, it just starts from scratch; it's in immediate mode. Immediate means: don't wait for me, just do it. So that's here. The reason you've got those dots is that, if we go to 'what to draw', I've said draw the object as points, so it's got these green points. Okay. But no object grid, and I want my image as points too. You could have an object grid instead of points — it'll do the same thing, then you've got grid and grid; it depends what you want to look at. Okay. So that's green, and that's what you get here. We can go to the observer. At the moment the observer is at minus 5 — there's this evil-looking eye here, one of the two eyes — and the angle above horizon is 0, the angle to the side is 0. I put that at minus 45, so it goes 45 to the side, and it'll show me what it looks like from there. Okay. So that's the grid. Okay, what else can we do? For the object, I can say — at the moment there's a grid, so it's joining these points up; the grid separation is one, so you get just the cube. If I say 0.5, then it'll put an extra set of grid lines in between, so you can see more clearly what's going on. What else can we do here? Now, H2. I call H2 the hologram that's doing the reconstructing — even if there's no H1, it works this way; even if it's a single hologram, the final hologram is called H2. Let's assume now — I'm just shooting from the hip here, as it were — let's make the distance 0. So the average distance of the object from the plate is 0, which means it's going to be like an image-plane hologram. So you can see that — now, I've put this on low resolution, by the way, so that it's quick; but in the camera settings you can choose what resolution you want. So if that were 800, it would just take a bit longer to do. I'll just do it once and then I'll let it go. Don't stop.
What does the dispersion compensation do — or is that accurate? Oh, dispersion — hang on, reconstruction, yeah. This one here? Yes, it's falling off the side; this should say dispersion compensation. That was a test, and you passed. Thank you. Okay, I'll show you what that does. I need reminders like that so I can decide what to do next. Right. Let's click on dispersion compensation. What that does is it says: right, I don't care what reconstruction angle you told me, I'm going to change the angle so that there's no dispersion at the center. So it has moved the red up so that any point that was on axis is on axis again. Okay. Yeah, it automatically overrides this value, the 135. So if I put that to 700, it should now come in at a steeper angle so that you still get the — okay, that's probably me. Yeah. So now you can see that's what happens with dispersion compensation: there's no lateral dispersion, but there is longitudinal dispersion, and there's a sort of squashing effect. Now you might say, well, what you're interested in is this back face — you want it to be nearer to where it was before. Okay. So that means — just intuitively, I think that if my recording reference beam was closer, then as you pull the reference away, that should correct it. So I'm just going to guess and put 20 here. So I'm recording with a shorter reference beam, so that when the collimated beam comes along, the back face goes back a bit — and that's wrong. Okay, what's happened there? In that case it's the other — ah, that's the reconstruction, I'm sorry. This is the reconstruction, so it's the recording I want to change. Change that to 20. Let's start again. Yeah — that was doing it the other way, it was pushing it forward. That is close. Okay, if we make it 10... it's going to go wrong, isn't it? Come on. Yeah, it's coming a little bit closer, but then you get the magnification here: the back is getting bigger, the front is getting smaller. Does that make sense? I mean, that's what happens in holography. Okay. So I'll put this back to 100. What else have we got? I'm going to put this on delayed, so that it doesn't run in the background every time I change something; if you want to change several things, that's better. Supposing you've got an H1. Now, I thought this was going to be very, very tough, because, you know, I've made an H1, how am I going to handle it — but actually it's simple. All you do is treat the four corners of the H1 as part of the object, and then for any distortion you just apply the geometrical distortions to those points, and you can see where the H1 goes. So, supposing you've got an H1 — at the moment there is an H1, but it's not visible, so it doesn't show it to you. Okay, so let's make H1 visible, and put this back on immediate. Okay, so that's your H1. I'll put the observer back to zero. You won't see much difference, apart from the fact that you see H1. I'll go to 'what to draw' and get rid of the object, so we just see the image. So that's just the image, and you can see it's sort of pushed together, which is what you get — it's still got dispersion compensation on. What should we do? Right, let's take H1 now. There's a setting called 'vignette image'. At the moment, if I put the observer at 45 degrees, it still shows me the image — but actually you wouldn't see it from there, because there is no H1 there.
If I turn on 'vignette image', that means it will only show the image when it's visible through that porthole. That's okay, because you're looking through there. But if I say the observer is now at, say, 30 degrees, you may get some of the image not being visible — actually, all of it is not visible. So if I put minus 20, you should be able to see some of the image and not the rest. Just guessing that's the right number — 25, I suppose. See, I didn't practice this. Hmm. Okay, I'll have to show you what I cooked up before. What happens if I... So 25 doesn't. 22. Oh, there we are. It doesn't look at the two eyes; it just assumes you've got one eye in the center. So from here you can see that this part of the image is visible and that part isn't visible, yeah? Because of that H1. Now — I'll show you animation here; with animation you can get an idea of where it goes. I also have — if I look at H1, just to see what's going on, you can say 'show vignette'. Just to give you an idea: it'll actually show you two pyramids of the volume through which you would see the image. Okay? Now, that's very close to there, but you can see it's drawing a cone through the center of your — between the eyes, right? If the eye goes back a bit further — the observer goes to minus 10 — you can see this plane here, a sort of long triangle, and that is cutting through the image. Okay? You wrote it — can't you...? Yeah. At the moment I can't, because I haven't got it in the interactive thing. You mean the viewing position? Right. So, the orientation of the camera, or of the plane — I can do one at a time. What I haven't done is implement that in the graphical interface; I'd have to go in. If I go into the main file, there's a camera section here, okay? I can declare the camera — at the moment you've got the camera location in what I call camera perspective. If I say camera left — and if it's camera left, it's best to switch off the perspective; you can choose perspective or not — that'll give you the left-hand side view. If it works. Yeah. So, that's looking from the left; you can see the cone. What else have I got here? I'll just get rid of that vignette — H1 vignette, vignette image. That's vignetted, but it's not visible. Yeah. I think what I'll do — because I'm not quite relaxed up here — is give you ones that I've cooked up before. Okay? This is where you change the reconstruction beam. Now, I didn't show you animation — I've got too many things in here, so I forgot. There are two things that I have in here. One is multi-image. You can take anything — for example, you can say: I want the observer to be at zero, but give me multiple images over an amplitude of 40 or 45 degrees, and give me 10 of them, and they'll all be superimposed on the same image. Or you can say: I want my object to be multi-image, rotating, say, around the center of the H2, so you'll get an arc. Okay? On top of that, we have animation: you can say, I want to animate this with so many frames, changing this parameter. So, rather than a live demo, I'll show you what it does. So this, as you can see, is the normal distortion of the object. This is the usual swing — this is the aberration that you get. So here you can get an idea of what it will look like once you've made your hologram. This is the same image, but with dispersion compensation.
It shows you — obviously, the dispersion is gone, so it's gone back to the center. You can see that the distortion is less as well — sorry, the aberration is less; it swings around much less. So dispersion compensation is a good thing to use. The same thing, this time with animation and with multi-image: this time I've said make me multiple images, but use 30 wavelengths around this center wavelength. So you've made a hologram of a cube and you're reconstructing with white light — this is what you get with white light, you get the dispersed image. Right? Sorry? Yeah — what you see here, these rays, are just lines to the center of each image, so you can ignore those, but this is the blurring, exactly — that's exactly the spectral blurring that you get. Okay. The same thing with dispersion compensation: it's automatically compensating for dispersion, so it's as if you had the diffraction grating in front. Right? So you can see you've got only the longitudinal dispersion left. This is an arc, again with multi-image, but the images — you've got, I don't know, 50 images going around. So if you had to make an image — suppose you say, I want to look from this position and I want my final hologram, at 650 nanometers, to be precisely an arc — you can work out what shape it should be before you make it. Here I've put the camera where the eye is, okay? So you can get a more realistic picture of what you get. And this has vignetting on, so it doesn't show you the image if it's going off the plate. And here, vignetting is on H2 — there's no H1 here — but you can put vignetting on H1 and H2, so you can see where it cuts off, where the H1 is cutting off and where H2 is cutting off. This is a nice one: this is multi-image with different wavelengths, with an H1, okay? So you've got the dispersed H1s out here. This is a full-aperture hologram — image plane, full aperture. You can see that in the plane it's always sharp, down here, but in front and back it gets more and more blurred and it gets distorted; and as you get to the top and bottom, you get this color fringing, because you can only see through the blue or through the red masters. Right? The same thing, this time having a narrow H1 — this is a rainbow hologram. So the H1 is now much narrower, so from any position you can only see through one wavelength, or a narrow band of wavelengths. Again, you get all the distortion you get in a rainbow hologram. The same thing, now animating the eyes moving in and out: you can see that when the eye moves here, you get pure green and no distortion — you can see the cube — and as you go back and forth, you get the usual stuff that you all see. Is that right? Thank you — positive confirmation from my good friends. What else do I need? I want a big applause at the end — a real one, okay, guys? To impress you guys. Huh, what do you think of this? This is animating the height of the H1, and you can see it's going from full aperture to rainbow. So you can animate anything you like — you can animate whatever parameter changes you can put in here. I think that's it, except to say the software is still — I mean, I'm not a real programmer, so it's not in a state to say, well, this is it, finished; it's a bit flaky. If anyone wants it, I'm happy to give it to you. It's free — POV-Ray is free, and the thing I've written is free under the LGPL, so it's open and free.
And the interface, Revolution, is not free for creating things, but it's free to actually run, so that's available as well. So if you want, I can send you all of that. And that is it, I think. Yes — North Wales, how are you doing? I always wanted to be a rock star, but this is as close as I got. What's wrong with your arm? Oh, you threw it out. Is there a mic? Okay. Sorry, I'm still the mic man. No, no, it's okay. Can you please go to infrared — does it do infrared? Yes — I can put in any wavelength, including infrared. At the moment, what it does automatically is take the wavelength and give it the color and the brightness that it would have, so as you get towards 700 nanometers it becomes darker and darker; if it's 750, you wouldn't see it, it'll be black. But you can change that — you can switch it off. That's just an intuitive thing. But yeah, it's just the same equations. Yeah, you can shout if you like — the mic is off, but shout, don't worry. Is it online? The previous version is on, yeah, it's on holographer.org; I'll put this one on for you, yes. Is it possible that it's protected when you get it, or is it free to...? No, it's free, there's no password protection — it's GPL, so you can't password protect it. Yeah. And besides all of this, he's going to create an ISDH family album on the website. I've already contributed all my slides to him, and we'll collect them any time — especially the real ones, the embarrassing ones of other people. Yeah. So if there's a lawsuit or anything, I get it, right? Is that a good thing? Yeah. If you send things in, I think we should probably work out some kind of structure to it, so that, you know, there's some sort of database. Yeah. Me and my big mouth, huh? When's the banquet? I don't know. Thank you.
In display holography, when the reconstruction wavelength or geometry differ from those of recording, the image is, in general, distorted and aberrated. These variations from the original object are hard to predict using the usual optical equations, which are best suited to imaging systems where the pupil of the system is known a priori. Here I describe the latest features of a computer program (HoloPov) developed to predict and to graphically display the distortions and aberrations in display holograms. The program has an easy to use graphical user interface, and can produce animations.
10.5446/21295 (DOI)
Good afternoon, ladies and gentlemen. The title of my talk today is the optical reconstruction of digital holograms using a cascaded liquid crystal spatial light modulator. In this talk, we will propose and demonstrate a cascaded liquid crystal spatial light modulator with two liquid crystal panels for displaying digital holograms for three-dimensional object reconstruction. Here is my outline. First, I will give a brief introduction to the digital hologram. Then we will show our computer simulation results. Then we will set up the cascaded liquid crystal spatial light modulator system for optical reconstruction, and finally I will give a conclusion. As we know, a digital hologram can easily be generated by the computer or recorded by a CCD sensor, and it is easy to make copies and store them in the computer. It can also be reconstructed using a spatial light modulator, so you can reconstruct 3D objects in real time. Here I show the optical architecture of the hologram — you know these equations, so we are just talking about the architecture. We use two beams: one is the object beam and the other one is the reference beam. Both beams come from the same direction and make an interference pattern in the hologram plane. We know the relationship between the object plane and the hologram plane, so we can calculate the field distribution at the hologram and get the digital hologram distribution. Once we have the distribution of this hologram, we can use this reconstruction architecture to do the optical reconstruction: we put the hologram here and illuminate it with a plane wave, and at the reconstruction plane we get the object here. For example, if we use the recording architecture and input the image in the object plane, after calculation we can get the distribution of the digital hologram. As we know, the digital hologram is complex-valued; here we show the amplitude distribution and the phase distribution. After we get this hologram, we can reconstruct the digital hologram with different modulation modes of the spatial light modulator. If we use the complex modulation mode, we can reconstruct with good image quality here. If we use only the phase modulation mode, we can only reconstruct images like this figure, and if we use only the amplitude modulation mode, we can only get this reconstruction — it's not very good. From these three figures we can see that if we want to do the optical reconstruction, we need a spatial light modulator which can operate in the complex modulation mode to display this complex digital hologram; otherwise we cannot get the better image quality. Unfortunately, there is no commercial liquid crystal spatial light modulator that can operate in the complex mode. Therefore, we propose a cascaded spatial light modulator which combines two liquid crystal panels: one is operated in the phase mode, and the other one is operated in the amplitude mode. Combining these two operating modes, we can get the complex modulation mode. Here is our architecture for the cascaded spatial light modulator, and here is our optical system. First, we should measure the modulation properties of each liquid crystal spatial light modulator. Here is the interferometer system to measure the modulation properties of the liquid crystal panel.
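Two standard relations sit behind this description; neither is reproduced on a slide in this transcript, so the notation below is my own. First, the "relationship between the object plane and the hologram plane" is normally the Fresnel diffraction integral,

$$U_H(\xi,\eta) \;=\; \frac{e^{ikd}}{i\lambda d}\iint U_O(x,y)\,\exp\!\Big[\frac{i\pi}{\lambda d}\big((\xi-x)^2+(\eta-y)^2\big)\Big]\,dx\,dy ,$$

which propagates the object field $U_O$ over the distance $d$ to give the complex field $U_H$ stored as the digital hologram; numerically it is evaluated with Fourier transforms, and reconstruction runs the same propagation back towards the object plane. Second, the reason two panels are needed is that the stored value at each pixel is complex,

$$H \;=\; A\,e^{i\varphi},$$

and a commercial liquid crystal panel can realize either the amplitude factor $A$ (amplitude-mostly mode) or the phase factor $e^{i\varphi}$ (phase-mostly mode), but not both. Putting the two panels optically in series multiplies their transmittances, so the cascade approximates the full complex value $A\,e^{i\varphi}$ pixel by pixel, which is exactly what the proposed module is for.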
After the optical measurement, we obtain the modulation properties of the liquid crystal spatial light modulators, as these two figures show: one curve for the phase mode and the other for the amplitude mode. Combining these two liquid crystal panels, we can get the complex modulation mode, as this figure shows. By using these measured modulation properties of our cascaded spatial light modulator, we can simulate the computer reconstruction of the digital hologram. As these figures show, if we use the complex modulation mode, we get better image quality here. That's the two-dimensional case; this is the computer simulation result. We also did the optical reconstruction, and as you can see, when the spatial light modulator is operated in the complex modulation mode, we get better image quality. That's for the two-dimensional image. Then we can do the three-dimensional image recording and reconstruction. Here we set up the optical holographic recording system for the three-dimensional object. This is our recording system; we record the hologram with the CCD sensor here. After some computer calculation, we get the digital hologram, as these two figures show. This digital hologram is complex-valued; we show the amplitude distribution and the phase distribution here. After recording, we can do the computer simulation of the reconstruction by this equation. Here is the computer-simulated reconstruction. We can see that if our digital hologram is displayed in the complex modulation mode, then we get the best image quality — that's for the ideal case. For our cascaded spatial light modulator, the modulation property is not like this ideal complex mode, not like a perfect ideal one. If we do the computer simulation, then we get the reconstructed image with some noise coming out, as shown in this figure. We propose a composite technique to composite the modulation properties of the cascaded spatial light modulator; after compositing, we can get a better complex modulation mode here. We can also do the computer simulation of the reconstruction, and as we can see here, when we operate in this complex mode the noise disappears, so we can get better image quality by using our cascaded spatial light modulator. We also did the optical reconstruction for the three-dimensional object. Here is the reconstructed image: this one is with the complex modulation mode, and this is with the composite technique — we can see this figure shows a better, brighter image quality. We recorded a 360-degree set of holograms with the recording system and displayed them on our cascaded spatial light modulator. This video shows the optical reconstruction in real time: you can see the object rotate through 360 degrees, and we reconstruct the digital holograms in real time using our cascaded spatial light modulator module. Now, here is my conclusion: we have proposed a cascaded spatial light modulator module to reconstruct three-dimensional images in real time. Thank you for your attention. As for the quality of this image, I think it will depend on the resolution of the CCD sensor; if we can get a better CCD resolution, then we can have better image quality. Yes, and all the time these devices get better and better. Yes. The potential in the future is very interesting. Thank you. Where did you get your spatial light modulators? We bought the spatial light modulators from a company.
These spatial light modulators are made by the Sony company. Sony? Uh-huh. One more question: what sort of angle of view do you get on these holograms? The viewing angle — for one hologram, it's about 3 degrees. It's very narrow, because the size of the CCD sensor is very small, so only about 3 degrees. Did I see that you are using two different pixel sizes, 23 micron and 7 micron? Yes — because with commercial devices we cannot get the same pixel size for the CCD sensor and the spatial light modulator, so we can calculate that the reconstructed image will become larger in this case. Uh-huh. We have done some calculations on the resolution relation between these two, but I didn't show them here. Okay. Thank you very much. If there are no more questions — okay, so thank you very much again. Thank you.
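As background to that answer (a standard estimate, not a derivation given in the talk): the viewing angle of a hologram sampled on pixels of pitch $p$ is limited by the finest fringe the pixels can represent, one period per two pixels, so the half-angle $\theta$ of the viewing zone obeys

$$\sin\theta \;=\; \frac{\lambda}{2p}.$$

With a pitch of a few microns at visible wavelengths — say $p \approx 7\,\mu\mathrm{m}$ and $\lambda \approx 0.6\,\mu\mathrm{m}$, giving $\theta \approx 2.5^{\circ}$ — one lands in the few-degree range quoted here, which is why finer-pitched sensors and modulators translate directly into wider viewing angles.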
We propose and demonstrate a three-dimensional object reconstruction technique that uses complete amplitude and phase information of a phase-shifting digital hologram by cascading liquid crystal spatial light modulators with a nearly full range complex modulation. The cascaded liquid crystal module is realized by operating the panels in amplitude-mostly and phase-mostly modulation modes. The amplitude-mostly modulation is applied by minimizing the phase variation, whereas the phase-mostly mode is achieved by minimizing the amplitude variation during the voltage-driven period. The transfer characteristic of the cascaded liquid crystal module is analyzed by the Jones matrix method to yield the suitable polarization states for realizing full-range complex modulation. It is well known that a digital hologram can store both amplitude and phase information of an optical electric field and can reconstruct the original three-dimensional object by numerical calculation. This work demonstrates that it is possible to reconstruct three-dimensional objects optically using complete amplitude and phase information of the optical field calculated from the phase-shifted digital holograms. The use of both amplitude and phase information enables us to reconstruct three-dimensional objects optically with fair image quality by selecting the orientation of polarization and the modulation conditions of the cascaded liquid crystal module. Both analytical and experimental results are presented and discussed.
10.5446/21297 (DOI)
Hello, I am Kunihiro Sato, from Japan. The outline of my talk is shown here. In this work we had four main goals for recording and displaying wide, three-color — that is, RGB — images: first, the recording of moving 3D color images as in-line holograms; second, the enlargement of the visual field, that is, the viewing zone; third, the display of the in-line holograms; and fourth, the holographic display of the reconstructed 3D color images. In recent years holographic displays using RGB images have been studied, and digital holography recorded on CCD pixels goes back more than ten years, so we build on that for our 3D display system. First, the recording: we use phase-shifting holography. The phase of the RGB reference lights is shifted with a reflective LCD panel, by changing the fringe pattern displayed on the panel, and the RGB in-line holograms are recorded simultaneously on a high-resolution color CCD. Because the phase shift produced in this way does not depend on the wavelength of the light, the three color holograms can be captured at the same time. From the recorded complex amplitude in-line holograms we reconstruct 3D color images of high quality and a substantially better color gamut, and the conjugate light, the conjugate beam, does not disturb the reconstruction. Next, to enlarge the visual field of the 3D image, we use a multichannel CCD for recording and a multichannel LCD modulator for reconstruction, so that the viewing zone of the holographic display is extended. Thank you very much. What I would like to ask about is the phase-shifting method. Yes — the LCD applies the same phase shift, the same value, to all three wavelengths, so the shift is independent of the wavelength; that is the point of the proposal. Thank you.
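The equations themselves do not survive in this transcript, so as general background (my notation, not the speaker's slides): the standard four-step phase-shifting relation recovers the complex object wave $O$ at each CCD pixel from four exposures $I_\delta$ taken with reference-phase shifts $\delta = 0, \pi/2, \pi, 3\pi/2$,

$$O(x,y) \;=\; \frac{\big(I_0 - I_{\pi}\big) \;+\; i\,\big(I_{3\pi/2} - I_{\pi/2}\big)}{4\,A_R(x,y)},$$

where $A_R$ is the known reference amplitude. Because the fringe-pattern trick on the reflective LCD shifts the reference phase by the same amount for red, green and blue, the same set of exposures yields the complex in-line hologram for all three colors at once — the wavelength independence asked about in the question above.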
A phase-shifting recording system is developed using a color CCD, a high-resolution reflective LCD panel, and red, green, and blue lasers. The phase of RGB reference lights can be precisely shifted by changing fringe patterns displayed on a reflective LCD panel. Since the phase shift in the present method is independent of the wavelength of the light, RGB in-line holograms for practical color images can be recorded at the same time by adopting a high-resolution color CCD. Wide 3D color images of high quality are reconstructed from the recorded complex amplitude in-line hologram. We record complex amplitude in-line holograms with the multi-channel CCD and reconstruct 3D images from the holograms with the multichannel LCD modulator in order to extend the viewing zone or the visual field of the holographic system.
10.5446/21335 (DOI)
Okay, so please first of all allow me to introduce us one more time. My name is Martin von Nansky and my colleague's name is Pete Tolesky. We are from the Support Centre for Visually Impaired Students at Comenius University in Bratislava, Slovakia. In this presentation we would like to share with you our experiences with organizing courses at our centre for blind people and their teachers with the editor Lambda, as well as with localizing the editor Lambda into the Slovak language. So our presentation is basically divided into two parts. In the first part we would like to share our experience with the courses with Lambda, and in the second part we would like to share our experiences with creating the Slovak version of Lambda, that is, with localizing the software. We would like to provide some basic information, then the evolution of our goals during the last two years, then the problems we faced while organizing the courses and also while localizing Lambda, and then we would like to talk about our results and also our future plans. So before we start to share the experience with the courses that we organized for the blind people and their teachers at our centre, let me tell you why we decided to use Lambda and why we were looking for such an editor. Two years ago we encountered various problems, because more and more Slovak blind people and Slovak blind students are getting integrated into secondary schools — they just go to the standard schools, so no more specialized schools for disabled students. And they started to have problems: there were, say, thirty students in a class, and they couldn't use mechanical typewriters any more because it was too noisy. The second problem we have in Slovakia is that we are missing a Slovak national Braille standard, and only very few students and very few blind people are using a Braille display, also because in Slovakia it's much easier for students and for people to get financial support from the state for a laptop or a PC with a screen reader than for a Braille display, which is obviously much more expensive. And in the last years more and more students started to complain about communication with their classmates and with their sighted teachers: the teachers had no time to explain the topics to them, so they were learning only from Braille after the classes, and the teachers were giving them just some limited information. So we were looking for some editor, for some way to help them, and two years ago the idea emerged that maybe the editor Lambda could be a possible solution for our problems. During the last two years we organized four major courses. The first course was a long-term course — a four-month course — and we met with the blind people once per week. We organized it in the elementary school for visually impaired students, and on this course we were using only the English Lambda prototype. We encountered various problems, also because the version of Lambda was only a prototype and had several faults. Another major problem was that Slovak students and Slovak pupils are not familiar with the English vocabulary of mathematics. So afterwards we decided to create a Slovak language version of this editor, and we tested it, because we wanted to know whether this could be a possible solution for our problems — whether the editor Lambda could be a real benefit for Slovak blind pupils and whether it could be a real help.
So we created the Slovak language version, and afterwards we organized two intensive courses — one-day intensive courses — for Slovak blind students and pupils at our centre. We invited the Slovak students who are integrated in various secondary schools in Slovakia to come to our centre, and we gave them basic information on how to use Lambda and showed them that it could be a help to work with Lambda. The next issue we encountered is that it is not enough to teach only the students how to use Lambda and how to work with it; the teachers should also be aware of what Lambda is and how to work with it, to enhance the mutual cooperation. So in February 2008 we also organized one intensive course for the teachers of our blind students in secondary schools in Slovakia. The long-term goal, as we already mentioned, was to test the benefits of the editor Lambda for Slovak blind students — whether this could be a real help for our blind pupils. Here are the basic statistics: the first course took place in 2006; afterwards we localized the editor Lambda and organized two intensive courses in October and in winter 2007; and the last course, for the teachers of our blind students, was in February 2008. The first course took place in the special school for visually impaired students, and all the other courses at our support centre at Comenius University. The participants, as we already mentioned, were not only the blind pupils but also the teachers. We had one major goal — to test whether the editor Lambda could be a benefit for our students — but during the last two years several minor, or you could say temporary, goals evolved as well. The first was to introduce the editor Lambda to Slovak blind students, to show them the benefits of this editor while they work with advanced mathematics. In the first course we were testing the English version of the editor, which was at that time a prototype. Afterwards, when we created the Slovak language version, our goal was to test this localized version. And in our last course the goal was also to find out how many students are using Braille code and Braille displays while working with mathematics, and to what extent, in order to know which way we should go in the future — because, as we already mentioned, in Slovakia we have no national Braille standard, so we would like to find a solution for this issue as well: whether we should adopt some national standard for advanced mathematics already existing in another country, or perhaps invent our own standard, or — if nobody is using Braille code while working with mathematics — maybe there is no need at this point to adopt a Braille code standard for advanced mathematics. We had similar major goals while we were working and cooperating with the teachers. The first was to discuss with them the cooperation in the class with the blind pupils. We also wanted to introduce them not only to Lambda but also to the linear style of creating mathematical expressions, because, compared with the Western European countries, the teachers in Slovakia — the teachers who are teaching these days — are not used to working with LaTeX or with similar software that uses the linear style of creating mathematical expressions. They also had problems initially when working with Lambda, because for sighted teachers it is a bit of a new thing.
And our third major goal while cooperating with the teachers was to show them the benefits of using the editor Lambda not only while working with the blind students and pupils, but also for themselves — to find the benefits also for sighted users, such as, for example, the graphic view. We faced several problems during our courses. At the beginning, when we were testing the English prototype, one of the major problems was that we couldn't communicate with skilled users, because at that time there was no skilled user of this editor. So we faced several problems because we didn't know how to answer the questions of our students. Basically, at the beginning we had problems with incomplete trial versions of the prototype. Another problem, which persists up to now, is that the editor Lambda is not compliant with a Slovak national Braille standard for mathematics — but that's not the fault of the editor Lambda; it's because we still don't have a national Braille standard for mathematics. And while we were localizing the editor Lambda, several problems appeared because the editor was still being updated: we would translate some functions, and then the editor Lambda would get new functions or lose some previous functions. That was the main problem while we were localizing the software. And we had two problems while communicating and cooperating with the sighted teachers. One of them is their computer literacy — they are not skilled, they lack important IT skills. And, as I already mentioned, the sighted teachers are not used to using the linear way of creating mathematical expressions, so it is also difficult for them to work in Lambda. So, our results: our major result was that we introduced the editor Lambda to the course participants. We received very good comments and remarks from almost all of them. We also found out that the blind pupils in Slovakia prefer to use speech synthesis compared with the use of Braille code — very few blind students in Slovakia are using Braille code while working with mathematics. And during these courses we discovered several problems with the original and also with the localized version. Another result was that we created a user manual of Lambda for blind students. So we can conclude that the courses were a benefit for our centre, as experience, and also for our participants, because they still use Lambda and we are getting positive feedback. We believe that Lambda can be a real help for Slovak blind pupils, and also that the localized version is helping them to improve their efficiency while working with mathematics. So these courses and the localized editor Lambda are a big step towards enhancing the work with advanced mathematics of Slovak blind pupils. Our future plans at our Support Centre for Visually Impaired Students are to spread the editor Lambda in the community of blind students and blind pupils, to continue to organize such courses as we did, and also to organize new courses for already experienced users, of course with advanced features. And of course we have to take into account the users' comments and the feedback we get. And last but not least, our goal is to monitor the situation of using the editor Lambda by Slovak students and their teachers. So we created some kind of mailing list and we try to keep in touch with the teachers and the students, to see how the situation is, how they use Lambda, what problems they have, and whether they need some help.
So now I give the word to Peter. A few weeks ago we sent to our students, the users of the Lambda editor, a small questionnaire with questions concerning the use of Lambda. Unfortunately we received only a few responses; here is some information from our students. First of all, the majority of our students use the functionality of Lambda for working with structures, that is, the functions which allow users to insert specific mathematical structures. This is important for us, because we had experience with some students who, before they started working with Lambda, had used some other linear codes. For them it was not easy to switch to another kind of input of mathematical characters, so in Lambda they also used the old linear code, and this was not good for them, because they lost all the nice functions of Lambda concerning structure browsing and the visualization, the conversion of mathematics to the visual form. A major part of the participants also use the conversion to standard mathematical form; I think this was expected, and it really works in our country. Only a very small part of the users in our country asked for Braille support in Lambda. The reason, as Martin mentioned, is that they do not have access to Braille displays, or it is not easy to get financial support for Braille displays, so they prefer using speech synthesis instead. Many of our students had problems with the big number of shortcuts in Lambda, but I think this is a normal problem with any editor which provides interesting functionality and also provides the possibility to work efficiently through many shortcuts; so this was not a surprise for us, but it was a problem for some students. Being non-experienced users, it was hard for them to quickly type mathematical symbols because of the shortcuts. Many users asked for further courses concerning Lambda, some advanced courses, and I think this is important mainly because it is important to have a possibility to train, to improve the speed of inserting mathematical symbols, because it is not easy to do this during standard lessons: the student must concentrate on mathematics and not on using the editor. So this is important for our students. We have only a small part of the teachers using Lambda. Our students also asked for some features. First of all, there are some bugs in the editor and they asked for these bugs to be fixed; as an example, there is a problem with nested fractions, and this problem has existed for a very long time, and we had the feeling that, mainly in the last year, the development of Lambda has somehow been frozen. Sometimes it is not easy to quickly find a particular part of the document, for example some part of an expression, so some students asked for features like place marking, which means a possibility to set some kind of place mark and to be able to jump to this place mark later. A few users also asked for improvements of the self-voicing: there is self-voicing in Lambda, and some of our students use Lambda on computers where there is no JAWS screen reader or Window-Eyes.
And this is a problem for them, because there are, for example, computers with NVDA, which is an open-source screen reader, and I think it could be nice to have support for NVDA, or to improve the self-voicing, because at the moment the self-voicing is usable for reading mathematics but not in dialogues and not in menus and so on. One student also asked for a save-to-text feature, something like "save as text file" for what Lambda says through the self-voicing. Okay, that is all about the feature requests, and on these slides there are contacts and information about resources, web pages and so on. So thank you very much for your interest. Questions? Thank you.
Two years ago the Support Centre for Visually Impaired Students at Comenius University in Bratislava started the localization of the Lambda editor. After successful adaptation, a few blind secondary school students were equipped with the software. The first part of the presentation reports on our experience with this system and on the localization process of Lambda. The second part offers basic information about Lambda courses for secondary school students and teachers, and about experiences collected during these courses as well as during usage in a real environment (mathematics lessons, exams, homework…).
10.5446/21338 (DOI)
Good afternoon, I'm Lorda, I'm from the University of Milan. In this talk Christian and I present our concept of multimodal notation for mathematics as a tool of thought for both visually impaired and sighted people; then we illustrate a first step towards multimodal notation in the case of graph structures. Notation is widely recognized as a tool of thought, used both for reasoning and for communication in several scientific contexts, in chemistry and particularly in mathematics. However, to become a tool of thought, each notational expression must be properly perceived. Blind and partially sighted people run into difficulty in making, exploring and understanding mathematical expressions and scientific concepts designed to be visually represented, on paper or on screen for example. Our goal is to develop a new multimodal notation for mathematics, to enable blind and partially sighted people to perceive and reflect on mathematical concepts that are spatially presented, to manipulate expressions in such notation, and to communicate to sighted people, as well as to each other, their reasoning and experience with mathematical notation. In our work we conceive multimodal notation for mathematics as a digital medium for thinking about and communicating mathematical concepts, one that adopts the symbolic language of mathematics and that makes its expressions perceptible as a set of multimodal signals: visual, auditory and haptic. Multimodal notation for mathematics is needed to overcome the flaws of existing notations for blind and partially sighted people: Braille notation forces mainly sequential access and does not permit spatial relations, while on the other hand speech and audio notations provide no permanent representation of the content. In short, such notations are partial and passive tools of thought. According to our approach, each notational expression is located in a virtual 3D space. The user can perceive it as a haptic box, whose outside walls are haptically perceived; they constrain the user within the box and help the user's orientation within it. The 3D space can also be perceived as an audio box, where audio signals inform users about their position in the space, and as a visual box, where visual signals denote the box components and a visual proxy denotes the user's position in the space. The 3D space is the virtual environment where the user can create and manage mathematical expressions through spatial relations. The interaction with the 3D space and with mathematical expressions in multimodal notation occurs through several devices: keyboard, screen, loudspeakers, and haptic devices, specifically the Phantom device. We have chosen this device because it permits a sufficiently wide 3D workspace and can create force feedback. As a first step toward multimodal notation for mathematics, we have focused our study on graph structures, which are mathematical expressions that represent various scientific concepts, from calculus to automata and so forth. Specifically, we have developed a prototype that supports visually impaired and sighted people in creating and exploring multimodal expressions representing graph structures. Now Christian will show you how to create and explore graph structures with our prototype. So, I am going to create a graph in three dimensions. I can move within a box which can be perceived through the force feedback.
I can place nodes in this workspace: for example, I can move wherever I want and insert a node freely, or I can insert another node. I can connect these nodes by an edge. To create an edge, I start its creation from one node and then I have to find the second node; here, once I have found it, I can connect it. To find the node, I used a guided exploration modality: I was attracted by force directly to the nodes present in the workspace, which are sorted according to their distance from the starting point. I can then create another edge, and another one; for example, I start, I move, I place a node and I connect it with the source node. I can also label the nodes, just to give them a name. I can write a label; for example, if it is a molecular structure, I can insert the corresponding chemical name. When moving in the structure, I can ask for the name of the node I am on by using the Phantom device. About exploration: so far I have used the exploration modality based on force attraction towards nodes, which means I was directly attracted to the nodes in the workspace. I have another mode to explore the graph. In particular, starting from a node, I may be interested in understanding in which directions edges go out of, or enter, that node. To do that, I have an exploration modality called "towards edges"; I can activate it, and then I am attracted along the edge and back. Doing that, I am able, at the end, to find the right direction to leave the node and to reach the neighbouring node. It is also possible to do some other operations, for example you can cancel or rename nodes and so on, which could be shown in a longer demonstration. As we have seen, we have used these three modalities, free exploration, towards nodes and towards edges, and I just want to give another view of the workflow used to design and develop this system. The prototype was developed according to a usability cycle, and two sets of experiments were done: the first, with an early prototype, aimed at assessing whether the haptic representation of graph elements was adequate, and the second series of experiments was done especially to assess the new towards nodes and towards edges tools, which are the exploration modes we introduced. They were performed with students who were experts both in the haptic device and in computer science. Let us come to some results. These results show that guided exploration with the towards nodes and towards edges tools decreased the time needed to detect and locate an object in the workspace, or to locate an edge and find where edges were present around a node. Also, when a user was exploring freely he was not sure of having found all the nodes in the workspace, but when using the tools he or she was sure at the end, because he or she had been attracted to all the elements present in the workspace, nodes or edges. For future work, we plan further evaluations, also with sighted persons, and the use of the guided tools in other situations, for example when there are disconnected sub-graphs, or to be driven through a sub-graph and treat it as a single node rather than as a set of nodes; so, specific tools for this kind of exploration. Thank you for your attention. Thank you very much.
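The talk itself gives no implementation details, so the following is only a rough sketch, in Python, of the "towards nodes" guided exploration idea described above; the names (Node, towards_nodes_targets, attraction_force) and the spring-like force law are my own illustrative assumptions, not code from the prototype.

    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        # A labelled point in the 3D workspace (hypothetical representation).
        x: float
        y: float
        z: float
        label: str = ""

    def distance(p, q):
        # Euclidean distance between two 3D points given as (x, y, z) tuples.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def towards_nodes_targets(cursor, nodes):
        # Sort the graph nodes by distance from the haptic cursor, as in the
        # guided "towards nodes" exploration: nearer nodes are offered first.
        return sorted(nodes, key=lambda n: distance(cursor, (n.x, n.y, n.z)))

    def attraction_force(cursor, target, stiffness=0.5):
        # Spring-like force pulling the cursor toward the target node;
        # the stiffness value is purely illustrative.
        return (stiffness * (target.x - cursor[0]),
                stiffness * (target.y - cursor[1]),
                stiffness * (target.z - cursor[2]))

    # Example: attract the user to the nearest node in a two-node graph.
    nodes = [Node(0.1, 0.0, 0.2, "A"), Node(0.5, 0.4, 0.1, "B")]
    cursor = (0.0, 0.0, 0.0)
    nearest = towards_nodes_targets(cursor, nodes)[0]
    print(nearest.label, attraction_force(cursor, nearest))

In the real system the force would be rendered continuously by the Phantom device rather than printed, but the ordering by distance and the pull toward the chosen node are the essence of the modality described in the talk.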
How much does that device cost, the cost of a haptic device, the one in your hand? The one in my hand is about 2,000 euros; there are cheaper devices, which could be an alternative choice, at about 200 dollars. The cost effectiveness can be improved by using a different haptic device based on the same principles. Our speakers finished a little early, so we have time for two questions from the audience. One question: those early evaluation results, was that you trying it out, or was it another person or a group of people? The first evaluation was with other groups; this one was with five blind persons, and the next one will be with three blind persons. We still have time for another question. A difficult conceptual problem for sighted users in graph theory is distinguishing between graphs that are planar and drawn as planar, and ones that aren't. What would the corresponding situation be here? So, within visualization in graph theory for sighted users, distinguishing between graphs which are planar and embedded as planar, or not obviously so, and ones that are not planar: is the same contrast perceivable here with your haptic device? Visualization of that is at the moment a problem, but with the haptic device, even if the graph is not planar, we can represent it.
Notation is a tool of thought in reasoning and communication. However, to become an effective tool of thought, each notation expression must be properly perceived. Blind and partially sighted persons run into difficulty in working with spatially represented mathematical expressions.
10.5446/21333 (DOI)
Lžen dvin, loosi, kaj igrat bore abrej, mi je Sam Do الشligno, še ste vsakm su goubro delaturt beli teba do svavoj student, pisamt del vermeli odpatsve dve Česká tem轉aklasti. viным hydrogenovanih k stacked ru flash heads? forestoj,voj ponenek collaboration 20 minutes appears na presentsha raha in skup clownlj. Zem s belt fleet, helpe gadget Tvarko Lilla visually education is already Glent, in je ongoing overseas held. Zamanim je to, kur biti nuk Give Me Zaman o model Pilag Iет, pa z Bobst attracting V собelo to hospitali sekunding t качеil ki to��는,きた pa je nuanceskvite v kanavih. Problem aircraft in Alexiče, jer smo zač ngườikanje krati in wiresne tisiendarnega grafita. Soli torega ma Test v našem vseh izprati�jing v rocks v sku schedule in nižak. Po realm loc asleepgel vvardelj. Vo. Ki pa absolut schneller ready v depositeljenu ta ne aprilim svično d novoj. Po vplega da rond pa načolog mп. In svojoraine rezdarim lagi, punja v ready. Per vol realize Hitler, vkr upper vsredi je shifted z izvandana glasbina in je tukaj bil,, homosexualna 72 colonizira, bolji v pas 57, je ti so analizaliinger, so bila teh vst puedesi skor bentoviti, dashb Police, desto الس굉 in pobruomeni. Takie zthejt Desanche in gafu bits je test barem. Ko ni si mnez Они skomely trenutimo any Türkiye, ljudi in poskostati tem smo jo ravno bilo nebo v več rocksu, ki fak attitude unfortunately into what Counseli za mudv sólo k konspon 맞아 pa Pistite,ater smive do na neča menju initiatives in evoč nekiver visitila, kde mi rokovili prato, ali počete, evoč se, ki rokovolo odok IGU, ta štouoč n길 bo recognitionulost in ni si po OKAMB kansi pogledaj prowadjev in tematik straightkor. So je toうž sits, davorjo informaty possible. Bagočo Sr constrained zahova� cih so v savim maske, po brosnih sva supportivek helipedOU matematikv. Movie dealeri misljeni se najkažba. implementationom in odve pastelško chili potem, kot je to ver bleeding, ko testi lep video, ver factova cheve badanje? Gotov dan su Hospital, botsagonfall za svega. Morasti so č Anthrop значne namė Trumpuisho ellas.. ki da se nestočονč v tem mudu radioicting Mercedes a s pro твоč 립 kot odlično modena in podeledom. unsereánma prosebe se jo st Jišveni sp Mohera in refounded ekipem. To je pozdravileno neštrug tudi, ki mogu res ponovlなた实no odpo Quranovak's. Kommentar ni� zo tudi. Shallakht soonافP只有 n הס dunf reopen. Zigdala in experimentova dvorezovske zrani.便 acidi kopniteljne document prayedikov, všakvaj sred 직접i serv Whiskeyta, kot videl tako vračovnok powderedanje in presented je kratita razprom Peter lance. v vнойte sredne izrodje. P subtle lahko poslediti najmoj pomedel ventilativ, kako danak negri wežit loveshtесть. Dod является, že te baz bore tez polova za odmah unexpected res sa inim svim attributednem nrvi容štvo. K photom je bang, ko ahoridimo denženo模Sir na silnjij sdji. Nače Jaz ikọi Clinical 색� Dancing 40, Viže začalaveno Chickening, začalavena, nederpostva, ki sem cell, kontrolom diezmi. Stato inWAYferstva bacteria ini mathematical tiesfi avschгаče rada.那麼 ez consisting, da situacije ga od Itostča ne jese modestce izgledati. H Safari vs Solutiona je zelo da veliko tezji zanes ez viče pasaidera...da loro70 malo na dela je, k luggage opación impacts interactive, vse modicide another finfik in vredne opasha w kolejil Librator. Č vs.honjamo in uati, nač na v� Humor, je se dej drone gaKO riotsovost kuk webinarje in sem pravili in malo oši mačlfar. Na svedku naredljama načno in delil pospe module wise, sp GoPro Resil Frauenvistsz. 
בכnico nozzle v nemakIT na ne vinivo nanie bo Boris je razvoj natek Ali čCommenta kako malo otročnao, because they were able to do that, i that one student in others were there. Rahve kajmon je tezneza in prob concerned in soyi inga dobanya že Zvijaving俗eli se? Mojazanem se na duček. ne v te investigation ni v nocosicjo napravo. Spolaj, ne nervuc Stejnic Krip co stej tudi cilj aktivnog nebo kilosvesti v notingjo se噢ivke domovkanjo, meni i v kor저 bol Sea2 golj, neko ci smo bila dal temipilivера drinv και n Eli watermelon. In pkeliko stages nasadi od ediblega iz gra disciplostmi. Ravnovi, da vas pri playing line je se tak loanilo, In nise dog乏ili, ne replay vanjo poro ze hologij,제�jone matko? OK, sližajte players, t Scanjuč, olan izval sincerely na taj domonu, našli lignac, mama tako fotografある imam, vidim,ό порod sie težel num Lion language, illicht Thenk and Sebali. 11 galino, tátr s3 recon, Orduzene zd давайтеא sочuffanje Vitečno, evo predstavnoaked cementnove insidnog dissolonov появ 와서 trenutno vr Неpeilu in Hejdej Evoookie Zelo apparatus ne je, da v drane i porgnaj ni zapravila release.<|nn|><|translate|> placami bagurui poseleанjem. Bagi pro lat demon reach pod nimi nešta komesč elimination z whatsitelej sem ga pov affection. Veš classej해요 je to kot so deleted. Vo saje poslash MA Still z Parkoritemimiş s t вотram Ležilo je obojo od Petra Uč MIL Solid sinkingo krvales bo dera pre twentiese, kam č parsley priti, pakrat sem se odpo lacksだ controlled miežite in rahmise no этих pa klat ne ga uhteli w rombili, judgeθulak ne mam Ohh disappointment na mo bottoms errori. I na svače-svače z nadoresъtinizi. V tazbih celjih prav про personalized malkih sklepe z tej Boholoralskih z проектov kaj da... kit več ga čefali na rigorous後悄ješčaj v � να getov irony in, transformations in remaked planet呀, in moj t pandey postavljachi tega od Mične za ta vpoljarska počega ide audi, dar leто več. Na sedu se ne umozi ta upset mač na izgledkelijo vise Second minor reka mayonnaiseiffany, premajnoj tom, pravo...... pokajaj se so zelo posuditi,在o popolite......to bi cautionaj, eso le p債 pa because well. Inm bojčko gr Zhongličko ne boj miselej s ne Buttontes discriminatibagsč bounced si pred потом s toj bojčin in režim Set ti bos. vičo si proj otevi parts like spidov ol charged. Zelo konucines navpe vteličnako še deli neti v mogu. Poloče to skrapi so in v Picture City so odvisela obrima, bil coordinate, lahko zgodil postavim v moleelosti ko skupmaster ne absolvitega koje se domne muze Pentagonенsega, na walks dane v latter. in očistvamもč Matko na pr motherboard, in yačo malo pa dogodno barjen. In ni možno odrečivam thickenovati prel idot be brIt hoe. Chairmeto. In bazoon bo toilo. Nмат Nos, tako koristici, On symbols nudo Bahaj bo motorsga cil in je bil je velik kajko in naga ponestaj v один slim z tih občito so vzeliči. Lepa sodatrih Fit bago je tactic strojni zalikor isti, je god tato kaj po informaciju su in above pop. Milak ti ga bi pl Nature bil izrežib nje, import Et Guo isemsec po sce nad bang jaw Dostaj removal jeruh, da je komp TSB in housing. In nekaj ga React, kot si ne intuitive protests give halj immediate pithne. sp�epures čev? No da.user kar pa. Er. pri dolls, pri dolls, Prireyp Hearts, pri vídeos, in pri pluginosti,oversvoj ko polamo, ki spl續 o mičanju, ampak svoja Uline skor nawet na vričecin. Leto ono bolji in ingrejo. Čavanje z vlečnimi studientri 진짜. 
Tako tako,estlyj si preddira se, Så je formančen boh del, Slava slicedva yani, Kjer da neo je paper, Tudiodles, Čeloovanju protivarthe Trpi tukaj, Care is not co confidence in thanks to studenti. Ar dol SI radomanje. Res nam zadal posleda. Res da, ko describe quantitative biomedical plajjez arguing разvojno vzelo villagesi, PrejdaM Dr�؟
In order for a technological aid to be efficient and efficacious in the educational sphere, it must increase the chances and capabilities of the pupil with a disability, and it must respect the user's requirements and characteristics without forcing the user. It should therefore be easy for teachers to understand and use. This is the conclusion our research group has reached after a period of experimentation with software developed by us, called BlindMath.
10.5446/21319 (DOI)
Ladies and gentlemen, students, colleagues, we would like to present our experience in supporting students with disabilities at Comenius University in Bratislava, and also short information about the opportunities for studying for students with visual impairment at Moldova State University, which was our partner in a Tempus project during the last year. Some basic information about Comenius University: it is the oldest university in the Slovak Republic and nowadays it has about 27,000 students. As for students with disabilities at the universities, we cannot give an exact number of students with disabilities at our university or at other universities, because there is no duty and no system of registration of students with disabilities; it means we only know the students who apply for special support, either at our university or at other universities. At almost all universities and faculties there is the institution of a disability coordinator, a contact person who can guide every person or student with an impairment and who can also offer some kind of consultation for academic staff. At some universities in Slovakia there is also some kind of support unit. Our university was the first university to create a support centre for visually impaired students; that was about 15 years ago. But I can say that from the beginning we offered some kind of support also to students with other kinds of disabilities, because for a long time it was the only specialized support centre at the academic level, and therefore students with other kinds of disabilities asked for help, for support and for consultations, and not only students from Comenius University but also from other universities in Slovakia. Today there are three specialized support centres in Slovakia, and students with disabilities prefer, I can say, those universities which are more experienced in supporting students with disabilities. Apart from Comenius University in Bratislava, there are universities in central, northern and eastern Slovakia where such support is available, and these support centres assist, I think, students with all kinds of disabilities. In Slovakia we have a very good legal framework concerning the opportunities for students with disabilities at the academic level. According to this law, higher education institutions are obliged to provide equal opportunities for the study of students with disabilities: an institution is obliged to prepare suitable study conditions for these students, adaptations for entrance exams and other exams, and it is also obliged to create financial sources for covering these provisions. Our centre is active in three main areas: educational and training activities; technical support, which also includes the transformation of study literature into accessible form; and, as a third main area, guidance, advice and counselling for applicants, for students with disabilities and also for academic staff.
As for academic support, I think I don't need to speak very deeply about this activity, but as for students who are interested in studying sciences, I have to underline that we try to contact these students very early, well in advance, to be sure that they are prepared enough for studying at the university, that they have developed suitable skills and competencies, and that the transition from secondary school to the university will not be so complicated that it becomes a reason why they are not successful. It means that if we know that some students at secondary school are thinking about studying mathematics, informatics or physics, then, in cooperation with secondary school teachers, with parents, with the students and with school representatives, we try to monitor their skills in reading and doing mathematics, in understanding graphics and in producing graphics, and we can offer them help and some kind of training or instruction on how to prepare better for studying this kind of subject. That is why, in the last two years, we have also been cooperating very closely with secondary schools and secondary teachers, and we try to prepare the teachers as well for educating blind students who study in an integrated way at secondary school: how to motivate them, how to educate them in mathematics and informatics, and how to motivate them to continue in their interest in mathematics. Because, as was mentioned also by my colleagues Peter and Martin, there is a real problem with supporting and educating blind students in the integrated environment in Slovakia. Sometimes, when they are studying at a mainstream grammar school, there is a lack or complete absence of textbooks, a lack or absence of accessible graphics, and a lack of special demonstration tools. Doing mathematics, blind students in secondary schools have limited equipment and limited possibilities to use modern technologies. In mathematics, in case they have not been using, for example, Lambda during the last year, they very often work with a mechanical typewriter, or they work with assistants, and they have limited possibilities of producing graphic representations. They depend on the creativity and the capability of teachers, who sometimes are not very skilled in this field. And they also need additional classes, which the school cannot always offer them. That is why, within the last two years, we have been trying to support preferably the schools that educate blind students, which is about 13 secondary schools in Slovakia, and to think about whether support in teaching mathematics to the blind in mainstream schools really means effective motivation for studying sciences at the university. Because in the last years, maybe the last five years, our blind and partially sighted students in Slovakia prefer to choose humanities studies at the university: they are studying law, theology, special education, teaching subjects, psychology, et cetera. And even if they think about studying mathematics or informatics, in the last year of secondary school they decide to study humanities instead. Why?
Maybe because of better access to the curriculum, maybe because of better access to information sources in general, maybe because of less dependency on the help of others during study at the university, maybe because of lower demands on special technology, because of the good experience of other visually impaired students, and probably also because the humanities faculties, or universities with humanities subjects of study, are more open to accepting students with disabilities, including blind students; it means that this environment is more accommodating for students with these disabilities. So, apart from the academic, technical and human support that our support centre offers to blind and partially sighted students, in the last years we also try, in cooperation with secondary schools, to monitor the situation in the education of integrated blind students. We prepare training courses for the blind students and also for their teachers. Our colleagues are active in the localization of usable software, first of all software for blind users. We also deliver information about new assistive technology and new software: any information that can help future students to prepare better for university study, information for secondary teachers, and information for students at the university. On the website of our support centre there are areas where students and applicants can find all kinds of information on how to prepare and how to be successful in university study. That is about our activities. We are also active in some projects. Within our projects we provide accessible workplaces for blind students at eight Slovak faculties of other universities, where blind students and also academic staff can work with the support of assistive technology. We organize workshops on how to use special technology, on what blind students can do with assistive technology and where the limits of assistive technologies are. And we are also active in some international projects. One of them was the Tempus project, in the last year, in 2007 and 2008. In this project, in cooperation with the Technical University of Karlsruhe in Germany, with its Study Centre for Visually Impaired Students, and with Moldova State University and the Moldovan Blind Union, a new support centre, the "Support Centre without Barriers", was created. About this support centre my colleague from Moldova, Natalia Kutuk, can speak. But there are only four minutes left, including questions. I will be very short; my presentation will be very short. So, my name is Natalia Kutuk, I'm a lecturer from Moldova State University in Chisinau, Republic of Moldova. This year, in September, we opened our centre, named the "Support Centre without Barriers". It was built within the frame of the European Tempus project, according to the model of the similar centres at the University of Karlsruhe, Germany, and Comenius University in Bratislava, Slovak Republic. The main goal of our centre is the same goal that this kind of centre should have: to support the integration of blind and partially sighted students into university study and life. The centre has the following software: the screen reader Virgo 4.7, with speech output and Braille display support, and the screen magnification software Galileo. We also have the following hardware: a SmartView Xtend video magnifier, a portable video magnifier, a Braille display, a data projector, two HP workstations and an HP mobile workstation.
A Braille printer, an HP laser printer and a scanner. At our university we have 60 partially sighted students across the faculties; I think it is enough for us. At our centre there are two lecturers and one social assistant. So today I feel like a student in the first grade: it is a lot of information, so I only listen and try to remember all the new things. And please, I ask you for advice for our centre. I have some flyers about our centre with me; if you have some advice for me, please tell me. Thank you.
An early and targeted preparation for university study is a fundamental prerequisite of future success. The paper offers an overview of activities of the Support Centre for Visually Impaired Students focused on early preparation of students with VI for studying sciences at university, and of the forms of special support for students and academic staff during the study. As a result of a TEMPUS project, the “Support Centre without barriers” at Moldova State University was created. The state of the art of study opportunities at MSU, the mission of the centre and plans for international co-operation will be presented.
10.5446/21324 (DOI)
Now I'm going to show how teachers, support services, or even schoolmates can help visually impaired students to have their documents produced in an accessible way. Briefly, the topics that I'm going to touch on during my presentation: first of all, the scenarios in which educational resources are used; then the problems in accessing scientific documents; what the accessible output formats are and how to produce documents in these formats. Then I will show some examples of what visually impaired students read in accessible scientific resources, and finally I'll touch on an experimental topic about editing using speech input. The scenarios for the exploitation of educational resources are lessons, during which slide presentations are usually used; study materials, which can be the slides projected during lessons, interactive web pages, digital documents and others; and then textbooks. So, briefly, what are the problems in accessing scientific documents? Scientific contents, for example formulae, equations, expressions and so on, are usually inserted in digital documents as images, even though there are markup languages that could make these formulae something different from images. And very often these images don't have a meaningful alternative description. What I mean by a meaningful alternative description is that, for example, if the lesson is about the Lorentz equations, a meaningful alternative text is one that reports the equations as they would be spoken, not simply "Lorentz equations", because that doesn't help the student get the content of the equations. So for blind students, using images in documents gives no chance to explore them, and the formula is not accessible at all; and without the appropriate plugin, even the formulae or expressions that are inserted with a markup language cannot be interpreted by mainstream screen readers. For partially sighted students, images usually have low resolution, so when they magnify the image they lose the contrast and the quality that they need to interpret what the image is; also, the contrast is not customizable. What does make a formula accessible to visually impaired people? First of all, adding an alternative description to every image of a formula, of course a meaningful description, and then embedding a markup for mathematics that can be interpreted to generate spoken math output, for example MathML. The accessible output formats that we consider in our analysis are LaTeX or TeX, accessible and structured PDF, XHTML plus MathML, and DAISY plus MathML. So, first of all, how to create accessible LaTeX documents. LaTeX documents should be written to be human readable, for example edited using TeXnicCenter, because tools that automatically generate LaTeX do not usually create human-readable LaTeX. So LaTeX is not an accessible format per se, but if you edit it in a human-readable way it becomes accessible for blind people. Then, accessible PDF files. PDF files are also not intrinsically accessible, but if they are produced in a certain way, following certain rules, they become accessible: a PDF is accessible if it is tagged, if it has an appropriate reading order, and if all formulae and all images have an alternative description. For further reference there is a guide published by Adobe, "Creating accessible Adobe PDF files", where you can find all the information you need to make accessible PDF files. Before we come to how to create such PDFs, the short sketch below illustrates what is meant by human-readable LaTeX.
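A minimal illustrative sketch, not taken from the talk, of the difference between human-readable LaTeX and the kind of markup automatic converters tend to emit; the formula is an arbitrary example.

    % Human-readable LaTeX: easy to follow when the source itself is read
    % with a screen reader or a Braille display.
    \documentclass{article}
    \begin{document}
    A simple fraction:
    \[
      f(x) = \frac{1}{x^{2} + 1}
    \]
    \end{document}

    % The same formula as an automatic converter might emit it (illustrative only):
    % {{f}\left({x}\right)}{=}{{{1}} \over {{{x}^{2}}{+}{1}}}
    % It renders the same, but is much harder to read linearly.

Both versions compile to the same output; the point is that only the first one is pleasant to read at source level.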
So, you can create a structured and accessible PDF using Adobe Acrobat Standard or Acrobat Professional starting from different sources. Those that we analyzed are MS Word, PowerPoint and Excel, that is, the Microsoft Office suite, and the OpenOffice.org suite: Writer, Impress for presentations, and so on. The important thing to keep in mind is that, in order to produce a structured PDF, you have to use styles in your documents: a style for headings, one for text and one for all the other elements in the text. You can also add tags in Acrobat Professional to an existing PDF document, after it has been exported to PDF, but this is neither the best nor the most efficient way to do it. How to create accessible PDFs with scientific content: there are different options, and those that we analyzed are using Microsoft Word plus Design Science MathType and then Adobe PDFMaker to create the PDF, or using OpenOffice.org Writer and OpenOffice.org Math and then the embedded export to PDF. So, how to create an accessible PDF with scientific content from Microsoft Word. You should have at least Microsoft Word, Design Science MathType and Adobe Acrobat Standard on your computer; if you have Professional, even better. You edit your Microsoft Word document using the styles, as I told you before; in this screenshot there is a Heading 1 title highlighted, and on the right you can see the Word window with the formatting styles. When you need to insert a formula, you go to the Design Science MathType menu and choose Insert Display Equation. Why a display equation and not an inline equation? Because there is a bug in Adobe PDFMaker: when you insert inline equations, the formatting is not preserved when you create the PDF, for example the equation is put at the end of the sentence, which does not even preserve the meaning of the document. Then, once you have chosen Insert Display Equation, you edit your expression in MathType, and for the alternative text the easy way is to set the preferred translator to LaTeX. Here are the steps in detail: go to the Preferences menu and click on Translators, then check the option Translation to other language, and from the drop-down translator menu select one of the LaTeX or TeX options; then confirm your choice by pressing the OK button. Once you have done that, you can select the formula you have just inserted and copy it to the clipboard. Go back to Word, insert the formula, then right-click on it, and on the Web tab of the Format Object dialog that pops up, paste the LaTeX or TeX content you have just copied from Design Science MathType. Now save the document and convert it to Adobe PDF. Here there is a screenshot that shows a document with a matrix. Now, how to create accessible PDFs from OpenOffice.org Writer. Of course you need Writer and OpenOffice.org Math installed on your computer; notice that with this approach you do not need to have Adobe Acrobat Standard or Professional installed. So, edit your document in OpenOffice.org Writer using styles, exactly as you did in Microsoft Word, so that your document will be structured. Again, there is a title here, called "A fraction"; it is highlighted, and on the left there is the window with the formatting styles. Then insert your formula using OpenOffice.org Math, which uses its own language to edit equations; a small sketch of that syntax follows below.
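As a rough illustration, my own and not from the talk, of the OpenOffice.org Math plain-text syntax just mentioned: a fraction and the three-by-three identity matrix used as examples in this presentation could be entered roughly as follows, assuming current OpenOffice.org Math conventions.

    f(x) = 1 over {x^2 + 1}

    M = left ( matrix{ 1 # 0 # 0 ## 0 # 1 # 0 ## 0 # 0 # 1 } right )

Here "over" builds a fraction, "#" separates columns and "##" separates rows of a matrix, and "left (" and "right )" add the surrounding brackets.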
OpenOffice.org Math opens up at the bottom of the Writer window and you insert the formula using its own language; you can refer to its guide to know what symbols you have to type to get your equations. Then edit the alternative text of the formula: either you copy and paste the OpenOffice.org Math markup, which of course presumes that the target user knows the OpenOffice.org Math syntax, or you edit a LaTeX alternative text for the formula, either with a LaTeX editor or by hand if you know LaTeX. Then you attach it to the formula: open the Object item of the context menu and paste what you have copied into the alternative text tab. Then save your document and directly export it to PDF using the OpenOffice.org command. Now, this is the screenshot of an accessible mathematical document. Of course all sighted users will notice that this looks just as a PDF should. There is a title and there is a matrix. What a blind person reads in this document is the title, "Matrix", and then the LaTeX that you edited for the matrix. And this is the view, a very, very enlarged matrix, that a partially sighted student could see of the document; you can notice that there is no loss of definition in the appearance of the matrix. Now, how to create XHTML plus MathML documents, using either Microsoft Word, Design Science MathType and Design Science MathPage, or an editor: if you are already coding a web page you can insert the MathML markup created with Design Science MathType, or with other kinds of MathML markup editors, which are anyway not as accurate as MathType is. So, the first case: using Microsoft Word, MathType and MathPage. Edit your document with Microsoft Word applying styles exactly as you would for a PDF, then insert your content with MathType exactly as you did before. Save your document, and now use either the Export to MathPage icon on the toolbar of Microsoft Word or the MathType drop-down menu, again in Microsoft Word. How to create XHTML plus MathML documents for web pages: in case you are editing the code of your page with another editor that is not Microsoft Word, you can always use MathType to produce the MathML markup that you paste into your code; from the Translators window that we have seen before you have to choose MathML 2.0 with the m namespace. Then display your web page with Internet Explorer and the MathPlayer plugin, which is highly recommended to make mathematics accessible for visually impaired students. So, what does a blind person read in an XHTML plus MathML page? If you display your page in Internet Explorer using the free MathPlayer plugin, the sentence spoken for the previous example will be: cap M equals matrix with three rows and three columns; row 1, column 1, 1; column 2, 0; column 3, 0; row 2, column 1, 0; column 2, 1; column 3, 0; row 3, column 1, 0; column 2, 0; column 3, 1; end matrix. A sketch of the markup behind this is shown a little further below. Scenario one: lessons. During lessons, teachers usually show slides or write on a blackboard. If a teacher is giving a lesson writing on a blackboard and reads aloud what he is writing, the visually impaired student can follow the discussion; likewise, if he says, for example, "go to equation number five", and he uses numbered equations, the blind or visually impaired student can follow the lesson. Slides can be either handwritten and projected, or in a digital format like Microsoft PowerPoint, OpenOffice.org Impress or Google Docs.
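Returning for a moment to the XHTML plus MathML example above: the LaTeX alternative text for that matrix would be something like M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, and a hand-written sketch of the corresponding MathML is given below. This is only an approximation of what MathType's translator emits (the real output uses the m: namespace prefix and additional attributes), but it gives an idea of the structured markup that MathPlayer speaks and that can be navigated element by element.

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mrow>
        <mi>M</mi>
        <mo>=</mo>
        <mfenced open="(" close=")">
          <mtable>
            <mtr><mtd><mn>1</mn></mtd><mtd><mn>0</mn></mtd><mtd><mn>0</mn></mtd></mtr>
            <mtr><mtd><mn>0</mn></mtd><mtd><mn>1</mn></mtd><mtd><mn>0</mn></mtd></mtr>
            <mtr><mtd><mn>0</mn></mtd><mtd><mn>0</mn></mtd><mtd><mn>1</mn></mtd></mtr>
          </mtable>
        </mfenced>
      </mrow>
    </math>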
Digital presentations are often uploaded to the university or school website for student information, so it is really important that they are accessible. How to create accessible digital presentations with math content: you can use, as shown before, PowerPoint with Design Science MathType, putting alternative text on formulae and images, or Impress plus OpenOffice.org Math, optionally exported to PDF to make your document easier to handle and more portable, or HTML slides, which are anyway quite intricate to use. The step-by-step procedures you have to follow to create accessible presentations with mathematical content are the same you use to create accessible scientific PDF documents; that is, where you used Word you now use PowerPoint, and where you used Writer you use Impress. Study material, the second scenario. Study materials are usually exercise sheets, documentation produced by teachers, and web pages or other kinds of digital documents. How to create study material with scientific content: one option is to use styled Microsoft Word plus Design Science MathType to insert equations and expressions, exported to an accessible PDF with alternative descriptions on formulae and images, or a structured OpenOffice.org document exported to PDF with alternative descriptions on formulae and images. Or, in the case of accessible web pages, you can use Microsoft Word and Design Science MathType exported to XHTML plus MathML via MathPage, or websites coded by hand in XHTML and MathML, always with alternative descriptions on images. The third scenario is textbooks, which Emilia presented before as the DAISY plus MathML books. As she told us before, the first player supporting MathML is the GH Player 2.2, and production tools for MathML in DAISY are still under development. Editing using speech input: during our tests we found that the process of creating documents this way can be a little verbose and sometimes time consuming. Even though MathType, for example, is a WYSIWYG editor, so you have all the symbols you need in different palettes and you can use shortcut keys, if you are not familiar with the palettes, and not familiar with the shortcut keys either, this operation of inserting mathematics can be a little time consuming. So we thought that a speech interface for editing scientific documents could be useful, especially if math can be spoken in natural language. We made a prototype of a speech input interface for Microsoft Word and Design Science MathType using Nuance Dragon NaturallySpeaking for the Italian language. The commands to insert mathematics are like natural-language ones: multiple ways of speaking a symbol, depending on the context, are mapped onto one symbol, so the person using this speech interface is not forced to learn a particular dictionary of words; you just read aloud what has to be entered, and all the rest is handled by our prototype. From our preliminary studies we noticed an improvement in the editing time, which became quicker not only for non-experienced users, as we expected, but also for experienced users of MathType. Further studies and development will be carried out in this direction, because we are really at a starting phase, but it seems promising, so we are going to investigate it further. These are some references where you can find information about these topics.
There is the Design Science website, the Adobe guide that I mentioned before, openoffice.org, and our conference paper where we presented the speech input interface. Thank you. Yes? In addition, we know that there is an Office suite for Mac, and MathType also comes for Mac; I guess you can use those. I did not have a chance to actually test that, but I will do it in the future. Can you give an impression of what tools are available on the Linux platform, because that would be a free system? About tools: we did not try these step-by-step procedures on the Linux platform. On the CD of this conference there are some presentations from the last workshop where some tools for Linux platforms are also analyzed, and you can find further information there. Actually, our analysis was aimed at giving support services and teachers a way to produce accessible documents, and the Windows platform seems to be the most widespread at this moment. One question from me: when I want to make an accessible OpenOffice math document and I insert the formulae in OpenOffice.org Math syntax, is the translation to LaTeX then done automatically by OpenOffice Math, or do I have to supply the LaTeX version if I want to see it in the accessible PDF? There is no automatic way to translate OpenOffice Math syntax into LaTeX. But then it is very much work. So it would be absolutely a nice and good idea, I would say, if somebody from the OpenOffice community would write such a routine. Which kind of syntax is it with OpenOffice Math: is it, in a sense, MathML-related or LaTeX-related? I think it is closer to LaTeX than to MathML; for example, for a fraction you have to type the word "over", so it is really closer to LaTeX. In any case, I think you gave a very comprehensive and usable study; it is a kind of handbook for people or organizations who are committed to supplying accessible mathematics. Very usable, I think. Thank you. Other questions? During our workshop in Paris in February, I presented on OpenOffice accessibility, and I found that OpenOffice lost the alternative text for formulas between saves. Is that still the case? Not in the cases that we analyzed, actually, because we copied, for example, the OpenOffice Math syntax, right-clicked on the formula and used the format object properties; if you go there, there is a dialog with some fields you have to fill in with the alternative text. In the tests that we made, the PDF always had the alternative content. Right; if you export directly to PDF just after creating a document, then it is okay, but if you save it, close it and open it again later, then you will find that the alternative text is gone. We did not find these issues. I do not know if it depends on the OpenOffice version that you have, but we did several tests, we made lots of pages, and it seems to be working fine. Okay, I will check with all the versions. Other questions? OpenOffice 3 was launched last week, and it has improved accessibility for all platforms; the accessibility was checked, and on the OpenOffice wiki you can access documentation of all the platforms and all the access software with which it was checked. So it works on Linux, Mac and also Windows. And it is completely free, this is a big difference, and open source. In fact, this was one of the reasons why we tested OpenOffice.
Largely because, it being open source and with no need to install Acrobat Standard, you can produce accessible documents at zero cost. Exactly: you can export to PDF directly, with no need to pay for Acrobat. Other questions or comments? Yes. The concept of your speech input interface for Italian sounds quite similar to our own TalkMaths one for English. I realize there is perhaps not time to discuss this now, but perhaps in a break or something we could talk about how you have implemented it and how we have implemented ours. Okay, sure. So thank you once again. Thank you.
One of the greatest obstacles that visually impaired students have to overcome during their high school and university careers is having accessible educational resources as their mates have, namely books, slides, study material, tests, exercises, and so on. Unfortunately, up to now, having these documents promptly ready as the need arises has not been possible. In our presentation, we show how to produce the different kinds of resources needed during the study years in a manner that makes them both accessible to visually impaired students and easy to produce with everyday software tools (e.g., MS Word, Design Science MathType, OpenOffice). We demonstrate through examples the steps needed to create accessible digital documents with scientific contents (formulae, equations, expressions, and diagrams).
10.5446/21327 (DOI)
Okay, good morning, my name is Emilia Bergzon. I'm a project manager at Dedicon, in the Department of Research and Development. Dedicon is the organization in the Netherlands that makes information accessible for people with a print impairment. So we make information accessible for people with a visual impairment, which includes for instance blind people but also people with low vision, and a large target group are also the dyslexic people that Neil already mentioned this morning. I was asked to tell you something about DAISY. Now, I know that part of the audience is very familiar with DAISY, but my talk is aimed more at the people who do not know so much about it; so, well, my apologies to the other half of the audience. What I am going to talk about is DAISY, what it is, and I will show you how it works; I will pay special attention to mathematics in DAISY, and I will tell you something about the current developments. Well, DAISY is an acronym: it stands for Digital Accessible Information System, and it is a digital format to represent multimedia information. Multimedia information in this case means audio, text, images, and in the future also, for instance, video. DAISY is an open standard, worldwide in use in many countries, and it is developed and maintained by the DAISY Consortium. The DAISY Consortium is an organization that was formed in the nineties, and there are many... Oh, okay. Well, this is a bit confusing. Okay, maybe I should speak more like this. Didn't you get anything of what I said? Yes? Okay. So the DAISY Consortium is composed of organizations who work especially for the visually impaired, like libraries for the blind and producers for blind and visually impaired people, but there are also many friends of the Consortium: vendors of equipment, developers of new tools, et cetera. I think the best thing to do is first to give you a little demonstration of how DAISY works. I think the starting point to keep in mind is to think of audiobooks on audio cassettes. We had those in the Netherlands and in many countries; that was one way of providing blind or visually impaired people with a means of access to content, to books. When you have, for instance, a textbook when you are in school, and it is a somewhat larger textbook, then you would typically have ten or twenty cassettes, all of them having two sides, and the only way to navigate through them would be fast forward or rewind, and knowing which cassette to pick. With DAISY there is a huge improvement in the navigation through your content, and also... now I have to switch this on. "Welcome, Victor Reader." With DAISY you will typically have your book on one CD or on one SD memory card, for instance, so that is much more portable than having ten or fifteen cassette tapes, and the book will be much more structured. So I have put a CD in here, and I have picked an English example book, so that maybe you can follow what is being said; but the DAISY player is a Dutch one, so the acoustic signals coming from the player will be in Dutch, but I will tell you what is happening. This is a very simple version of a DAISY player, very suitable for people who do not want a very complicated device; there are much more sophisticated ones that you can look at after my talk. This one has big buttons, like a cassette player, for play, fast forward and rewind. So first, and I hope this will work, I will let you hear how it sounds.
"Popular culture: an introduction. The aim of this book is to introduce students and other interested readers to the study of contemporary popular culture." So that would be just play and stop, but because this book has added structure to it, I can navigate through the levels of the book, and the levels of the book would typically be the chapters, the subsections, the preface, and I will let you hear this. So now it is at level one. I am now at level one; level one will be the chapter level, so if I go to the next one within this level: "Chapter two, television." So you will hear it go to chapter two, and if I go to the next one: "Chapter three, fiction." So this is a way to navigate through the book a bit like a sighted person would do, just searching for the next chapter or the next subsection. You can also navigate on page level, so if your teacher tells you to look at page 87 in your book, then with the more complicated version of this player you can type in 87, go to that page, and you are immediately there where your classmates are. Or, if you have to do your homework, you can find the paragraph where you are supposed to do your assignments. Another nice feature is, and I hope you can hear it, that I can speed up the reading or let it go slower. "Reading popular culture... first published in 1933." There are more settings that you can adjust, but this is to show you that a user can do a lot to accommodate his preferences. So I showed you a hardware player; here is a smaller one, but this is a very sophisticated one, you can do much more with it, and this is a very portable one, designed especially for blind users. So there is no screen on it, just buttons; you can feel what you can do and everything is given as an acoustic signal. I showed you these hardware versions; there are also software versions to use on your computer. This is one example of them, the GH Player, and I have permission from the vendor, GH, to use it. It is an early release, not yet the final one, so it is not perfect yet, but still I can show you what I mean. First I will show you a simple book, quite a simple book; it is an Italian book, and here you can see that you have the text and you will hear the audio, but the audio is recorded with a synthetic voice, because I did not have an Italian reader at home to record it. So, I hope, well, we will see how it sounds to you. A software player will typically have, at the bottom, some buttons to play and go forward and go backward; there is a section that you can use to navigate through the book, so you can go by page level, you can go by section level, you can even go per sentence, per phrase, or even per word through your book.
And at the top there is a button bar that you can use to set your preferences. You can zoom in, change the contrast, change the background and foreground colours, choose a different font, and adjust the volume and again the reading rate, which is represented by a running figure or a more slowly walking figure. You can also use synthetic speech with the gh Player instead of recorded audio — this one is set to audio, so I will find the audio and start it playing. [Plays.] To me this sounds a bit fast; to the Italians in the room it is apparently normal, but I will set the rate a bit slower anyway. What you can see is that while you hear the audio, the text is highlighted along with it: there is synchronisation between the text and the audio. As Neil already said, this can be very helpful for people with dyslexia, but also for people with low vision and people with other learning disabilities, because they get some extra support. At the left of the screen there is a kind of table of contents — I will zoom in on it — and I can go, for instance, to section 3.1 by clicking on it. Again, it is the DAISY structure that I use to navigate; you can even have levels beneath that. You can also list the pages on the left and say: I want to go to this particular page and continue my reading there. As for the kinds of DAISY books: you have just seen an audio-only book on the hardware player and a text-and-audio book here; you can also have text-only books, which are useful for someone using Braille, for instance, or synthetic speech. The reading material made available this way is really everything you can think of: leisure reading, newspapers and magazines, textbooks for education from primary school up to university level, and professional literature. The particular benefits of the DAISY format are the portability, the audio quality, the absence of physical deterioration, and the navigation it provides. With a text-only or text-and-audio book you can also search within the text, so you can really look for a word and start reading from there. There are adjustable settings, and another feature is the skippability of elements: with an audio cassette you have to sit through everything — all the footnotes, for instance — or fast forward past them, while in DAISY you can say "I do want the footnotes read" or "turn them off", and the same goes for other elements that a book can contain. Now, we are here for the accessibility of maths and science, so let me talk a little about maths in DAISY. Until recently you could include maths in a DAISY book, but only as an image: a formula would be turned into a picture, and a picture in itself is not very accessible to someone with a visual impairment, so you had to add a textual description, available as text or as audio. But you had no navigation within a formula, and you could not use any user-defined settings to present it.
Because the DAISY 3 standard makes it possible to extend DAISY with other standards, the DAISY Consortium installed a working group to look at exactly this problem: extending DAISY so that mathematics becomes more accessible. I am part of that working group, and several other members are here in the audience today. The working group produced a modular extension to DAISY for MathML; it was approved last year and it happens to be the first modular extension to DAISY, so we hope other applications will follow. On the right you can see the DAISY logo and the special version with MathML added. To extend DAISY for mathematics we used MathML, which is a W3C standard, so you are quite safe on that side. The maths is saved as structured information, not as one specific rendering, which allows you to present the content in a wide range of forms: again as a graphical, two-dimensional formula, or as text-to-speech, or as large print. MathML also lets you navigate at a very fine level through a formula, which I will demonstrate in a moment, and again we have the possibility of synchronising and highlighting audio and text even within a formula. I'll show an example in the gh Player. This is really special, because gh already has an established player for all the DAISY books that are around, and the version I am showing is already prepared for maths. When I open a maths book you see a splash screen for MathPlayer, which Neil talked about this morning: gh works together with Design Science and uses the MathPlayer technology to make the maths work. I have an example here of a book with formulas in it — a maths textbook. [Player reads:] "m equals, fraction, y minus y sub zero over x minus x sub zero, end fraction." So what you hear for a fraction is "fraction" — or "start fraction" — at the beginning and "end fraction" at the end [with a little technical assistance from the audience on the exact wording]. I'll let you hear it again. Now suppose that, as a student, I find this a bit fast and want to explore the formula. I switch to word navigation and go back to it: "m equals fraction y minus y sub zero..." — now I am in the middle of the formula, and with my arrow keys I can step through it: "...zero over x minus x sub zero." And I can also step back: "x sub zero, minus, x ... fraction, equals." For this small formula it may not seem so impressive, but there are very large formulas where it is really practical to go through them part by part. [Question:] Can you skip things — skip some sub-expressions, say everything inside a parenthesis, or say "skip the denominator"? — You can go to another, coarser level, where it just reads the complete thing, but skipping the denominator as such is not there at the moment.
But because everything is stored in MathML, you could make it do exactly that: you could have your application do it. So I have just shown you a maths-aware player — and I love that question. [Comment from the audience:] Sometimes it is valuable to characterise a part of an equation — a parenthesised part, for example — as just that, without reading it out in full. Can you do that? — At the moment that functionality is not there, but because all the information is saved as MathML inside the DAISY book, it could be programmed; it might be a good idea to propose it to gh, for instance, and tell them you would really like that functionality added. [Audience:] Yes — a reader could say "x equals, begin fraction, sub-expression one..." and you could then go back and dive into a sub-expression to understand it better; getting an idea of the overall structure first is often what people want. — Exactly, and this is just the beginning. Because it is all MathML, you have the structure, and an application could first give you the coarse level and then the fine-grained level. MathML tells you where things begin and end, because it is a tree structure, and a player can use that to the full. [Audience:] Are you planning to implement those features in the future? — Well, I am not from gh, but I can certainly suggest it to them. So, an advanced MathML-aware player, as you just saw, has a native understanding of MathML structure and semantics: it is able to render and navigate within the MathML content in an intelligent way — maybe even more intelligently in the future, with the suggestions from this audience. The maths image is created dynamically, and the reader can adjust the way the maths is read. Again, this is at an early stage, and there are many ideas for improving it. The production of these books is generally done in production centres — you have them in Italy too — with studios for recording books and text production for Braille or for magnification. But you can also produce DAISY yourself, as an individual, at home or at work, and that is what I want to show you next. Here is the very short outline: I will use Word 2007 — it also works with Word 2003 and Word 2002/XP — make a Word document with maths in it, save it as DAISY from within Word, and then, if there is still time, play it in the player. So, this is Word 2007. I could type the document now, but to save time I have prepared it already. I have used the styles in Word: this has the Title style, there are some headings, and at the bottom you can see a maths formula which I made with the equation builder that comes with Word. Then I go up here, where you have your normal buttons like New, Open and Save.
And you can see that I have installed a Save as DAISY add-in, which you can download for free. When I choose Save as DAISY, a window appears: it takes the part I marked as the title, lets me add a few extra things, and then it translates the Word document, including the maths, into a DAISY XML document. There — the document was successfully translated. And now there is a "My Stay in Milan" XML document with maths in it, which I can open in the gh Player just like that. [Player:] "My Stay in Milan. Chapter one, Saturday." Now I'll go to the maths part. [Player:] "...to do some mathematics, let me insert an equation..." — you can hear some laughing; as I said, it is not perfect yet, and there is a slight mistake in the translation within the formula. This is the fine-tuning stage; it is an early release, and I am already very proud that I can show it to you at all. Because this is direct: you just sit at your desk, open Word, make your Word document, insert your maths, and you have a ready-to-play DAISY XML file. The DAISY Consortium also has the DAISY Pipeline, which you can now use for free: you can take this DAISY XML file, put it in the Pipeline, and it will turn it into a complete DAISY book that you can play with your DAISY player — your hardware DAISY player, for instance. I will leave it at that, due to time. If there are more questions, which Bernhard maybe will not allow now, you can talk to me in the break, and if you would like an individual demonstration — I have three types of DAISY players and software with me — please come to me and ask. Thank you.
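The points raised in the questions above — reading a formula term by term, diving into a sub-expression, or skipping the denominator — all become possible precisely because the formula is stored as a MathML tree rather than as a picture. Purely as a rough illustration (this is not code from gh Player or from the DAISY tools; the variable name m, the spoken wording, and the deliberately simplified element handling are all assumptions), a small Python sketch of how a reading application might walk such a tree for the slope formula used in the demo could look like this:

```python
# Minimal illustration of walking a MathML tree for spoken rendering.
# Not part of gh Player or the DAISY toolchain; simplified on purpose.
import xml.etree.ElementTree as ET

MATHML = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>m</mi><mo>=</mo>
  <mfrac>
    <mrow><mi>y</mi><mo>-</mo><msub><mi>y</mi><mn>0</mn></msub></mrow>
    <mrow><mi>x</mi><mo>-</mo><msub><mi>x</mi><mn>0</mn></msub></mrow>
  </mfrac>
</math>
"""

NS = "{http://www.w3.org/1998/Math/MathML}"

def speak(node, skip_denominator=False):
    """Return a list of spoken words for a MathML node."""
    tag = node.tag.replace(NS, "")
    if tag in ("mi", "mo", "mn"):
        return [node.text.strip()]
    if tag == "msub":
        base, sub = list(node)
        return speak(base) + ["sub"] + speak(sub)
    if tag == "mfrac":
        num, den = list(node)
        words = ["fraction"] + speak(num) + ["over"]
        # A reader could choose to summarise or skip parts of the tree.
        words += ["(denominator skipped)"] if skip_denominator else speak(den)
        return words + ["end fraction"]
    # mrow, math and other containers: just read the children in order.
    words = []
    for child in node:
        words += speak(child, skip_denominator)
    return words

root = ET.fromstring(MATHML)
print(" ".join(speak(root)))
print(" ".join(speak(root, skip_denominator=True)))
```

A real player would of course handle far more MathML elements and drive speech output and highlighting instead of printing text, but the tree traversal is the essential point: the same stored structure supports full reading, coarse overviews, and element-by-element navigation.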
DAISY is a digital format for the representation of multimedia information such as text, audio and images. DAISY is an acronym for Digital Accessible Information SYstem. The open standard has been developed and is maintained by the DAISY Consortium. It has been adopted in many countries worldwide to provide accessible information to people with a print impairment, such as blind people, people with limited vision and people with dyslexia or other learning disabilities. The standard is used by production centres and libraries for a wide variety of information, ranging from leisure reading, educational and professional literature to newspapers and magazines. Delivery formats include audio-only books, text-only books and hybrid books containing synchronised audio and text. DAISY books can be played with DAISY hardware and software players. The standard offers rich markup options which enable users to navigate directly to specific sections, chapters, pages and sentences within a book. Other features are searching within text, resuming the last reading position and adding bookmarks. In 2007 the DAISY standard was extended to include MathML. Previously, mathematical formulas had to be inserted as images accompanied by a textual or audio description. With the new extension, formulas are accessible in themselves, making it possible to explore them step by step. Recently, manufacturers of production tools and DAISY players have been incorporating the processing of MathML in DAISY books in their software.
10.5446/21329 (DOI)
I'm from Kyushu University; I am the head of the Infty Project and also of a non-profit organisation, the Science Accessibility Net. The Infty Project is a research group on mathematical information processing: we mainly develop OCR software and an environment for dealing with mathematical document information, including user interfaces and format conversion. On the project website you can find related papers and some databases that are useful for developing systems concerned with mathematical documents. Science Accessibility Net is an organisation to help visually impaired people in scientific fields: we release and maintain the software as products, organise special events for visually impaired students, and help students and professors working in science to make documents accessible. Today I would like to speak about a new aspect of InftyReader. InftyReader is OCR software that recognises mathematical documents and outputs the result in various formats — currently XML including MathML, LaTeX, and Microsoft Word 2007. It can be used to convert scanned images of papers, or PDF files, into IML, which is our own XML format. We also have ChattyInfty, which can read the recognised documents aloud, so blind people can read and edit scientific and mathematical documents using speech output. On the other hand, by exporting the result to LaTeX, MathML or Word, the same software can be used to convert documents between formats, and we are currently working on Braille output and other output formats as well. Let me show one short demo. Here is a scanned page of a geometry textbook. I'll have our software recognise it; it takes about ten seconds for a page. Here you see the original image and here the recognition result of the page, which can easily be corrected with key operations in our editor — this is the Infty editor that we released, and ChattyInfty is this editor combined with speech, so it can read mathematical expressions aloud; I will skip that demo. Back to my presentation. InftyReader's recognition flow consists of several steps. The first task is layout analysis: automatic segmentation of the page into text areas, including maths, and figures and tables. Then line segmentation — this task has some extra difficulty because of the existence of maths expressions, but I'll skip it. Then comes recognition of the ordinary text and separation of the maths expressions, which is done simultaneously; then recognition of the extracted maths expressions; and finally, after an analysis of the logical structure of the document, output to the various formats. Today I will concentrate on that middle step: recognition of ordinary text and separation of the maths expressions. The current version of InftyReader uses two commercial OCR engines internally, and the Infty Project has its own original OCR engine, so the text part is recognised simultaneously by three OCR engines. Combining different OCR engines is very effective for improving the recognition rate, as I will explain later. Today I would like to describe our new attempt to make the system recognise different languages: the current version of InftyReader recognises Japanese documents and English documents.
To some extent French and German are OK too, but for other languages you will find many misrecognitions, and for Russian documents, or for Eastern European languages with many accented letters, some characters cannot be recognised at all. So we are now trying to include another OCR engine: we have reached an agreement with ABBYY, and the implementation is going on — I will demonstrate the current state later. Including a different OCR engine is not straightforward. Even a very good OCR engine cannot be used directly, because it always returns strange results for mathematical expressions: the engine simply tries to recognise everything as ordinary text. So we have to post-process its output. Commercial OCR engines currently recognise ordinary text very well, but if the document contains maths expressions, the misrecognition rate even of the ordinary text near the maths increases. That is why we combine different OCR engines, so that they correct each other on the ordinary text. For example, here you can see the original image of one word — part of a Japanese address — and the recognition results of the three OCR engines. The correct spelling is M-I-I-H-A-M-A-G-U-N. The first engine misreads the double "i"; the second recognises the first part correctly but returns "r n" instead of "m" further on; the third recognises "M I I H" but cuts the "h" into separate fragments. From these results we can still obtain the correct word in the following way: we first over-segment the original word image at all plausible cut points, and then, among the different segmentation and recognition possibilities, we select the best one, for instance by a voting method. This process can be implemented easily using dynamic programming. The algorithm itself is simple, but we have to take care with the OCR scores: the score an engine returns is supposed to be a kind of reliability of the character recognition, but at the current stage of the technology no engine's scores are really reliable, so we have to balance the scores of the different engines. That is the problem we have been working on since we introduced the new engine, FineReader. Here you can see another problem: the recognition result of text containing maths, produced by a commercial OCR engine. The text part is recognised correctly in almost all words, but for the maths expressions the engine returns nonsense — it usually does not reject them. So we have to throw those strange results out. To do that we estimate the baseline of each text line and reject the commercial OCR output whose size and position are not compatible with it. In this way the strange parts are removed; some characters remain that should be sent to the maths-expression path, and for deciding those cases we basically use language information. Now a demonstration. This is the new version of InftyReader. The appearance is the same as the current version; the only difference is the language selection, which in the current version offers Japanese and English — here you can select Czech, for example, and select a file. I have prepared a Czech sample: you can see one page of a Czech paper. This paper contains maths expressions, but that is not a problem. Let's start the recognition — this dialog message has not been corrected yet.
The implementation is still going on, but you can see that it is working. This part is recognised as an image, this is the title, and the body is Czech text, recognised basically very well. You can observe some blue characters, though: blue means that InftyReader judged the character to be a maths expression, and at the current stage isolated letters are treated as maths, because a Czech word dictionary is not yet included in this version. To improve the result we need word dictionaries — in InftyReader we do not need a full dictionary of all the words of a language, only a short dictionary of its short words, and that is sufficient for our software. Thank you. [Chair:] Thank you for your interesting presentation. Are there some questions? [Speaker:] One message first: we do not yet have dictionaries for the different languages, but InftyReader needs to recognise very many languages, so to adapt it we need lists of basic words. If some of you could send us a dictionary of your own language, we would be very happy. [Question:] For very simple inline maths with no baseline shift, like "a + b", how do you recognise that it is maths rather than text? — We compare with the result of the Infty OCR and use the reliability of its result: if Infty returns a result with sufficient reliability, we treat it as maths. For text it is harder: a single letter such as "a" can be an English word or a mathematical variable, and a letter like "f" may be part of a function name, so deciding whether such a character is maths or text is a difficult problem, and for that we need dictionaries or some kind of linguistic information. [Question:] When you are using multiple recognisers, do you use some sort of Bayesian voting system to decide which answer to accept — how do you decide which is best? — I did not explain the algorithm in detail, but we use dynamic programming: the selection of the best-scoring path from the many possible paths. [Chair:] Further questions?
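That last question touches the core of the method sketched earlier: over-segment the word image, pool the readings proposed by the different engines, and pick the best-scoring path with dynamic programming. The following is only a toy illustration of that idea — the segments, the scores and the voting bonus are all invented for the example, it is not InftyReader code, and it deliberately ignores the real difficulty the speaker mentions, namely that scores from different engines are not directly comparable:

```python
# Toy sketch: combine candidate readings from several OCR engines by
# over-segmentation plus a best-path search (dynamic programming).
from collections import defaultdict

# Each candidate: (start_column, end_column, recognised_text, engine_score)
# for a word image that is 10 columns wide. Values are made up.
candidates = [
    (0, 3, "rn", 0.60),   # one engine misreads "m" as "r n"
    (0, 3, "m",  0.55),   # two other engines read "m"
    (0, 3, "m",  0.50),
    (3, 5, "ii", 0.70),   # two engines agree on the double "i"
    (3, 5, "ii", 0.65),
    (3, 5, "u",  0.40),
    (5, 10, "hamagun", 0.80),
    (5, 10, "bamagun", 0.45),
]

WIDTH = 10
VOTE_BONUS = 0.2   # reward readings proposed by more than one engine

# Merge identical readings of the same span and add a small voting bonus.
merged = defaultdict(list)
for start, end, text, score in candidates:
    merged[(start, end, text)].append(score)
scored = {key: max(v) + VOTE_BONUS * (len(v) - 1) for key, v in merged.items()}

# best[i] = (best total score, text) covering image columns 0..i
best = {0: (0.0, "")}
for i in range(1, WIDTH + 1):
    for (start, end, text), score in scored.items():
        if end == i and start in best:
            total = best[start][0] + score
            if i not in best or total > best[i][0]:
                best[i] = (total, best[start][1] + text)

print(best[WIDTH][1])   # -> "miihamagun"
```

In this toy case the agreement bonus is enough to prefer "m" over "rn" and "ii" over "u", so the correct word is reconstructed even though no single engine read it correctly on its own.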
InftyReader is the software developed in Kyushu University to recognize mathematical documents including various formulas of pure and applied mathematics. It uses commercial OCR engines to recognize ordinary text parts. One of the crucial points to keep high accuracy of the recognition is the segmentation of the text area and math expression area to combine commercial OCR and InftyOCR. Recently, we are trying to use the OCR engine of ABBYY FineReader to adapt InftyReader to various European languages. In the talk, I will briefly sketch the methods to combine different OCR engines and will give some demonstrations of the current state of our New InftyReader.
10.5446/21330 (DOI)
Good morning. I'm going to talk about UMCL — once I find my mouse — a library that provides Braille transcription for mathematical applications. I will start very briefly by saying that access to mathematics has always been particularly difficult for visually impaired people — everybody here is convinced of that, I guess — and that assistive technology can provide useful support. The problem is that if you look at the many applications developed over the last ten or twenty years that support Braille, each is tied to one particular Braille code. So the idea of UMCL was to provide a programming library that allows assistive-technology developers to include support for the various Braille codes in their applications. But before starting, a small free advertisement that was not planned when I wrote the summary — something different, but in the same field. We have also developed a plug-in for OpenOffice that allows you to export DTBook XML, and very recently this exporter received an award from the OpenOffice Community Innovation Program. If you are interested, it allows you to make accessible documents; you don't need a particular version of Word with a particular version of an exporter, just take the latest version of each. But that is not today's topic, so I can talk about it later. So, now about Braille — mathematical Braille. Louis Braille was a musician, so there is a musical Braille code that is used in most countries; but he did not design a code for mathematics. A hundred and fifty or two hundred years later we have — I counted recently — about twelve or fifteen different mathematical Braille codes. In France we still have to deal with three revisions, because people who learned the code in 1971, for instance, do not necessarily want to learn the 2001 version, and so on. In Germany there are maybe as many Braille codes as there are Länder — that is a bit exaggerated, but only a bit. British and American people, of course, do not use the same code; that would be too easy. There is a code in Italy, in Japan, a Spanish one that is also widely used in South America, a Chinese one, and so on. Now let us look very quickly at the state of the art in assistive technology for maths. There are three main classes of products: products that convert documents to produce Braille; applications that help to read or understand maths, like, for instance, MathPlayer; and, more recently, several experimental applications that help with manipulating, calculating and solving — doing mathematics, not only accessing it. If we take a quick look at the converters: a couple of years ago somebody from the support centre for visually impaired students at my university in Paris asked me, "You know about maths for visually impaired people — do you know a good converter from LaTeX to Braille?" I said yes, I know a very good product, it is called Labradoor. Labradoor was made in Austria and produces Marburg code from LaTeX. But no student in my university reads Marburg code — so it is useless for French students, and very useful for German and Austrian students. Of course, there is also something that goes from mathematics to French Braille.
There are also tools going from mathematics to other Braille codes — a converter to the Nemeth code exists somewhere, for instance — but each of these tools produces one specific code. In the state of the art there is also a very good product, Infty, which does OCR — recognising mathematical formulas from paper or from images — and can output MathML, LaTeX, Braille and so on. Quickly, in the reading-and-understanding category: my group has done some work on a reader for maths; we have seen that MathPlayer already supports speech synthesis in a lot of different languages, and the next version will support some Braille too; and the MathGenie from Karshmer provides enlarged graphical formulas and speech synthesis, among other output modes. In the last category I mentioned, doing maths, there was the Lambda project — Lambda is a bit different because it has its own code, and it will be described later today — and, together with the University of Linz, my group in Paris made some experimental software, MaWEn, which will also be described later. MaWEn uses UMCL for its Braille output. So, now let's see what UMCL is. The idea is to build a generic framework that allows different applications to use different Braille codes, without having to know, when the application is developed, which Braille code it will be used with. The architecture of the system is very simple: there is a main module, and this main module talks to the application. On the other side, the user can install various output modules, which perform the conversions he or she actually needs. The output modules are completely independent of the application; they are linked to it only through the main module. When the application is started, it automatically detects which modules are present on the system, so you can install an application that supports UMCL and later install a new module: it will be recognised, and you do not need to change the application. The main goals were, first, to ensure the interoperability of all converters, so that any converter can be used in any application. Beyond that, we developed a model of Braille translation that supports the advanced features needed for the "doing maths" case. If you just need to convert — you have MathML or LaTeX input and you want some Braille — that is one-way and quite simple. But if you are working in an application where the user needs to navigate, to collapse a branch of the maths expression, to get a shortened view of the expression, then you need additional features. And if you want synchronisation — for example, the user points at a term, say an "x", on the Braille display, and on the screen the teacher sees that same "x" highlighted with a different background in the graphical formula — then the two versions of the formula need to be linked. UMCL supports this in its model. It is based on a simple intermediate language, which also makes things easier: a smaller version of MathML — it is valid MathML — in which several structures are forced into one single form. We call it canonical MathML.
The good thing about canonical MathML is that you then need only one converter per Braille code. You do not need one converter from LaTeX to Italian Braille, another from MathML to Italian Braille, and so on — everything goes through this single intermediate language. From the application developer's point of view, the application only needs to talk to the main module; it is better to have output modules installed on the computer for testing, of course, but theoretically you do not even need them. The library is usable from most programming languages: it was written in the most portable C possible, plus XSLT, and the C library can be used from many languages. Natively it can be used from C and C++, and you can write wrappers for other languages, which is quite easy nowadays; we already provide wrappers for Python and Java in the package, and soon for PHP. Depending on the developer's needs, output modules based on XSLT stylesheets can also be used directly as stylesheets. As I said, the main module automatically detects the existing modules, so your application can have, for instance, a settings box whose list of available modules is read automatically from the system: you install a new one, restart your application, and the new module appears in the list. From the user's point of view, if you use a UMCL-enabled application you just install the modules you need — these can of course be provided with the application, but the good thing is that you can still install new modules afterwards. And from the module developer's point of view, anyone, in any country, can build a new module independently of the main module's development; you just have to follow the interface. Now about the licence: it is open source. Technically I created the SourceForge project recently; the source is not yet on the site, but it will be soon — as soon as I clean it up. It is free to use, free to modify and free to redistribute, according to the GNU Lesser General Public License, so it can be used in commercial software as well as in free software — there are some free-software licences that forbid that, and we decided not to use them. As for the existing modules: on the input side, the essential one is MathML — we can input MathML, which is the minimum we need to start. We are currently working on a LaTeX-to-MathML input module, using an external library for that; it is under development, it works on some platforms but not yet on all, and it will soon be available on our site. There is also a first version of another input module, developed at the University of Leeds — not yet distributed, but working — and an earlier module developed in Japan that converts IML, one of the Infty formats, to canonical MathML. On the output side, we have one large module for French, with a parameter selecting the revision you want — 1971, 2001 or 2007 — and a MathML-to-Italian-Braille module, which is working pretty well.
There is a MathML-to-Marburg module: this one is not yet complete up to Gymnasium level, but it already covers what younger students need. We have canonical MathML to Nemeth, which is now complete, and canonical MathML to British Braille, which is in the last phase of development: it works, but the results still need tuning. That is mostly what I wanted to say about UMCL. The best way to help, as Neil said just before, is this: if you are working with an application that does not support Braille, or supports only one Braille code and you need another, tell the vendors that this library exists and that they can use it. We also have an online demonstration where you can paste some MathML and see the Braille, either on a Braille display or rendered as graphics on the screen, depending on what you have; it will be put online very soon, in the next month. If you are interested, you can look on my website, where I have put all the information about UMCL. Thank you very much.
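To make the plug-in architecture described above a little more concrete, here is a small, self-contained sketch of the same idea in Python: a "main module" that the application talks to, with independently written output modules that register themselves and can be added later without touching the application. Every name here — the registry, the decorator, the two fake converters — is invented purely for illustration; this is not the actual UMCL API, which is a C library with XSLT-based modules.

```python
# Toy sketch of a UMCL-style plug-in architecture (invented names, not UMCL).

OUTPUT_MODULES = {}          # Braille code name -> converter function

def output_module(code_name):
    """Decorator an output module uses to register itself with the main module."""
    def register(converter):
        OUTPUT_MODULES[code_name] = converter
        return converter
    return register

def available_codes():
    """What an application could show in its settings box."""
    return sorted(OUTPUT_MODULES)

def to_braille(canonical_mathml: str, code: str) -> str:
    """Main-module entry point: dispatch to whichever module handles `code`."""
    if code not in OUTPUT_MODULES:
        raise ValueError(f"no output module installed for {code!r}")
    return OUTPUT_MODULES[code](canonical_mathml)

# --- two fake output modules; real ones would live in separate packages ----

@output_module("fake-french-2007")
def french_2007(mathml: str) -> str:
    return f"[French 2007 Braille for: {mathml}]"

@output_module("fake-nemeth")
def nemeth(mathml: str) -> str:
    return f"[Nemeth Braille for: {mathml}]"

if __name__ == "__main__":
    print(available_codes())
    print(to_braille("<math><mfrac><mi>a</mi><mi>b</mi></mfrac></math>",
                     "fake-nemeth"))
```

The design point is the one made in the talk: because the application only ever calls the main module and the canonical MathML form is the single intermediate representation, adding support for a new Braille code means installing one new module, not changing any application.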
Over the past decade many applications have been developed to aid visually impaired people doing maths. Unfortunately, most of these applications work with only one Braille code, the one in use in the developer’s country. For example, the support centre of my university recently asked me if I knew of a piece of software allowing transcription of LaTeX mathematical documents to Braille. Indeed I do: Labradoor, developed by the University of Linz, does exactly that but produces Marburg code, which is of no use to French students!
10.5446/20942 (DOI)
[Transcript unavailable: the automatic transcription of this talk failed — the audio was processed as Welsh — and the resulting text is unintelligible. The talk, summarised in the abstract below, described the recording of reflection holograms of John Harrison's H4 timekeeper at the Royal Observatory, National Maritime Museum, Greenwich; the few recoverable fragments name Jeff Blyth and Mike Medora among the participants.]
On the evening of 13th March 2008, between the hours of 6:00pm and 2:00am, five reflection holograms were recorded of John Harrison's fourth timekeeper 'H4', at the Royal Observatory, National Maritime Museum in Greenwich, London. Arguably the most important timekeeper ever made, this watch finally solved one of the greatest scientific problems of its time, that of finding Longitude, and marked the beginning of accurate global positioning. In recent years public awareness of the watch has witnessed an unprecedented level of popularity, together with a string of authoritative writings including the release of Dava Sobel's book, 'Longitude', with introduction by NASA astronaut Neil Armstrong, a filmed drama adaptation and even a television sitcom 'Only Fools and Horses' where viewing figures reached a record twenty-four million. The watch, its history and its place in history, remain subjects of fascination and curiosity. Now its journey to hologram is traced in this paper through the events of that March evening.
10.5446/20944 (DOI)
Well, thank you all for being here; I'm happy to have the opportunity to be with you. When Jonathan asked me to give this talk, I began to think about how different things are now from twelve years ago at the Nottingham conference — and everything has changed. One thing that came up at Nottingham was that we still had scholars saying we couldn't yet really talk critically about holography; there wasn't any kind of critical vocabulary for it yet. That didn't seem to make sense at the time, because the holographers who were there were making really strong conceptual pieces that did have their own language, and there was nothing in the holograms themselves that precluded any kind of formal discussion of them. So it was a really, really odd situation. One of the fundamental things lacking then was a critical dialogue around other new media that would set the stage, or help the discussion of holography along; holography was very much on its own. I think conceptualism in holography was actually quite far advanced compared with a lot of what people were doing in digital art at that point — back when they still called it digital art. So what has happened in the intervening years? Huge change: in education, in the demands of conceptual artists, in theory and criticism. In the mid-nineties — at least in the United States — very few people were reading continental theory, cultural theory like Baudrillard and Umberto Eco and Deleuze, people who talk about holograms and about hyperreality. If you learned about those things at all in your cultural-theory courses in the mid-nineties, you got them in graduate school, not as an undergraduate. What has changed is that an art major focusing on new media, even at an ordinary college or university rather than an art school, is now exposed to the concept of the simulacrum — to the concepts that would have made holography a lot more understandable had critics and curators been aware of them. At the museum a few weeks ago I was following a group of college students around, and they were bringing up these ideas. They weren't always quite sure what they were talking about, but they were familiar with many of the notions underlying postmodernism and posthumanism: that what they see around them — or what they prefer to see — is mediated, whether through TV or YouTube or whatever, and that not only is that often the preferred way to experience the world, it is often considered more real than an actual experience. This is one of the central ideas of postmodernism, and theorists like Baudrillard used holography as a metaphor for it — as one of the perfect examples. This theoretical movement has gone through two generations since twelve years ago. The perfect simulacrum Baudrillard would have been talking about would have been something like those amazing French pieces where there is the sense that the simulation is so perfect that, in our understanding of the simulation, the real object disappears — is somehow destroyed, or our concept of it becomes unimportant. Holography fits this perfectly — this perfect sort of use of holography.
The idea of posthumanism — our physical experience of the world — has moved on to embrace the notion that, yes, what we see around us is simulation, but there never was a reality behind it in the first place. That comes mainly through the understanding of digital space, net art and things like that, and it is the reality our college students have grown up with. So it is a really interesting critical and theoretical moment for holography to be enjoying a rebirth. What I want to do in this paper is think about the kinds of problems holography continues to encounter, what questions museum displays pose for holography today, and what questions holography now poses to the viewer that are different from ten, twelve, fifteen years ago. What I'm showing you — sorry, these are really dim images — is from the Musion website, the Musion Eyeliner, which claims to make holograms. Actually it is the perfect "hologram" for the digital, second-generation posthumanism. This is a band called X Japan, one of the most important and most influential Japanese bands. They broke up, very famously, about ten years ago, and their very charismatic lead guitarist, hide, unfortunately killed himself. But, as bands do, they wanted a reunion last spring, and for that they needed their most charismatic member, so they commissioned a "hologram" of hide to be on stage with them. Apparently when he first appeared he was a little dim and black and white, but eventually he appeared in full colour. This is almost a perfect example of the new posthumanism: first of all, he is not a hologram at all, it is just a projection system; but he is approximately the right size and shape, he is in motion, he is playing guitar to a recorded track. And the audience at this concert was young enough never to have seen the band originally — these guys are all in their forties now, the girls in the audience are about fifteen. So this is real to them; they never experienced the original, and it is just as real as the original would have been. It is a really fascinating thing. What we have now is the word "holography", the name, used and applied to anything that fits our idea of what a hologram should be. Twelve years ago that would really have irritated me, but looking at it now from a cultural-theory perspective, it opens up a whole line of conversation for artists and other holographers, because many, many holograms are more in dialogue with something like this than with other holograms. We see this in a lot of artworks as well, but I'm going to skip that for now — sorry, I keep changing my PowerPoint. This is what we see in large shows — and I'm not referring to Jonathan's shows or Matthias's shows — because if you think back to many of the very large-scale holography shows of past years, the juxtaposition of the art and the commercial and the scientific was really disorienting for the work, and there wasn't necessarily much explanation of what the holographer's intent was. And clearly, when we look at the art holograms in many of those shows, their context was not really any internal holography dialogue; it was other conversations in the art world at large.
So I think that even today most art holography has more to do with the larger art world than with an internal discussion among holographers. The critical and theoretical world has really changed, and so, certainly, have the students and the curators. Curators now — I don't want to generalise, but I find them much more savvy — want something to sink their teeth into. They want to be able to write about what they are showing, and they need a conceptual, theoretical or critical hook of a kind that I don't know they asked for in past years, though it would probably have helped holography. And the students, who are going to be the new holographers: there is such a conceptual grounding in most art programmes today, and it is expanding — you get fewer and fewer Bauhaus-driven art schools nowadays, and much more conceptually driven ones. These are art students who go out to solve problems: they have a concept they want to express, and they will look to any medium that solves that problem for them. The people who first got into holography were coming from traditional media; they loved holography and wanted to see what they could do with it, and they weren't necessarily approaching the medium with an outside problem to conquer. I think that shift of focus will change the kind of work we see produced in holography. I spent some time at the MIT Museum over the last weeks and months, because it is a really interesting show, and one of the things I wanted to think about was: in what way is there some intrinsic language to holography — do holograms actually speak to one another, or do we just imagine they do because it is all the same medium? The MIT show is really interesting because MIT is unusually constricted in its mission: it has the Museum of Holography collection and many other holograms, and it wants to show the technology, because it is the MIT Museum, but it also has to make a connection to MIT through the exhibition. So we see the MIT collection — which is now quite a small room, about six walls of holograms — with a very small didactic label saying this is the largest and most comprehensive holography collection in the world, and then describing the amazing visual qualities of the holograms themselves, texture and things like that. Very, very simple, nothing conceptual. So you are not given any framework for entering the show, and in that setting it becomes a conceptually rather barren experience — which is very interesting, because then you can focus on what you actually get from the works themselves. You enter the holography room — has anybody been there recently? It used to be a much larger room with the permanent collection plus some changing exhibitions; it is a much smaller room now. Usually you will have come through the robotics wing or the Harold Edgerton wing, past the sculptor-in-residence who makes these really cool mechanical interactive things, and all of those displays are heavily didactic: the importance of the robotics, the importance of Edgerton's work, lots of material, lots of information thrown at you.
Then you enter the holography room and there is nothing — just the holograms, and it is very, very dark, with only the holograms lit. It is a very different experience from the whole rest of the museum. And I'm going to sound like I'm slamming the show, but I'm not, because it is actually very interesting; I'm just not sure it does what it is trying to do. When you enter the exhibition you have four white-light transmission holograms hanging in the space, and a lot of you will be really familiar with these: a Setsuko Ishii, a Steve Fenton, and two Rudie Berkhouts. They are there, beautifully lit, in space, with little children running around and batting them so they swing, which is a little nerve-wracking. But they make sense together, visually, in that they all have some qualities in common: colouration, spatial orientation, abstraction. So in terms of illustrating the collection, you can see what this display means: what can you do with this kind of hologram, what kind of space can you create, what kind of colour can you create. Very, very simple. But the show becomes increasingly incoherent, and in a really interesting way. Most visitors I watched, if they weren't with small children, would start with that wall, then often turn back, read the didactic material, and then walk the wall across from the white-light transmission holograms, tending — if they had started with the didactic material — to go from right to left. So we start with a Marie-Andrée Cossette hologram; then a John Kaufman; the National Geographic cover; and — is that Jody Burns? No, Greg Cherry — I'm sorry, brain cramp. Now, these are really interesting, because it is hard to know, after the coherence of the white-light transmission display, what these holograms are asking us to figure out. The Marie-Andrée Cossette: you are confronted with this strange little image — it is not necessarily one of her best holograms, and in person it is quite small — and it is actually really skilfully coloured, in various shades of beige and white, which is very difficult to do. But what it seems to be is an accumulation of objects, mostly virtual — the ladder sticks out a little bit — objects taken from something like a budgie cage or a fish tank, plenty of little household objects, with no reason to be memorialised in a hologram. It is clearly a commercial image, or simply designed to show off the control of colour and texture: the wooden ladder looks wooden, the sandcastle in the middle is sand-coloured and grainy so you really catch the texture, and there is a white plaster mask in the background which doesn't quite read as white. So there is skill there, but the subject matter is so amazingly vacant that it can be a little perplexing, and people tend to pass over it quite quickly. Now, John Kaufman's, on the other hand...
This is called Stone House, or rather Stone Room, and it is very dimly lit — his holograms are usually quite intense, so this is a real disappointment — but I think it is there to show texture. I think this wall begins as a formal display of what holography can do, but it is really hard to tell, because these stones end up looking like kind of desiccated waffles: they don't look like stone at all, they just look strange, and they are very dim, so you peer into the murkiness and try to make something out of it — unlike his actual stone holograms, which are quite legible, communicate clearly, and are interestingly ambiguous and give you a lot to think about. Then you go to the National Geographic cover, and you can already see how disorienting this is, because you are trying to figure out what you are being taught by it: "the laser — splendid light for man's use", with its very American little eagle. So there's that. And then you move to Greg Cherry's telescope, which is also interesting, also made with a lot of finesse: you look through the telescope and see a very strange two-dimensional drawing, almost like something from a middle-school textbook — a view from outer space, a moonscape, little spaceships, things like that. It's clever. But when you look at the four holograms together, across from the white-light transmission holograms, the two walls don't seem to be on the same purpose. The white-light transmission holograms clearly have some sort of dialogue with each other, even though they have different intents; formally, these four do not. It's a bizarre combination. Now, the most disorienting wall is the next one, which comes almost as a shocking jolt. You have Patrick Boyd's Bartos Takes the Down Train, which you read from right to left, so you are proceeding past Greg Cherry's piece and turning the corner; and then a Fritz Goro — a nice, crisp reflection hologram of a few objects. Now, Fritz Goro was an amazing photographer, a great nature and science photographer, and he has MIT connections, which is fine, but of all the holograms in the collection there is no particular reason for this one to be there. I'm sounding snarky, but he made really boring holograms; I think he was experimenting with the medium, and that's about it. So when you have those two together, what is the audience reading from it? You have Patrick Boyd, who reads very well today — you can apply all sorts of cinematic theory, all sorts of visual and cultural theory to his work, and it works; the Goro hologram invites nothing like that, it is a very banal image. So you get increasingly disoriented, and what becomes more and more apparent as you walk through the show is that these images are not really communicating on any level. What do they have in common? They have some illusionism; they do force the viewer to move and interact in a certain way; but that never reaches a conceptual level with any of these holograms. And then you turn the corner into the white-light transmissions with the Jody Burns molecule, so those three are together — which is another really odd juxtaposition. Sorry, I'm a bit out of order here.
Let me go back to the – sorry, I changed this so many times, I shouldn't have done that. So what visual language is occurring here? Sorry. Now let me go to the portrait wall. There's a very interesting portrait wall, which looks like this. Basically I've sandwiched them a little bit closely together here; they're actually spread out on one wall. We've got the portrait of Yuri Denisyuk, Margaret Benyon's Tigirl, a portrait of Keith Haring, and then a portrait of Harold Edgerton. You are walking from left to right when you see these. And Denisyuk makes sense. He's looking out at you. He's mostly virtual. Tigirl is totally virtual – you've got an interaction between the face and the surface pattern. But Tigirl, very interesting. Keith Haring is actually quite different – he's in real space. He seems to be sticking his head in a funny way through the glass plate. So, if the museum is intending to give you an understanding of what you can do with portrait holography and give the viewer very different images to respond to, it works. Then you get to the Edgerton image. It's a small stereogram and very, very strange, in that I think it's warped. It's in bad shape. It's warped at the bottom so that as you move across it, there's not much parallax. You move across and it's very warped at the bottom, so it looks like someone's pulling his tie as you move across it. So it almost seems like a joke at first. But there's no reason for it to be there – it's really in bad shape – except that there's an Edgerton connection to MIT, of course. So that connection has been made. Now, let's go to the fourth wall. I'm just giving you an idea of what curators are thinking about when they're confronted with some of the strange juxtapositions we get in holography. You have another wall here, again, which is weird, which is the skulls on the left – which are kind of cool, because you walk past and the side of the skull disappears so you can see inside. And you've got Melissa Crenshaw's very, very dimly lit hologram of light bars in the center, and then another Fritz Goro image for Life magazine, which is a deep space image that really doesn't do much except, again, put geometric forms in space, and things like the word holography. These are not communicating much, many of these, and a lot of it has to do with context. Now, the thing that's different about the shows that are circulating now, like Jonathan's, is that when you look at the show, there is an organization. They have art and commercial and scientific holography there together, but they're arranged in a way that there's some sort of communication among the works. Now – I think we're on the way to solving the curatorial issues, especially with the change in criticism – when the windows show was advertised for the MIT Museum... has everybody seen that call for work, most of you? Can't tell. No? Okay, well, there's a call for work for the MIT Museum, which has these wonderful windows now – it's been remodeled – a call for a holography show to be put in these windows. And at first I thought, oh God, no, it's going to be like this. It's going to be all sorts of different kinds of holograms pushed together, and it will ultimately drain holography – the holograms that do have a conceptual context will have that drained away by these weird juxtapositions. But in fact – and this is where reading the small print matters – no, the curator was very, very careful. And I think – are you still here? There you are.
To say, to very explicitly say, these are, he's displaying the holograms as a cultural medium. And that immediately suggests to most artists in the media that there's a conceptual, there's a very specific conceptual context. It doesn't necessarily go with art, it doesn't necessarily go with science, it doesn't necessarily go with commercial holography. But it's, you know, studying the medium as it functions within the culture. And that's a way to transcend all these difficulties we've had with the nebulismness of holography within a sort of critical framework. So I think it's going to be a wonderful show. And I don't know what you, you've probably had lots of submissions, deadlines. I'm curious as to whether you've got a, I should have asked you before. I mean, it's around 30. But it was a number of things in any exhibition. It's about encouraging students of it as well. I mean, I think, I'm not sure if the exhibition would have chosen as long to visorize the medium to make recommendations for the medium. Then the museum is one of the kind of final considerations on the museum. And the installation back is the final one. And will the Tiffany piece be enough? That was planned from the beginning. We used the process of trying to work out the moment to not end the exhibition. We have a wonderful sort of full circle kind of experience. So just in conclusion, it's, there is nothing inherently in holography that insists that it should be exhibited, hologram should be exhibited together unless that was the intent of the maker. I mean, there doesn't seem to be, there's a general formal language, but it's not unique to holography. It's addressing things that are found in media, traditional media, other media. And I think in many exhibitions and trying to make the exhibition coherent, we end up making it incoherent because there's no supporting foundation beyond the medium, which is, you know, was gone by the 1960s, the Saigiro medium specific world. So obviously, I'm just raising questions about this and I haven't come to any kind of conclusion yet. And so what I'm hoping is that during this year, before we all end up in China together, hopefully that maybe we can start a dialogue on this and I'd love to hear your responses and your opinions on some of these things. Any questions? Nope, okay. Thanks. Thank you.
Since its inception as a display medium, holography has navigated a confusing channel through traditional and contemporary visual languages in art. Holography is unusual and frequently confounding in its ability to access a number of medium-based aesthetic dialogues – those of cinema, video, photography, installation, sculpture and painting. The medium's surprising and delightful formal properties and complex relationship to other mediums have often overshadowed the significant conceptual content of holographic works. This paper looks at the ways in which the multiple, often simultaneous visual languages of holography continue to pose challenges in exhibition and criticism for both holographers and curators. What strategies can holographers and curators employ to encourage the exhibition of holography and foster a contemporary critical response? Further, now that new-media aesthetics and criticism are ascendant, and have begun to address many of the critical questions that have confronted holography over the past 40 years, in what ways can holography today engage and find a place in new-media discourse?
10.5446/20945 (DOI)
Good afternoon. I'm really pleased to be here, and I thank Jonathan for the invitation to tell you something about my experiences with holography, especially the continuous struggle with museums and the ambivalent image of this fascinating medium. I guess I was asked because I got in touch with a lot of different kinds of museums, and, by the way, I founded my own museum very early, in 1979, the first one in Europe. So it's a subject which has really interested me from the beginning, and I thought I would structure my talk – which in my opinion fits perfectly with Andy's talk, because there are a lot of similar impressions, a lot of similar thoughts from a different perspective. Therefore I would like to start with the structure which I put up. There was always a certain hurt, a certain protest, from our great, famous art holographers, because it was so difficult to get into real art museums, into the famous museums, and I want to talk about art museums and all the other museums which are around for holography and which have had a connection with holography. Holograms in the art museum are as rare as a pearl in the oyster. That means you really never find one in one of the big, famous, recognized art museums. Of course there are a few: there's Harriet Casdin-Silver, there's Doris Vila, there's Nancy Gorglione, or there's Dieter Jung, and Sally Weber has very nice installations. Yes, there are artists who are able to convince curators and to have their work in real art museums, but these are very, very few compared with the number of holographers which we have. In the big, famous art museums – the Peggy Guggenheim, the Tate Gallery, the Musée Beaubourg, or in Stockholm – there are no holograms. That means holograms haven't yet arrived at the premium end of the art world. Therefore, in my opinion, we have to be a bit modest and we have to wait, because the way for photography took about 160 years since the first photograph was shown – I think it was 1839, from Niépce, at the Academy in Paris, and then we had the daguerreotype – and I would say it was only at the beginning of the sixties in the last century that photography started to be recognized as an art medium. I guess we will be faster with holography, but it's still a long way to go to be recognized as an art medium. If you look at this – it's the same picture which I have already seen in Andy's talk, it's Rick Silverman's; I still have three of them if you are interested – that means if you look at museums, there are probably less than 10 percent of the artists who are living, who are around, who are represented in the museums, and in the museums it's always only a very, very few pieces which are on the wall. That means we have to accept that museums are a touristic attraction today, and that there is very high competition between the museums and the curators over who really has the best goodies, who has the best art pieces – and holography is not yet recognized as premium art. Therefore it's still a big hurdle to come into a recognized, well-known art museum, and therefore we find art holography mostly in very small museums, in museums which are working in a niche or in the province. That's a part of the story. So: for an artist who is seen as museum-ripe, is it a disadvantage if he works with holography?
No, if it's a great artist it isn't but for curators it's a problem because curators to like if I like to have a huge range of art world and I never met a holograph who had a comparable art body a volume like it has a painter. That means if you have a holograph which has about 200 holograms what does it is in comparison with Picasso? We don't have the name Picasso, there are a lot of Picasso, Gerhard Richter, that means people, artists who really are much more productive in a certain sense and therefore it's easier to get the quality which you are looking for. That's a problem and I think we have too less really highly gifted holographers because most of them are coming with a technical approach. That means we have some even here between us are sitting some highly gifted holographers like the speaker before. Therefore the way into the technical museums and the science center was much easier for the for holography. Science and the technical museums always have the fame to be a bit boring, to be a bit dry but holography is fascinating no question about it and holography still is competitive with the other new modern virtual media which I know from my own experience putting up this year more exhibitions than in the last 10 years. That means it still is the same phenomena which works. Therefore the technical museums really liked to put holography up because it increased the attractivity from the museums. Starting with the problem of the creators in the art world as you know we had the holography really started without history, without past directly in the museum. I did the same thing that means I don't want to blame anybody for it. For me it was purely a PR idea. I thought if I have a museum for holography I get more PR than if I start with a holography gallery. That's a very simple reason and it worked. Therefore we got such an inflation from holography museums without content. That means at the beginning or in the mid 80s we have had hologram galleries museums all over. In London the Trocadero it started at first as light fantastic and nearly each bigger city there were huge holographic exhibitions and it was the decade in which I had about more than 250 exhibitions worldwide. Looking in the past of the medium I think one of the first who I know was Carl Friedrich Reuters, who worked together with Hans Bjelkaken and Carl Friedrich who is a very recognized and famous artist. The sculpture which you see here is in the piece Non-Violence. It's in front of the UN building in New York and it's a big metal sculpture and we produced it as a hologram as well. Carl Friedrich started 69 together with Hans Bjelkaken in the opera in Stockholm with a laser state set. Then they started with the first laser transmission holograms and later Carl Friedrich worked together with Dali in New York and with Peter Glaudius. They made this famous hologram from Dali in the multiplex technique. I think this was the beginning when the first time holography crossed the art world. Hans, I'm right? He's coming on the next page. That means my own start with a holography was completely by accident. I studied philosophy and literature to make sure that I never will end up with an ordinary profession. Therefore I was quite open and when I came the first time to New York I passed by the window from Cartier and you all know this fabulous story. It's a PR story. 
It's one of the best PR stories, but it's really true – with the arm of the lady who held the diamond collier out of the window, and the people really were standing in front of the window and they freaked out, because nobody had seen a hologram before at this time. The same with me. I didn't know the technique, I didn't know the laser, I knew nothing about the medium. Then I went into the shop and asked what it is and where I can find things like this, and they sent me to Holoconcepts Corporation of America, with Bob Schinella, who had founded the company as a spin-off from McDonnell Douglas. This is a diver scene, which I put up together with Annabelle Schoenberg – she is studying to restore holograms, which is a wonderful profession in my opinion – and we both were highly delighted that the hologram was in such a great condition. Yes, in such a great condition. It really has an unbelievable depth. It's about five, six meters, and there are at least five divers coming in – there is one coming over here, one over here, the lady down here, and the other one. So it really is a tremendous space, and it was done in '73. In '73 I bought this hologram from Holoconcepts, and I started my career in holography selling the first display holograms in '73, '74. Our first customers: washing powders, yes – they all had the money at this time – tyre producers, technical products, and of course a haunted house, something with this little car driving through a dark space, and there we had a huge hologram in it. This was my very first step into the field, but I don't have to tell you anything: we needed mercury arc lamps to light the holograms, it was a very dim image, it was not really stable – it finally was too early for a commercial business. I found out that it is much easier to make exhibitions and to show holograms, to live from the fascination, than to produce holograms for customers, because the customers were never really happy with the holograms – there were always some things they disliked – whereas the broad audience was always happy with holography and these holography shows. Therefore I shifted within the holography field, and after Posy Jackson founded the Museum in New York, I had the chance to buy the stuff from Holoco. They had just gone bankrupt in '78, and I got all the stuff from the first Light Fantastic exhibitions in London, and so I started my own museum here. The problem at this time was that there was not really enough material, because everybody who had a gallery or a museum had more or less the same stuff, and therefore from a competition point of view it was difficult to differentiate yourself. So I engaged myself very early in art holography, to collect and to show single exhibitions from single artists. I started my activities in a little – well, it was a huge carpenter's shop with about 3,000 square feet in a suburb of Cologne – and each year I set up two different artists, like Rubén Núñez and Rudie Berkhout and others; you can read them for yourself. These are the early greats, the early great names in art holography, and together with the city of Pulheim we sponsored the first European holography prize, which was won the first time by Doug Tyler – here you can see his work in the picture – and after this by Patrick Boyd and Setsuko Ishii; she also is represented in several museums. But I guess the real high season of holography was over at the end of the 80s.
Posy Jackson left in '86, then the others – a lot of the galleries went bankrupt, and I finally closed my museum in 1994, simply because it didn't work as a profit center anymore. I couldn't find any sponsors for the exhibitions, because the exhibitions had also been sponsored by big companies – energy suppliers, banks, things like this. Holography was out of the modern, innovative focus, and the medium of the future wasn't seen as the future anymore. Therefore the demand for the exhibitions went down, and my business model, as I had run it for 20 years, didn't work in the way it had. But I still have the collection and I am still doing exhibitions, and as I mentioned before, I have more inquiries and demand for exhibitions today than ever before in the last eight years. Therefore I think holography is in a consolidation phase. It needs time, and it will stand up again. And if I look at the number of visitors which we had – I had, only with my own exhibitions, more than three million – then it didn't really last for decades; it was a bit of a fashion, and I think it has to come back with a higher sustainability than it had in the 80s. In my opinion holography still has a future, because we are getting such a lot of new, innovative technologies. We have the color holography which we have heard about, we have the virtual combination with digital holography, and we have new holographic materials – some big companies, like Bayer for example, or DuPont, are still investing in holography – and I think there is a new wave of interest, and this really makes me believe that holography didn't end; there is another start in front of the medium. And you shouldn't forget about the whole new generation of holographic artists, who are completely different from the first one, and who are mainly educated in Cologne, at the Kunsthochschule für Medien, and Peter Schuster, who is with us, is working with all of them. There are a lot of internet addresses which you can get from him or from me, and you can look at the new artists who are working with this medium. That finally comes to an end. I dedicated 35 years of my life to holography, and I experienced a lot of ups and downs. In my opinion holography is very cyclic – it always went up and down – and I believe we have another up in front of us. The fascination of the medium still works, as he spoke about, and I really can tell you, with each exhibition which is coming up – I had an exhibition in Croatia this summer – the people are reacting the same way they did in the 80s. I had roughly 2,500 holograms, which I bought in the good years in the 80s; I sold them out in packages to different museums. But it's really ridiculous that the Zentrum für Kunst und Medientechnologie in Karlsruhe got 500 holograms from me and they never showed one. I think the time will come that they will show at least the first dozen, to get the people alive again. That's my talk.
The fact is that the biggest hurdle for curators to organize holography exhibitions is their lack of knowledge, their preconceptions or even their lack of confidence. Not that curators are too stupid to be interested in our preferred medium, holography, but they need, like most of us, guidelines. The problem is that holography has had from the beginning an ambivalent image, presented on one hand as a new recording technique and as such a fascinating medium, and on the other hand as a new form of art. The producers of holography have not been able over all the years, even through distinguished conferences or opinion leaders, to get holography rightly classified. Holography still is an opalescent visual medium with a fuzzy image. I may say that I may even have contributed to the confusion by exhibiting holography in well-known art museums in Hanover or Nürnberg, including the Ludwig Museum in Cologne, but also in technical museums, e.g. the Deutsches Museum in Munich or the Museu de la Ciència in Barcelona. The commercial exhibitions I did in theme parks or shopping malls – I made a living out of my holography exhibitions – did not help the matter. However, as over 3 million people visited these exhibitions, the medium was discovered and commented on.
10.5446/20946 (DOI)
So, I'm Dietmar Öhlmann, I'm an artist as we know, some of you know me already from my study here. I've done my study of art in Liverpool and Royal College of Art. And I've been founding Art Bridge and the Museum and Zinfir Dei. It's a company which is doing special product development for holography. Next page. I'm working together with my wife, Odil. She's been running, she's been founding and running the Museum of Holography in Washington, D.C. for a while. And yeah, after Marias and having several plays to, since 15 years, we are living in Germany and running holographic lab there. Next page. Here we have a picture of a collection, there's Markov, the ape of a Museum. The exhibition of holograms are previously made in holography or in Museum. It was more the holographic collection itself. We have used it for replica of objects or to show just a phenomena of light. It's in modern holography. Next slide. We are coming more and more into a kind of free use because the Museum wants to be not anymore just having some kind of old stuff laying around in their shelves and making their scientific research. The Museum wants to educate the public and want to communicate. So by being communicative, they want to be fun and educational, bring fascination information, appeal the sense of imagination and have a new tool for presentation. And holography is in a form, it will come up in the next few years, a perfect tool for presentation. I think Martin Richardson will show us well, there's something there. I work together with XYZ Imaging in Canada, Geola and in Littown. But we have as well Cibra and HoloPrint and they all together worked in digital holography and brought up a new technology which is available now about what HoloPrint does. The lecture after me will explain deeper into this new kind of material. Synthedie provides the interface between concept and ideas, printable files and final display. What does it mean? I hope this will be a bit more clear in this discussion. If you see behind this hologram, this is more how it's been used in the past in convention. You show a show, eye catcher, a point of sale and a fast effect. Can we have a next effect? But when now, for example, this 3D scan comes, this is, we have a project with Delay, the Schloss Neuschwanstein. It's a beautiful, you know, the Schloss Neuschwanstein. It's a beautiful Schloss with a lot of little towers. It looks a bit like this Walt Disney one. And we have all the three-dimensional scads which is done in three-dimensional from a DLR with a 3D scanner. And so the whole file, it's in one piece as a cloud of data. So how to make it possible for an exhibition because it's becoming in a Barberia kind of museum, you have to take as well all the illustration of the slides and you have to bring this left slide which is three-dimensional and the beautiful pictures together. They do not exist together. You have to bring them together to make some more use. Can we go on the next one? And then in a 3D visualization program, as you see, it becomes more and more an absolute perfect replica of an illustration of this Thron-Sahl. So by using not a straightforward hologram as I learned while I was studying it, now with the new technology we can make a virtual replica which looks more truth than reality itself. This is the way, this is the way we are working at the moment. So this is why we call it product development, using technology out of science. And we have requests from museums, for example, and we bring this both together. 
One of the beautiful projects is from Zaha Hadid. You see the frog's-eye perspective on the right. Zaha Hadid is a very famous architect who also did a museum building for Phaeno. Phaeno and the Phänomenta are a chain, a new kind of science museum, where the phenomena of nature are more touchable and more reachable: you learn the technique of scientific reasoning through a lot of experiments, and you can do the experiments yourself. Can we go to the next one? So the Phaeno hologram – here, I wanted to read this text to you because it's not my text; Christoph Berner wrote it: "The Phaeno hologram is like an iconic work of art, and we use it to highlight the entrance of Phaeno's optical area. Therefore the hologram is positioned at a widely visible place." Here is the picture; you see the opening. "Although the technology behind the hologram is, because of its complexity, not described in total, visitors get an impression of what amazing creative possibilities are opened by the technology. The hologram is used as a motivator. It makes the visitor more inquisitive about basic effects such as diffraction and interference." I had some communication with some other holographers, like Pearl, who give lectures with children, and you will see at the end of my talk this kind of new perception – that we bring holography into the minds of children as well, and that is the best way to bring it into the museum too. A museum needs to be attractive for all generations, and holography is a very interesting sight and a very interesting technique for approaching optical phenomena, for learning more about the physical parameters of optics, and you see that it's fun as well, and you come and learn more in the teaching. We had some experiments made with school classes there, and all the teachers have been repeating, after seeing this holographic exhibition, that ears were opened much more about what the phenomenon of light is, what bending is, what diffraction is, how it reacts, what the nature of light is – and this is quite clear. This was the reason Christoph Berner, the exhibition creator at Phaeno, and his team were very happy to use this medium as a constant place in the exhibition. Continue. Thank you. You see, this is to have Phaeno out in space, this building coming out very far. We had to make the digital construction of Zaha Hadid's building. We worked about three months to redesign it. When an architect designs a building, all measurements are important, and if something does not fit you feel it – most of the building, the construction, had to be reconstructed, merging out the mistakes, the faults. Because if you have a hologram, you see all kinds of faults. For example, you see trees floating a bit above the ground, and all these things which in a 3D illustration are not important become visible in the hologram. So all splines have to be closed, everything has to be adjusted, and then the hologram works. And to have it without any distortion, we have to render the camera perspective about five times wider than normal, so you can have a distortion-free camera movement – a rough sketch of that camera setup follows below. This is about the technique. Maybe further. Next page. In the Landesmuseum in Braunschweig they made an exhibition on photography, because Braunschweig was the city of Rollei, and there was a lot of development in camera making there – quite a history of camera making and producing – and I saw at the end a little image from the newspaper where the opening of our company was published.
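The "render it about five times wider than normal" remark is the kernel of the technique: each rendered view has to keep the whole scene in frame while the virtual camera slides along its track, otherwise the edges of the scene are clipped and the reconstruction distorts as the viewer moves. Here is a minimal sketch of that idea in Python; the function, its parameters and the clamp value are illustrative assumptions for a generic holographic-stereogram render, not the actual Syn4D, Geola or XYZ Imaging pipeline.

```python
import numpy as np

def stereogram_camera_track(n_frames, track_width_m, base_fov_deg, widen_factor=5.0):
    """Camera positions and a widened horizontal field of view for rendering
    the individual frames of a digital hologram (holographic stereogram).

    n_frames      -- number of rendered views across the print
    track_width_m -- length of the horizontal camera track in metres
    base_fov_deg  -- the 'normal' horizontal field of view of the scene camera
    widen_factor  -- how much wider to render (the talk mentions roughly 5x),
                     so every view still covers the scene as the camera slides
    """
    xs = np.linspace(-track_width_m / 2.0, track_width_m / 2.0, n_frames)
    wide_fov_deg = min(base_fov_deg * widen_factor, 175.0)  # clamp to a renderable angle
    # One entry per rendering pass: camera shifted along x, aimed at the scene origin.
    return [{"frame": i, "camera_x_m": float(x), "h_fov_deg": wide_fov_deg}
            for i, x in enumerate(xs)]

if __name__ == "__main__":
    for view in stereogram_camera_track(n_frames=5, track_width_m=1.2, base_fov_deg=30.0):
        print(view)
```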
So I was wondering well I'm part of an old-fashioned Landers Museum so historical museum so I'm already a bit of the past and so they came later to me and explain me so what they wanted they wanted to see the show the past and the future and the thought was thinking the most interesting part is because Roli was used a lot for professional portrait photographer and he is my son he used the first this was the first test of the camera which which Geola was building and so he made this kind of 4D portraiture and he was exhibiting this one there. Can we show the next one? Here in the opening again it's a special attention. It's not my year. So they used it for visitors to try to discover the secret of phenomena and it's a joy to contact with the creative family. We are proud to have one of the first Sin4D with the original fresh artworks of the son of the inventors in our collection and signed by Agima, media collector, Auckland International who is the organizer of this exhibition in the Landers Museum. Again another view so we have already now two views of two independent person of curator how they use it and as you see each one uses it more or less as first of all as a phenomena to impress but as a second as well with an aim of a target to use this kind of phenomena now to go on to compare to select and bring it into what's teaching and get us well more interested to go back in the past and looking at all the cameras where all the original was because without the invention of photography where would holography be I mean good next page. Another one here is a picture of Gerhard Stief in the Explorer Museum is in Dingelsbühel in Frankfurt. It's a two big museum of stereoscopic collection. Gerhard Stief was a photographer for he made all the furniture for IKEA and when he made enough millions as a photograph it wasn't a time when photograph was able to become rich. Then he wanted to do something what is fun and this is as well again educate children not about showing them cameras but using them stereograms and how the sensation of stereogram is there is a lot of it's like a phenomena of 3D stereography and he has as well already a big collection of holograms by another negative way for me by the bankruptcy of a lot of holographic business we had. We had as well a lot of museums in Germany which all was not anymore able to survive in Tim Fried in Bamberg for example. Hollywood still exists on the internet but not anymore as a living nature or we had as well Cologne is quite active with other things. 
We wanted to have a lecture is it coming now but in Germany itself holographic collection is not so much anymore shown we have in other museum for example as well in Berlin it's just in München and Berlin jeweils in a science museum a small hologram shown and it's as well showed it just as an optical effect it's not shown as a demonstration of what you can do or a new way of communication as for example Feno tried can we go one too fast here's a fantastic application for digital archaeology is a computer company which is which makes archaeology from the past you see the wall the lines they actually this one when of of an old home and villages which they found and and there was reconstruction out of the funda fundament waters found the building itself this is this is a new way of of archaeology you you you reconstructing of from what you know how it would have looked like and this is all in a computer and they find find this major very very interesting to use in in the an archaeology museum by the way you you can have an animation because you have 40 by walking in your normal speed and this is important by looking at synchrograms this four-dimensional holograms we have as well one before you can choose a three-dimensional video by the movement of your body this is the effect as this is you can stay here you see all the fundament and all this original place as as what they found and by walking next to the hologram you're going further and further in the past and seeing what kind of buildings has been on this fundament and this is certainly a different view that this makes it visible you can show a video as well but what is a video a video is a machine it controls the machine the machine controls what i'm looking at it this is another form of perception i perceive we are normally when we going in wood and children playing they go there they touching something or sorry but they're not they they they're not going and just sitting there brave and watching a video here with holography as an as a kinetic hologram they can go and pass on and looking at this thing can be configured and this kind of animation natural trust park one really strong strong reaction was from natural trust the hearts did one project they using time to see to show the destruction of of of our nature because the heart is now in a natural trust park and we are facing that even in Germany the woods are starting to they generate more and more and for example they're they make on one place each time one shot and then every day in the same time one shot and we did a hologram of this but you see the difficulties of the hologram again because you have a sunny day you have you have the snow is pressing down everything so you don't have an orientation point and you have you have you have a cloudy day and what you have is this really you have a full animated hologram but you have all kind of interference and places because you have so much information over laying over each other that you don't have a clear sharp image but already in the nature park said they enjoyed it very much and use it in the in the for their museum in their exhibition because they said this is how late nature is it is not sharp there is always something happened it's a it's a smear of time and so they use it very constructive this was very impressive for me i enjoyed very much working with them can you yeah now going to the museum you all know here Madonna with gorilla gorillas in mtv award there is there you have this comics figures virtually floating in space 
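The "walk past it at normal speed and the scene animates" behaviour described above comes from multiplexing: a 4D print stores many views, and the viewer's position in front of the plate selects which recorded moment is visible. The toy sketch below shows that mapping; the function name, the linear position-to-frame rule and the 1.5 m viewing zone are purely illustrative assumptions, not how Syn4D or any printer actually encodes it.

```python
def frame_for_viewer(viewer_x_m, n_frames, view_zone_m=1.5):
    """Map the viewer's horizontal position in front of the plate to one of the
    recorded moments: standing still shows a 3D view, walking past plays the
    sequence back."""
    # Normalise the position across the usable viewing zone to the range 0..1 ...
    t = (viewer_x_m + view_zone_m / 2.0) / view_zone_m
    t = max(0.0, min(1.0, t))
    # ... and use it as the time axis of the recorded sequence.
    return int(round(t * (n_frames - 1)))

# Walking from left to right in front of the plate steps through the frames:
print([frame_for_viewer(x / 10.0 - 0.75, n_frames=40) for x in range(16)])
```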
And when you are googling or YouTubing for "hologram", you have all these amazing videos about what holography promised to do: float here, you're taking your hand out – and now I show it to you, here the hologram comes out – and I saw Odile giving her part of a lecture virtually and speaking to you. No, this is just camera tricks. But what is quite interesting is this cartoon coming onto the stage, and they call that a hologram as well. Can we go to the next one? And we have a lot of competition for the hologram. For example, there's Cheoptics, which uses a remake of the Pepper's ghost effect with a pyramid, but you can build this one very large, and you put it in a fashion show and you have a figure floating in space. And so I had the Dutch museum: they wanted a mathematics exhibition, exactly with a holographic projection in space. So here we have this round place, and Einstein and three other mathematicians were discussing this kind of mathematical phenomenon, and each one faces another direction, and then you have a door and you go and walk to the next direction, to the next one. So we had a wonderful project, but then in the end she said – I'm not allowed to say that it's finished, because we still have to find a way to do it – but first of all, for this exhibition, I had to say no. First of all, feasibility: it needs to be feasible. Second, we have to think: this one works in the dark, and if you go in here in the dark and you have visitors going in this direction, this direction, this direction, you need to light up this place – because then the safety manager of the museum says no, you cannot have a place which is fully dark. So you always have this kind of problem with exhibitions: a museum like the Dutch museum has a lot of rules and regulations, so we needed to find another way, and this thing had to go back onto the wall, and that is not really satisfying for any of us. But in the end this is an ongoing project; this is where we are – we are finding a solution for how we can still make it work in a bright place – and this is an ongoing project for museum application as well. Can we go on? Here is one which is using this kind of holographic projection: in Germany we have the company SAX 3D – you see this here in Hildesheim – in Celle, sorry – the museum, the Schloss museum, with a holographic daylight projection screen. It's brighter than you would get with frosted glass, but you can even make it so that you can project 3D on it, so you have a 3D projection on it, and at the same time, as you see, it's very interactive – you can switch the information. This is the advantage a holographic projection screen has compared to holograms, where the information is frozen in. Some more research about museum applications was done by Oliver Bimber: in 2005 he made this kind of research on combining holograms with interactive computer-animated graphics. Can we – maybe – so this is the illustration of how it should look in the future, and this is all the experimentation. You see DaimlerChrysler and Brown University, you see Ohio University – everybody was working on this project – and it's used for archaeology, or for navigation systems, or as well to have a three-dimensional scan of a brain or of a car. Can we – then – so then I'm going further: since we at Syn4D are living from and using
technical applications so we're making ourselves as well concurrent we are taking as well 3d television in our we're making solution for 3d displays using 3d monitors there we're facing again another problems because the content developed from hologram is not applicable for content for developing for for d display or lenticular each time you have to uh use a different camera system and recording as long it's digital it's easy if it's not digital then you have to think before you take the pictures that you really make your shot for all applications so your customer does not need to make another uh another labor play the day or bring again his sample so it's better to use three four camera system by the recording this is what we're doing we propose always to our the museum uh we don't do just in hologram if you do needs as well mass production so do as well lenticular or do and uh we're doing as well some kind of movie so you have as well something for mtv youtube for your advertisement yeah and uh what i wanted to say is as well in germany we have as well uh we're talking about lenticular we are we have as well a research we it's some uh it's 500 000 or the new gene fund some millions in lenticular of micro lens by base so what is possible to use already to print in micro lens for the 4d it's it's it's it's it's still very small it's not uh it's it's it's it's still in development but there is a kind of development in a new technology a new possibility to use uh to not to use just digital hologram as a as a presentation but as well using uh lenticular things which has is much easier because you can use it in offset print for fun yeah question answers here i wanted to bring as well a bit question from outside of holography uh for example i asked chris chris kinnard reductor from inside media uh he is selling all kind of 3d project products uh in america why holography does not appear in any of your newsletter and 3d conferences and he said very easy there's no market big enough to talk about uh if you are a company on your dealing with a museum think about it's just three four days to to make an idea so if you don't have a job next to you for example it's very hard to make you living with it you because it's it's time consuming it takes one some projects to talk to a museum to bring to bring it out uh or bring a new vision out is really very interesting very artistic but it takes as well all like all other artworks more than one year in its whole conversation rene de la barre he's uh works for fraunhoof institute in berlin and he said uh i was as well asking him who is buying you a 3d kiosk uh it's an kiosk system where you stay in front of it there's an eye tracking system you turn uh 3d objects just in space you can have his share his turns and rotates in all direction is fully inductive uh this is not bought by museum maybe he said maybe it's too expensive because uh it's just the industry which is using it for demonstration but not in uh in uh in a museum and uh charvier was as well uh dr james hamster from a for international conference how we can evaluate the development of 3d products more than a technical problem the evolution of the market is depending of social cultural constraints that's why i said what in the beginning again we're coming to this point we need to educate not just the children we need as well to this educate the designer how to use it this new meter okay i'm very i'm into the end i'm jesson roy distribution manager from spatial view there's no investment for 3d 
education this is why the market does not pick up so fast so 3d as well holography and uh for in a museum holography is just very very view to see can we see maybe in the video in the beginning and conclusion the can i bring me back to the conclusion this is i would like to bring the conclusion of forward presently holography is used by museum for fascination it operates on the public the fascination is understood as a driver for visual curiosity mind-stucking opening a new technology technical technical possibilities source of inspiration and finally an attractive educational tool digital holography is interesting for its capacity to work out the creative content of images while using combining the many digital formats available in document of mosey museum digital holography can be easily combined with other 3d product on the market and museums creators like to work with professionals like sinfodi to solve the dilemma between technical feasibilities and attractive better go the pedagogical presentation can i just go uh just for relaxation uh was in is this possible uh i wanted just to show it's it's as well uh this he he made this taken himself okay this is bigger no no so so this is the you see this is how the camera moves and he records and while the camera is moving next to it you have 3d and 4d this is an actor who is making the opening this is agima he is a creator a curator of this exhibition and here is the reaction uh michael grozman who has been in nita born in france organizing this big big happening and you see how the children enjoy seeing children so so filibar and his work has been worldwide already exhibited he sells more holograms than me so this is just uh any question thank you thank you
In these past 30 years, holography has mainly been used in museums for exhibitions of itself. Now, in our digital world, space–time coordinates can be printed as digital holograms, giving a new educational tool to museums. Digital 4-dimensional files are available as DICOM in medicine, CAD in engineering and architecture, and as 3-D scan formats in geography and digital archaeology. Our experience with museums shows the interest of digital holography especially for its visual, educational and attractive capacity of presentation. Presentations in museums require from curators an almost scientific approach to what is possible with what they dream of using for their exhibitions. Syn4D™ is offering a full service of image creation and display of synthetic 4-dimensional information called Synfograms. The main advantage of using digital holograms lies in the capacity to create, not just reproduce, content in time and space, showing what otherwise would not be possible to explain and visualise. The price, the additional lighting, and limited mass production still make the medium a luxury. XYZ Imaging and Geola uab, Zebra or HoloPrint made this wonderful technology available on the market. We provide the interface between the conceptual ideas, the printable file, and the final display, which communicates between the museums' needs and the technology. According to the museums' needs, we have been working on different applications and presentation types, which will be illustrated in this lecture.
10.5446/20948 (DOI)
[unintelligible] ...I will talk about taking chances, about looking at things – I don't know how to do that... [unintelligible]
[unintelligible] ...It was interesting this morning that a lot of you applauded the holograms – you applauded inanimate objects that aren't here. What's that about? That's really interesting. If you're doing it, and you've probably seen it all before, what are the other people out there going to do? [unintelligible] ...Now, what do we do to protect things that are valuable? Well, you can replace them; you can photograph them – we inherently understand how photographs work; you can make a cast or reproduction, a facsimile in 3D, that you place in the space and everybody looks at it, and sometimes you can touch them. You can protect them by placing them behind glass, which is another thing that Hannah has mentioned... [unintelligible]
[unintelligible] ..."I will never forget her beauty, and how frustrating it was to choose an angle of view to reconstruct the image through a restricted window; and despite the joy of success, it was so frustrating to reconstruct the image with grainy laser light, when the white light illumination of the real object had enlightened my dreams and my days and nights." You know, this is 1975. We've moved on – as you've seen today, we've got full colour; it's viable. At the time it wasn't. But what I really wanted to highlight was the fact that it enlightened his dreams. The fact that somebody can dream about something when they're looking at it is very, very important, and just reproducing the object isn't enough. You have to reproduce the dreams and the soul and the content, and I don't know how you do that – it's very difficult. Perhaps one solution is to reproduce things that don't exist.
The invisible: this is Hot Air by Margaret Benyon, which she did just up the road at Loughborough University, in the engineering department. It's a holographic hand-shaped black hole, and it's interesting because it just shouldn't be there. People who make holograms know that things aren't supposed to move – if they do, it doesn't record – and I reckon there are a lot of wasted holograms in bins all around the world where this happened. But it took an artist to come and go, yes, but what happens if we do do this? What does it look like? And so she made this absence of her hand. But actually it's pretty present, you know – it's a way of telling a story about the space that she's recorded. So we can use holography to display objects, but we can also use it poetically, to try and display the essence of something, or to tell a story maybe. This is a piece by Rick Silverman, and it's the second hologram that I just want to briefly touch on. Again, it's a three-dimensional hologram of the shadow of an object. I'm sure most of you have seen this. For those that haven't, there's a shelf on the front of the hologram on which the broken stem of the wine glass originally used to make the hologram is placed. So the hologram completes the glass. It's a relatively small hologram, but conceptually, like Margaret's piece, it's absolutely enormous. It tells a story, it completes a process, it engages people, and it's very, very simple. It doesn't have to be fancy; it just has to work properly, and that's what this piece does. It's an edition of 24; it's now in collections all around the world, galleries and museums. And Matthias, who is speaking next, did some work with Rick after this, on the German version with the glass. This is the original; there are other versions around which are as interesting. So we can complete these things. I'll show you this because it's not holography – it's about theatre and the sense of reflection. There's something very interesting about the reflective nature of holograms. We like shiny things as a culture. They attract us, we look into them, and that sense of reflection is very interesting because it can produce a depth that isn't there. The reason I want to mention this is that Terry Shave – a painter and photographer – had this exhibition recently at the Bonington Gallery. Although that photograph is a bit misleading, he lit the paintings very much like holograms. There was a light in front of every painting, and the light stopped at the edge of the painting. So you went into a dark gallery and you saw luminous rectangles, to the point that people thought that these were light boxes on the wall, that they were emitting light and sound. And you can just see, on that one on the far right, there's a purple line at the edge – that's a spill of the light from one of the lights illuminating the piece. And it's that piece that I just want to mention. It looks like this. It's acrylic paint with photographs, and then what he does is pour a transparent resin over the surface of the whole piece of work. So it's not behind glass, it's not framed, it's not protected with a piece of glass – the resin becomes an inherent part of the piece. And by doing that, it sets up a process of visual perception – surfaces that appear to have depth. When you look at that exhibition – I was really surprised – it looked like holography. You stood in front of these flat things that were flat – absolutely, I checked, they're flat – but there was space in them.
That space there, that colour space in the middle, had an incredible amount of perceived death which wasn't there. So maybe we don't have to use holography. Great if we can, but perhaps there's another way of using the theatre of lighting and the way we use reflections to cause us to engage us and to draw us into something. You can see here the reflective nature of the piece. That's one of the pieces lying on the table with the reflection of Terry on it. The reason I wanted to mention it is that because it has an incredible similarity to a lot of work in holography, there's a piece by John Kaufman called Rake. It's a selection of opaque objects on a multi-code surface. But there's an incredible amount of depth. Not necessarily physical depth, but there's an optical depth well beyond what's really there. So we can enhance these displays simply by using them very carefully. We are revisiting the 80s. Holograms for the first 60 years has been organised by Jonathan Ross and curates us from Bambry and the Oxford Museum. It's been phenomenally successful. I went to have a look at it at Bambry and followed a bunch of people around. It was just like being in the 80s again. In the 80s there were a huge number, a huge number of very successful mega-holography shows all over the world, which caused crowds and crowds of people and lots of publicity. He organised quite a lot of them so he could tell you the details. People were very, very excited. People were quiet. Now we're back where we were then. But this time we have a very sophisticated audience that are looking at them. I followed a bunch of hoodies around this exhibition and they were really excited by it. It's so cool. It's so uncool not to be excited by this sort of stuff. Their expectations are very high. They have their games consoles, they have the internet, they understand about 3D modelling inherently because it's in all of their games. They expect the world to run quickly, they're a YouTube video clip generation, they expect visual stimulation, which is instant and complete. But what is surprising is that they do still gasp. Seeing a hoody gasp at a hologram is really interesting because they so don't want to. It's so uncool. But clearly they're very impressed. So I think we're in a situation where we can capitalise on this. The fact that this is happening is interesting. The fact that the exhibition has been incredibly successful and I think that it broke record to what bit. It's about four or five times the amount of credit it was usual. What's interesting is that the exhibition is going to cultural institutions and it's free. It's not a commercial project. People can go in and have a look. It means that a lot of people are getting to see this and they wouldn't normally see that. So there are pieces like this by Caroline Palm of Buda. That's what we might expect from museum holography. Taking the object, recording it. Beautiful, beautiful object. Beautiful, beautiful hologram. Putting it in a display environment and just going in and overling it. That's wonderful. We're protecting the piece. But the fact that people are so excited now in Britain and the fact that the exhibition is going to tour next year as well as now, means that there is a sustainable enthusiasm. 
So now, I think, what we have to do is, I say we, I think for you, some of the people, has to take a few chances to take some of the things that you've seen here and implement them in a way that allows you to go out on a limb, to place some things in an exhibition context, which questions, which shows where things used to be, which helps to tell a story, which allows us a feast of being able to look at things. We still have to touch with our eyes. So we're sort of in a position now to be able to do that with a huge amount of interest and enthusiasm. I would, some of the work that I do with the International Holography Fund means that we get a lot of applications for funding. It has a very small budget, a tiny budget really. But a lot of things come in that need funding, creative projects using holography. Almost all of them don't get funded, a tiny number get funded. But I see these things come in, there are lots of them, all over the world, there are lots of people wanting to do things that actually they haven't been able to do because of financing. But also a lot of them are educational. People want to tell people about holography. And I think we've got to a point now where we can do it now. The technology is in place, the enthusiasm is in place. Heek and Ironman climate isn't necessarily helping, but there are always sources where you can inject funding to push these things a little bit further. And I think as an industry we have been putting out quite contradictory signals. Something that Matthias mentioned in his abstract for his paper. We show art, commercial, commission, and holography. People are confused, part of nature. But within all of that there are some phenomenal things that can happen. And you have the opportunity to actually do that. So, we have to look at these things. And I think that we can do it. Can we see? APPLAUSE Any questions? Oh good. Thanks Andy. RAC
Much of our understanding of the world comes from looking at the things which surround us. Holography is the first technique, since the invention of linear perspective during the Renaissance, to offer a fundamentally different method of recording and displaying space and the objects within it. If holography reproduces the light which originally came from an object, what is it that we see when we look at the hologram? Does this ‘possible illusion’ have a place in museum culture? This paper explores key historical milestones in cultural holographic imaging, the paradox of looking at, and interpreting, objects which are not actually there, and the creative potential, explored by artists, of using objects or the space where they once were.
10.5446/21257 (DOI)
Oh Very good very good very good, so one one person was listening and he won a kayak trip Let's try it again Okay, once again if I raise one arm that's the speaker is coming to an end and so he has Time and opportunity to finish a sentence. Please clap with just one finger try it again Very good What is the sound of one finger clapping? And if I do this Then you go for an attic great great great great Now I have optimized my jokes so we have to spend less time on them It only takes one German to change a light bulb. We are efficient not funny our first speaker Larry Hastings My life as a meme give him a big hand Okay, this has nothing to do with Python This is one of my other hobbies this is called speedrunning speedrunning is where you play a video game usually an older classic video game And the core goal is to get from beginning to the very end of the game as fast as possible And it almost doesn't matter what you do as long as you're getting the game done If the game has built-in cheats, you're not allowed to use those But if the game has bugs or glitches you can absolutely exploit those so here. This is a speedrunner named you cheat She is playing ape escape for the original sunny PlayStation and she's using an infinite jump glitch to just fly over the entire level and go straight to the exit speedrunning really hit its stride with the invention of twitch and live streaming and so She's got 50 people watching her play the same video game over and over and over for like eight hours a day Speedrunning is really the domain of the young it requires a lot of time devotion in order to get good enough to be interesting to watch And so these people they just play the video game and when they finish they start over again They just do it over and over and over hours a day a day is at a time So there's now a yearly speedrunning marathon is done twice a year The first one is AGQ and that's in January AGQ 2016 first week of January this year That stands for awesome games done quick. It's a week long 24-hour day marathon Streaming people playing video games for charity and they're all very good and it's just mesmerizing to watch how they can abuse these games and It's it's it gets over a hundred thousand board teenagers sitting and watching the stream as it goes by now Here is a screenshot of AGQ from this year This is Chris LBC playing Spyro one and you can see there's a camera point of Chris while he's playing it There's the main screen and there's also the twitch chat which is going by it's just a text chat thing with a lot of motor cons and things Now this year I decided to go to AGQ even though I don't actually speedrun. I'm no good at it And they have it takes them a lot of time to switch between games because they need to switch consoles and all this stuff And so they have a camera that they just point at the audience And so there's a hundred thousand people sitting there watching this camera That's pointed at people and a lot of time there was nobody sitting there And so I was like well somebody should sit there So I just went up and sat in the front and I would start playing my video game unit You can see I'm right there on the stream right now It's it's this it's called a Pandora. It's for Old-school gaming So I kept doing this sitting in front of a hundred thousand board teenagers and something strange started happening So someone came out from the the show and they they said you know, they're talking about you on the stream I said sure that's fine. 
And then somebody else came on said could you wave to the camera? They'd really like that? Okay? And the guy came out and said people are donating in your name. Where would you like those donations to go? And finally they said what are you playing I was like well it's this so it turned out The hundred thousand board teenagers had given me a nickname. I was now DS dad They thought this was a DS which is not and I had great hair so I looked like a dad which I'm not So it was completely inaccurate, but it was a hashtag on Twitter if you search for DS dad You'll find all these people talking about me. They were posting these love letters They're drawing pictures of me That one's my favorite the sort of low-tech I Don't know if you've ever had a hundred thousand board teenagers talking about how much they love you, but it's a really strange experience. I Had people stopping me in the hall for pictures one guy asked me to sign his Nintendo 64 controller He was very he was more excited about me than the world champion at Mario 64 Initially I thought this is kind of annoying, but honestly everyone was being pretty respectful and it was all for charity and so it's like Okay, that's fine. So two weeks ago was sgdq summer games done quick Held in Minneapolis, and I went to that This is the Spyro 3 any percent race This was Orsa and wed C running and you can barely sort of make me out, but I got to sit on the couch and do color commentary And during the race, of course, there's the twitch chat scrolling by and they're like oh, yes dad. He's on the couch So I don't know if you'll want to watch that there's a link to you can watch the stream go by it's like nine hours into a twitch recording Speedrunning is really fun to watch It's even more fun to do if you have a lot of time to devote for it, and maybe I'll go to AGD Q in 2017 and they can all love DS dad again Thank you Wonderful stories. Thank you so much that remembers me When I was even younger I was giving seminars and it was about sending emails and So at the beginning of the seminar I asked the guys watch is your computer knowledge and One guy told me I know Zelda. I played it totally that was his computer knowledge Very good now we have as next bigger Xavier Domingo about Python Xp Exclamation experience. Ah cool. Pardon experience. Cario Domingo. Give him a big hand Okay, so basically Okay, who I am Basically, I have done everything related to electronics, but I'm not doing that for my work job or day-to-day work Have done a lot of side projects. I have done a lot of embedded code and stuff like that the most Thingy a good And I have no images in the presentation. I have no idea how to make a presentation So don't expect too much I wanted to basically share what I have learned on the I have been coming This is my third year in a row since I discovered you the Python Lightning talks for me are like the very best because you actually get to learn a lot of things about Python that are Like known to everyone but you So I started on the University I I had like my well I kind of read a lot of air of seas and stuff like that and it started to backport an engine module for I don't remember what and that was like super nice then I went to Phone I started coding a Python we I took we took a more PV which is an open source project and basically 40 to make a bird Have done a lot of embedded systems there all join either. 
I learned how to basically link what you do in C with Python. It's really hard to use because it has a lot of things going on at the same time. Then I tried to install OpenStack — that was like, do we need this? — that was actually easier; I didn't know how to use it, but I could install it. And then I started with Spark and stuff like that. Well, of course this was over a long three years, so at the end I was actually better at Python. Then I started on music: I did a lot of C development, I didn't use Python for a while, but that was like a superb introduction to async. I know, if anyone is lost in async — well, this is not a good thing to say — but you can start by coding C and then you see how to port things to Python, and that actually gave me a lot of background on how not to access files or sockets. And now it's everything Python everywhere, and if it's not, I put Python there. I didn't want to make an enormous list, so I just put that: everything. Okay, so the first thing I learned when I started with open source in 2009 was that every piece of software has a bug waiting for me, always. I always find something that is like, oh, you found a bug, or it's not documented, or this is a feature. So anyway, I had to keep getting into it: first you go into Google, then you go into the mailing lists, and then you start reading the code. So one really, really good piece of advice I got was: if you find a bug or something, just file a bug and someone will actually have a look at it, probably, and you will learn a lot from that. You can propose your use case, because usually you want to do a super complex use case that no one would think of doing, and it can become a feature in the project. And also then, after a while — like three years of just filing bugs — I could actually make one patch. And it took me one hour; it takes a lot of time, like if you're looking for something else it takes a lot of time, and you learn a lot from the projects you contribute to just by reading the documentation, trying to learn how to use all the things. So all those things matter. Remember to basically have a minimum viable product, else you will just not want to continue with your side project. And don't use async with threads, because that's not async, that's just threads — unless your library doesn't support it. I did it like how I usually do things: no functions, then I passed to dividing everything into functions and modules and packages. Try to use flake8 from the very beginning. Yeah, fixtures, they are fantastic, use them. And remember to have sane defaults — it's horrible when you clone a repo from GitHub and you cannot run it because there is something missing. And yeah, that's everything. Thank you very much. Excellent. So maybe stay with me one moment. Sorry. Maybe you wonder what I gave to the nice gentleman. It's a voucher for a kayak trip, which every speaker is also entitled to. Do you want? Yeah, Larry, kayak trip tonight. It's at seven o'clock tonight. Okay, so every speaker is entitled — you can set up your laptop — every speaker is entitled to get one of those, but he has to be present at... What? I showed the wrong side. Yeah, because I have to read what's on it. So I'm playing blank cheque. Yeah, blank cheque. You can draw blanks on those things. 
I don't know the word Draw blank what it was ever anyway the speakers are entitled to one kayak trip it will start at 7 o'clock 1900 military time at the reception leave your electronics at home and I have something like 9 registered lightning talks Larry's kicked out. So that's 8 one I gave away So something around 10 more vouchers are with me And there'll be riddles between the talks where you can win a voucher for a kayak trip our next speaker Danielle about Python adventures in Amoeba give him a big hand So If you've been to any Python or Django conferences in the last few years, you'll have heard me talking about a Plans plans to initiate new pycons in African countries And I'm very pleased because it has turned into a reality and this January We went back to Namibia for the second time for an international Python conference Pycon Namibia 2016 So there's a Namibia With its population of just over 2 million people. It's the third least densely Populated country in the world. It's very easy to get you just go towards South Africa and turn right before you get there So our venue was the University of Namibia in Windhoek the the capital we had 180 18 attendees half of whom were women with visitors with attendees from South Africa Zimbabwe Zambia Nigeria in all in Africa the UK Netherlands Germany Canada USA in Brazil so from all over the world 63 of our attendees were Namibian students including a number of high school pupils and 32 Django girls we had a four-day program first of introductory Talks and workshops including the Django girls to help people get started two days of talks in two tracks and then some more advanced workshops there were some challenges in setting this up, of course because the economy in that whole region is struggling Has been in a difficult situation for some time so trying to budget for a conference where for example the price of a very good meal is Is represents maybe half of somebody's monthly disposable income? So if you imagine the difficulties in trying to set prices for tickets We in fact were able to ensure that all the Namibian students who came came to the conference without needing to pay anything and We had a lot of help from Our partners in the University of Namibia Cardiff University in the UK the Django Society UK But here I especially want to mention the Python Software Foundation. We're hugely grateful for the financial support that they gave a Very decent amount of money to support this and made a lot of things possible So the PSF really is helping make a difference in the world. So thank you to the PSF for that We had sponsors from Europe as well from South Africa and Really happily this year for us actually local Namibian sponsors, which really means something about the involvement of local business in this We took a pre-configured Pi lab of 50 Raspberry Pis Funded by the PSF and Cardiff University. So it's a bit difficult to find the right equipment sometimes Had some interesting conversations at airports, you know, what's in this computer sir? It's 50 computers for a conference. Haha. So no, no really it's 50 computers for a conference And here's one of the workshops on the first day as you can see a couple of the school kids there being helped by one of the Students from the University of Namibia the Django girls again one of the school girls the Django girls workshop there spawned further Django girls workshops Elsewhere It wasn't just Python. 
We have local developers of PHP and Java people who just wanted to be involved in open source and and what we were doing came along and stayed for the conference to Find about Python and just be involved in We made a big splash in Namibia we were on the newspapers television and radio here Jessica from Namibia and Vincent from Cardiff being interviewed on the radio. I was on Namibian television as you can see it says I am in fact Python software This was our program of talks you can see it's a pretty packed Program into tracks with people from all over the world speaking. So that was it was a proper Python this is the most diverse lineup of lightning talks I've ever seen and It really stands and represents the conference for me. Of course, we had a different kind of lightning talk here's Gabrielle explaining how to stay safe while you take your tourist selfie with the hippos in the background Here's Samuel one of the UNAM students Who presented some really interesting work? Lots of interesting Outcomes pineapple Namibian Python Society working with schools and students Django girls all over Africa other pipe African pythons are being worked on by people who were there and We did have a hitch student protests hit the conference over registration fees so we were we had to postpone one of the days by a day people said welcome to Africa But Africa is not the only place where people have protests or the only conference that gets disrupted At all by any means but when we realized what had happened it took us 45 minutes to arrange a New venue for 118 people and a two-track conference. So that things really can get done. I only had a Very 48 hours of free time. I took a road trip down to the coast through these amazing amazing Landscapes through things that Were really special so thank you very much If you're interested I'm doing a talk tomorrow on artificial intelligence So the guys in California have silicon rally they have silicon all over the desert so Anyway, Alexander told me that There are some online things where you can rate your talks you visited today your sessions Has anybody already rated the session in the Europe Python application? Yeah, one winner Do we have another winner who already rated a talk and While I do stupid things I can get Please Radomir, can you please set up your system? Okay, so Another question how many robots Does it take to change light bulb? None none is a creative answer who was it with none You were none you you already are why none Because it's a robot that's good enough to win a ticket can you give it in the backside? Yesterday we were preparing for this show this morning and I thought How many robots does this take one robot for all the light bulbs all your light bulbs are belong to us ha ha ha ha Anyway give it a hand to Radomir He will be talking about when fabulous prices and you know what our fabulous prices are So you've seen that probably you've seen the keynote about Micro Python on the micro bit, but the micro Python really works on a lot of different platforms And one platform in particular is very interesting. 
It was released the port was basically written this year also as a result of Kickstarter and it's this board It's basically the size of a post stamp It costs about three dollars Sometimes less if you if you order in bulk it's good ESP 8266 and It has Wi-Fi on it So it can connect to the internet So it's Pretty cool thing and the micro Python community for that particular board is growing now right now So I thought I will make a contest to encourage this growth even more so It's on the hackaday.io website and You can and you can basically build anything you want any cool project using this microcontroller and using micro Python on it and I will basically choose the project that I like the most and that person Yeah, we'll win another micro Python board this one is Slightly more expensive it has a camera and it has built in a lot of image processing Function on it. It's quite open MV and It's a very cool product that you can use to make your robot track faces or make I don't know camera for your drone that automatically tells you where you are by observing the ground and seeing how it moves or a lot of other things so I have an extra one from Kickstarter and I will basically send it to the person who wins this contest so that's that and so if you are interested in Tinkering that's certainly something to try and also, I want people here on the conference to try Installing and playing with micro Python a little bit more so We could do that at the makerspace maker area back in the Conference maybe today evening after the lightning talks and maybe some other day just Check there. I have a box of cool stuff with me So we can try to connect some wires and so on. So that's it. Thank you. Thank you Oh Excellent wonderful wonderful who thinks she's a winner She's a winner cool. He's a winner We don't discriminate by can you give it back to it cool. We don't discriminate If I ask she's a winner and a man raises it We had Naomi going through all the troubles to raise the number of women in the Python community She told me she changed her sex to get it. So we have a lot of people who are Sex to get it. So we don't discriminate we do it if we ask the question So one thing more have you heard about those stories? When Google bought deep learning and they made a computer play go It's wonderful. It's wonderful. I just was thinking who is giving directions to those developers I have never woken up in the morning and thought hmm It would be so nice if I would have a machine to play go for me I would be fun if they do my dishes wash the bathroom do my tax things But what do they spend their time on playing go? What will be the next the robot playing golf better than me which is easy Why so Submit to the ball the bark. What is the bark? You will be a simulated That's one explanation you get on it. What is the bark? What Borg is a backup solution. So you have somebody you can share no another explanation for Borg think of Alex Martelli The Python singleton pattern give a big hand to this guy Hi, Python singleton pattern by Dr. Alex Martelli Who was it with a singleton pattern? Ah, okay, you already got this ticket great Okay, Borg backup Thomas Wildman give him a big hand Okay, thank you. I wanted to present this backup solution to you The phrase in the middle is from a guy on Twitter It's about a year ago. He discovered attic And it's kind of the father project of Borg backup and he told oh I think I found the holy grail of backup software. So it also applies to Borg What is Borg? About a year ago. 
I forked the attic project. So it's not a new project — maybe you don't want to use really new backup software — it's six years old. Why did we fork it? It's because the development of the original project was rather slow and pull requests did not get merged, and also the original author was not very open to new developers and so on. So it was a bit of a pity that it did not proceed as fast as some other people wanted, and so in the end we just forked it. That was a year ago, and since then the community has grown quite a bit. So it's not bus factor one anymore, there are a few people caring for it. We have committed a ton of fixes and have merged a lot of pull requests, and also we are inviting new developers. So if you want to hack on it, just talk to us; it's a lot faster-paced than the original project. When the fork was done there were 600 changesets on GitHub and now we have two and a half thousand changesets. So, the feature set: if you want to make backups you don't want to invest a lot of time, so you want something easy and it should also go rather fast. Also you want some features, for example you want chunking — that means cutting the file into pieces — and it will also de-duplicate these chunks so it won't store anything twice, so you can save a lot of space. We also have compression, the usual compression algorithms; lz4 is very fast. We do encryption with AES, and on top of that encryption we sign the stuff, so nobody can toggle some bits or try to break the encryption. The back end is either a file system or a remote server via SSH. It's free and open software. We have good documentation; platform support and architectures are quite good — it runs on Intel, AMD, ARM, 32-bit, 64-bit, basically on almost everything. It also supports special stuff like extended attributes, ACLs, BSD flags. You can mount your backups with a FUSE filesystem, so you can look directly inside and copy some files out of it. It runs on Python 3.4 or upwards, and for the speed we have a little bit of Cython and C. We have good test coverage and a continuous integration system. Some special stuff about the de-duplication: it's not just file-based, it's cutting the files into pieces, so it has no problems with virtual machine images; it supports sparse files; you can also do whole disk images or logical volume snapshots; you can rename huge directories and they will still get de-duplicated. The de-duplication works within a data set, also historically, and also between different machines even. How is it working? It cuts the file into pieces: it rolls a hash over the file, and every time the least significant bits of the hash are zero it says okay, I cut here, I cut there, and so on. And these pieces will get hashed and stored into a key-value store using the hash as the ID, so you can see that every piece that gets the same hash is just stored once. The hash function is also seeded, so you can't do fingerprinting attacks or something like that. So it's also quite secure: if you have encryption active, it will not use a plain hash but an HMAC, so there is a secret key going into it. So it's safe. 1.0 is released; you can get it from different sources — Ubuntu, Debian, whatsoever. Soon we'll release 1.1 with some new features, and 1.2 is the next bigger change: we will introduce some new crypto stuff and also try to parallelize more — currently it's single-threaded. Also, it will get faster in 1.2, as AES-GCM is a bit faster than the current stuff. And yeah, there will be an open space meeting. 
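To make the chunking and de-duplication idea just described a bit more concrete, here is a rough Python sketch. It is illustrative only, not Borg's actual implementation: Borg uses a buzhash rolling hash and a seeded/keyed MAC, while the toy rolling hash, window size, cut mask and plain SHA-256 below are simplifying assumptions.

import hashlib

WINDOW = 16            # minimum chunk size in bytes (illustrative)
MASK = (1 << 12) - 1   # cut when the low 12 bits of the rolling hash are zero

def chunks(data):
    # Content-defined chunking: roll a hash over the data and cut wherever
    # its least significant bits are all zero, as described in the talk.
    start, rolling = 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF   # toy rolling hash, not buzhash
        if i - start >= WINDOW and (rolling & MASK) == 0:
            yield data[start:i + 1]
            start, rolling = i + 1, 0
    if start < len(data):
        yield data[start:]

def store(data, repo):
    # De-duplicating store: every chunk is keyed by its hash and written once;
    # the returned list of IDs is enough to reassemble the original data.
    ids = []
    for chunk in chunks(data):
        key = hashlib.sha256(chunk).hexdigest()   # Borg uses a seeded MAC here, not a plain hash
        repo.setdefault(key, chunk)               # identical chunks are stored only once
        ids.append(key)
    return ids

# Usage sketch:
# repo = {}
# ids = store(open("disk.img", "rb").read(), repo)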
Just look at the board, and also the sprints. Okay, open space it is. You want another ticket if you like? So, Lasse Schuirmann, could you please set up? And Lasse — ah, here you are. Now I need 10 volunteers to come kayaking, okay? We have three volunteers, four volunteers, five, six. Okay. Can somebody help me, can you please distribute those? I take away four for the speakers, and Alexander will drop his laptop and distribute them. Okay, all the volunteers for the kayak trip at 7 o'clock tonight, raise your arm. Oh, that's fine. That will work out. There, Alexander, on the right side — we have many people on the left side. Make yourself known, Alexander. He's moving around, great. Okay, so while he is distributing them, I had something I was thinking about. If a group of people tries to leave another group of people, you call them separatists. If you have a group of people who wants to stay with a group of people, you call them unionists. What do you call a group of people who want to leave another group of people to stay with another group? Wonderful — I would call them Scottish. Anyway, our next speaker is Lasse Schuirmann; that sounds German. Thank you. Okay, I want to tell you about a project I'm so excited about that I'm currently spending 20 to 30 hours a week of my free time on it. So I want to tell you about the coala project, and coala is a tool that finds the problems in your source code and it can fix them as well. You've probably heard about a lot of those tools, so to explain how it's different, let me ask you a question: who of you wants to rewrite LibreOffice just to get a spelling correction for Portuguese? Please raise your hand. Okay, interesting, so I wouldn't. And let's take a look at the world of static code analysis. We do have a lot of different tools. If we look at Python only we have Radon, autopep8 and pydocstyle, whatever — there is a plethora of tools, and as a user you have to learn six tools to cover only one language, and as a developer you have to learn a lot of different tools. As a user you want the code analysis in the editor plugin, in the command line, maybe in your continuous integration or directly on GitHub; for research you want to have a JSON output, and all that. And for most small tools you don't have that, because they don't have the time to provide all those integrations. So we have a lot of different tools and a lot of different use cases to cover, and still the user would have to learn lots and lots of tools. So let's put an API in between them. And we call this API coala, which is the code analysis application. coala currently has code analysis for 54 programming languages. So how do we do that? Our goal is to reduce redundancy. So we allow you to write static code analysis for coala without writing a new tool. But we don't want to create new redundancies, so we don't duplicate the existing code analysis, we just wrap existing tools in addition to what we have. 
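As a rough idea of what writing a check on top of coala can look like, here is a minimal sketch of a custom "bear". The module paths and the Result.from_values signature shown here are assumptions based on coala's documented bear interface and may differ from the current API, so check the project's docs before relying on them.

from coalib.bears.LocalBear import LocalBear
from coalib.results.Result import Result

class TodoBear(LocalBear):
    # Flags every line that still contains a TODO marker.
    def run(self, filename, file):
        # `file` is the list of lines of the file under analysis
        for line_number, line in enumerate(file, start=1):
            if "TODO" in line:
                yield Result.from_values(
                    origin=self,
                    message="There is a TODO left in the code.",
                    file=filename,
                    line=line_number)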
There is also a tool, GitMate, which can automatically review your GitHub pull requests using coala, because coala just provides the API. After this talk, I want you to try out coala if you haven't yet. I want you to tell us especially what you don't like about it, so we can improve it. We have an active community. We actually have eight people for this project here at EuroPython. And we will have a sprint — please join us at the sprints. Everybody who solves a low-difficulty issue gets a Steam key for free. And last but not least, I want you to keep your passion about programming, about open source software, and drive this community forward. You're great. Keep doing it. If you have any questions, we actually do have one and a half minutes for it. No, it's okay. Just like we talked. Cool. Very good. Thank you very much. Thank you very much, Lasse. You? Good to join us? I think so. Cool. I need Tuna Vargi on the stage. Tuna? Very good, very good. So, we've heard of test-based development — or test-driven development, even better. We had very bad experiences in Germany lately. There were developers of car engines, they developed for the tests, and the people in the US were very mad about it. So, be careful with test-driven development. Don't obey the testing code. So, you are on the... Cool. Argüman. Yeah. Argüman. Oh, he'll get help. He'll get help. Oh, cool. Argüman, an open source argument mapping platform, which only works with UTF-8. Give him a big hand. Hello, everyone. Has anybody heard of Argüman before? No, not many. I see only one or two hands. Okay. Argüman is an open source argument mapping platform. What is argüman? It is the Turkish synonym of the word argument. Yeah, I'm Turkish. So, Argüman is an open source collaborative argument mapping and analysis platform. So, what is this? What is argument mapping, first? It's a visual representation of critical thinking. So, it's basically a discussion platform, but visually a little bit different than conventional discussion platforms. It looks like this. So, you have an argument like you see in the... There are some premises, because these are two supporting premises to this argument. And there's a "however" premise under it, which is a kind of supporting premise for the one before, the one up here. These are both fruits, for example: apples and oranges can be compared, and this guy supports — he says they are both fruits — and this guy says while they may both be fruits, round, derived from the same text and blah, blah. So, this is another... So this is a tree structure, as you can see. So there's a... Somebody come. Also, you can log into the platform and just enter your argument if you have any. So, "AI should not be enslaved", for example. And this guy starts to discuss that. And these are, for example, opposition premises. The guy says no, and this guy says a supporting argument to this no. And after all, you see, here's a conclusion: 81% objection rate. We calculated the objection rate with an algorithm. And basically this... For example, "plants should have the right to vote", and this is a support premise for the argument and this is an objection. Yeah. And this is another argument, for example. And this is a "however" premise. Yeah. So there are also fallacies defined. So if something has no argument value, for example, like this, you say this is a fallacy — don't do that. And these... After some point, these fallacies make this argument invisible, because it's not an argument. Like this. And we closed on down bridges. 
It is kind of a scene but like almost invisible. So this is the objection rate, rate like or supporting rate. We calculate this to the value of the premises. And there's also semantic network between arguments. We kind of use the word net for getting the... getting the words from this argument and teaching them into the platform. And after some while we know which and we categorize them automatically. So like this. After while you say that AI is belong to this artificial intelligence category and is a computer science. It's under computer science and blah blah as you can see you can check the platform. And right now it's supported in four languages, Turkish, English, Chinese and French. It's an open source platform. You can contribute whatever you want. This English... No, we did the English part but this Chinese and French translated like opens a bit support of open source guys. Now development is this address you can find the GitHub, the repo story. There are 600... I updated this today. So these are the statistics and... So what we use Python, Django, text blob and LTE and Unity code. And Ginex, GoniCom, PostgreSQL, MongoDB and wordnet for lexical dictionary. Thank you very much. You can find them on GitHub, you can like them on Facebook, you can follow them on Twitter. Thank you very much. Thank you. You will join us? Maybe. If not find somebody who will join us. Okay, can you please come up? Wonderful. Wonderful. You need a microphone. You don't have a loud voice. We have a microphone here. We'll switch it on. Look at this old man. He'll take five minutes to come on stage. I'm just coming. It's just all this conference going where he does my back-end. You have to sit down and you stand up and you have to get to the talks and you lie in an uncomfortable bed in a cheap hostel and gosh, I wish there was some sort of solution to all the stress that we have from programming, fixing other people's bugs, fixing most annoying of all your own bugs and some way of de-stressing after all the programming and also maybe making a donation to the PSF. Thankfully you'll be pleased to hear that there is a solution to this problem. As every year, normally organized by Rob Collins, there are charity massages going to happen this year in the name of the PSF. Fabian here and myself will be going around giving away free massages and you can give a donation and get a massage. You can also give a donation if you don't want to receive a massage and you would like to avoid it. That is also the important thing is the donation. If you'd like to help with that, it's a charity collection for the PSF. We're going to be collecting that money at the social event on Tuesday and during lunch slots. We'll be training anyone who wants to volunteer and help give out massages or mug people into giving money to not get a massage. We'll train you in a professional massage course in the course of about 15 or 20 minutes, professional massage course given by entirely professional massers and that will be happening tomorrow in the lunch break immediately as lunch starts. Because there are long queues for lunch, if some people want to come and learn massages, we'll spend 20 minutes doing that and by the time we've all learned to give massages, there will be no more queue and we'll be able to have lunch. That is the message. 
Free massages for the PSF — come and learn how to do it in the lunch break tomorrow, immediately as the lunch break starts, and we'll be around the corner here outside the PyCharm room with the lovely view over the river, so it'll be de-stressful already, de-stressing already. How about that? So that's the idea. Thank you very much. One more thing, one more thing. I have one more message. Walking around the conference, you may see some people wearing badges. I, for example, am wearing a badge with a little snail on it and the badge says, I'm a beginner mentor, ask me anything. There are also some people here running around with similar badges that have a little python snake on them, and those badges say, I'm a beginner, be nice. So there's a surprising number of beginners at these Python conferences, and the people with beginner mentor badges have said they are entirely happy — I mean, everyone in this room is happy to answer questions from beginners, but people wearing these badges are saying, look, if you're a beginner, if you have what you think is maybe a stupid question, these people have just said, look, I am totally happy to stop what I'm doing, stop the conversation I'm having right now, and answer any questions you may have. Up to and including, like, where is the bathroom, what time is the next talk. But you know, if you have questions about a talk you just saw, and you feel like everybody must know the answer to this but I'm stupid so I don't — ask them. Wonderful. Wonderful. And I have to ask you for an extra applause, because I'm restricting myself from using the word happy and I'm also restricting myself from using the word ending after that presentation about a massage course. So please give me a big applause for my restriction to keep this conference civil. I'm not using that word. Excellent. Michele, you will be talking about "Thanks for the Python 3 Statement". Can you help him with the thing? I have not enough jokes to get this projector running. I tried this before and it was working. Yeah, yeah. I've been hearing this story since 2006. That's great. I've been to — oh, great, great, great. So while he's figuring out how to destroy it again: I've been to many conferences, I always met the guys from MongoDB, and I was so jealous because they had money to burn for marketing and everything. I've been hearing that for 20 years. Just press close. I cannot even reduce this. I can't even. I can't just add more. One more left. That's the video. That's a new game. Don't worry. Hit the close fast enough. I will talk. Michele, give him a hand. This is not prepared. It will take less than five minutes; probably two minutes would be enough. But I couldn't say — because all I want to say now is thanks to these guys here, which are the scientific Python community, essentially the people doing IPython, Jupyter and all the scientific stuff. Because in this Python 3 statement, that I discovered just three days ago, they state that — the reason why I discovered this is that I was checking if IPython 5 was out or not. It was out, and also they were telling that this was the last version to support Python 2.7; the next major version of IPython will be Python 3 only. And not only that, they made this Python 3 statement, and a number of scientific projects are signing this statement saying that, essentially, well before 2020 — as you know, 2020 is the end of life, at least officially it's the official end of life of Python 2; we know that it will continue for the next 50 years. 
But officially the Python Software Foundation will not support Python 2 any more after 2020. But these guys here, the scientific guys, they will remove the support before 2020. And for that I thank them, because I don't need to wait for years, because I am an old-time Python programmer — I started more than 10 years ago. So I remember the time when there was the mythical Python 3000. It was not even called 3.0 at the time, it was the mythical Python 3000. It came out in 2008. We are now in 2016, and I still could never program professionally in Python 3. Now, after this, I think probably in the autumn — I think within this year — I will probably switch to Python 3, because I work as a scientific programmer. The scientists that work with us — I'm in the IT group, but the scientists are using Python, the notebook now called Jupyter, every day. So they always need the latest version of Python. So this is a good reason for us to migrate. And this was good for me because I'm just past the middle: I have already done the first 80% of the migration to Python 3 of our core software, and now also the scientists will have to change all their notebooks, etc. So I'm happy because I started the work and now I can say, look, I told you it was the right time to do that. So, really happy about that. I don't know, in the business community, probably in enterprise programming, you will still have this lag for a few years, but I see that the tide has changed. Now essentially all the scientific software I use has already been ported to Python 3, so we will also do the migration. And I see that everybody can sign this agreement — this statement, sorry. And maybe we will also sign this; then we need to have a plan with a date and say, our software will stop supporting Python 2 by this date — I don't know when. So, but this is a good thing. Another thing: I will talk on Friday here, and everybody who is interested in scientific Python, high performance computing in Python, clusters, distributed computing, etc. can talk to me; I'm available. Thank you very much. Richard, will you show us? No? Okay. That nearly concludes today's lightning talks. One announcement: tomorrow, Harry — can you please stand up? — after he spoils you with massages, he will present tomorrow's lightning talks. Please give him a big hand. And if somebody is still available at 7 o'clock — okay, the young fellow back there, just come to the front at the end of the thing — join us on the kayak trip. So those who will join us: leave your electronics at home — especially you, Radomir, if you want to sell them, leave them at home — come in something that can get wet, and enjoy your evening. See you tomorrow. Thank you very much. Bye.
Various speakers - Lightning Talks Lightning talks, presented by Harald Massa - Larry Hastings - My life as a MEME - Javier Domingo - Python exp! - Daniele Procida - Python Adventures in Namibia - Radomir Dopieralski - Win Fabulous Prizes - Thomas Waldmann - Borg Backup - Lasse Schuirmann - coala - Lint and Fix All Code - Tuna Vargi - argüman.org - Harry Percival & Fabian Kreutz - Sponsored Massage Training, in Aid of The Python Software Foundation - Michele Simionato - Thanks for the Python3 Statement
10.5446/21258 (DOI)
Alright, come in everyone, lightning talks start at 5 o'clock, the best place to sit for the lightning talks is right up at the front, especially if you're a speaker. If you're a speaker in the lightning talks, I definitely need you up at the front. If you're not a speaker, there's still room at the front, or the lovely second row. Mmm, secondy, mmm, second. Okay. Remember how squirrels pack nuts? Fill up the rows from the middle first. Welcome. First person on my list is Cast Vochis. Are you here, Cast? Good. We're going to come and start setting up. Great. Doo doo doo doo. Number two will be Fondo Batista, or something to that effect. Fondo, are you here? Very good. Yes. Alright. I'm just going to go back over here and check number three now. Some people would check several at once. Paul Hallett, Paul, are you here? Yes. I can keep doing this actually. Rafael Schulze. Yeah, okay, you're pretty close to the front. I'm going to allow it. Alright, you don't have to sit in the front row. That's great. Thanks, Rafael. Ben Foxle. Ben Foxle. Ben Foxle. Alright. Hey, hey, hey, someone's going to get bumped off the list. That's great. We've got loads of lightning talks for you today. Alright. Come closer to the front, everyone, if you've just walked in. There's loads of room up at the front. When you're close to the front, you can see better. The person giving the lightning talk feels like you're more engaged and more curious about what they have to say. It makes them happy. A happy speaker delivers a better talk and a better talk informs you more and makes you more happy in a virtual, oral-borosircle of knowledge, transmission, passion, community and fun. Come close to the front, basically, is what I'm saying. Yeah, that person is right. If you close the display settings app, that little warning will go away. Oh, we can leave it. I hope. I don't know. Lightning talk, sir. A succession of talks interspersed with people heckling from the audience about how to fix your display settings. Come on, lots of room near the front, everyone. The front is the best place to be for the lightning talks anyway. You get to see more, you get to learn more, you get to feel more. The whole thing is more engaging. The day ends on a high note. You go home happy and the whole conference turns into way better value for money, which we can't possibly argue with. Come up to the front, lots of room at the front. Also a bit of room here in the middle in a little enclave behind the cameraman. That's probably because people don't want to sit behind the camera. Okay, I'm looking at a camera. I want to look at a talk. Come on in. We're going to start in about one minute. There's not a lot of time now. If you're going to come to the front, it's right up here. Come on, walk, walk, walk, walk, walk, walk fast and safely towards the front. All right. Now, we don't start until exactly the time. You can, no, it's not my watch. My watch is the only watch that matters. You can start heckling me when the official lightning talks have started, but it's not fair to do that before. Get out, you. Hi. Okay. We're about to start. So if you're now walking in, I need you to start walking in much more quietly. So tiptoe to your seat from now on as we're about to start. Same lightning talk rules as Harold. One hand for little finger claps, two hands for final claps. You have up to five minutes. You don't have to use the whole five minutes. Cast, take it away. Thanks. Everyone can you hear me? Okay. I'm Kars Tjairtjes. 
I'm here to raise awareness for an open source project that we developed. It's called bquery. I want to use these minutes to explain what it does, why it's really nice, and why I think more people should be aware of it and possibly use it. I co-founded my company three years ago, and when we started we had some problems. We actually had clients from the start — they were retailers with billions of records of data — but we had hardly any money, and we still had to be able to respond to that. So we had this technical issue that we had to address, and that's how bquery in the end came to life, through various other parts in between. What does it do? It basically runs everything that you see here in the back. I actually have something like 20 million records here in the back, and it runs through them, it aggregates basically on the back end and makes it possible for us to be very efficient in terms of aggregation and reporting. So that's basically what bquery does. As you saw, the front end is HTML — that's what our clients use. The entire back end is basically Python. So that's what we do there. It's part of a larger stack, parts of which we'll also open source later on, but for the start I want to start with bquery. Why did we make it? There are several solutions out there, like, of course, Hadoop — Spark did not exist three years ago yet — and there are other solutions, but especially for a very small startup they are very hard to administrate. They can be very expensive and use a lot of resources. And at the same time there were more technological developments in the background. I think the most important one, which you actually see with MongoDB and WiredTiger, is the move towards compressed on-disk storage. That was also happening in the Python community, because of these two men here who made bcolz. Unfortunately, they're not here at the moment, but they also helped us a lot — really great guys. What bcolz is, is basically compressed data containers. It takes data and puts it compressed on disk, which means that it sends it zipped to your CPU and unzips it there, and the idea is that modern CPUs are so quick that you actually overcome the memory bandwidth issue and are nearly as quick as it would be in memory. That's actually true. The only problem with bcolz is that it does not aggregate anything. It's basically a bare-bones framework for compressed data and for reading and writing that compressed data. That's where we made bquery. bquery sits basically on top of bcolz, and — well, this is our poster slide with slides on slides here, so you can read this later on — but basically what it is, is an aggregation framework, and it's rather fast. We have comparisons with pandas, for instance: compared to pandas in memory, it's like 1.5 to 2.5 times slower. That's, of course, comparing on-disk aggregations to what pandas does in memory. Also, the sources are down there. There are some examples of a New York taxi dataset with Dask, where you can see it basically being spread over a cluster of eight machines with 30 GB per machine, eight processors, et cetera. And what it basically does — and I hope it scales well — is what my own quad-core machine is now doing in 1.8 seconds, it does in 0.5 seconds with those eight machines. And what you cannot see at the moment, but it uses less than 1 GB of memory and runs this fast on my own laptop. So that's basically what I thought was maybe interesting for more people. And it's downloadable. 
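As a rough illustration of what that looks like from Python, here is a sketch. The data and column names are invented, and the exact bquery groupby signature is an assumption on my part, so treat this as pseudocode and check the project's README.

import pandas as pd
from bquery import ctable   # bquery builds on the bcolz compressed ctable

df = pd.DataFrame({
    "store":   ["north", "north", "south", "south"],
    "product": ["apples", "pears", "apples", "pears"],
    "revenue": [10.0, 12.5, 7.5, 9.0],
})

# write the data as a compressed, chunked container on disk (bcolz format)
ct = ctable.fromdataframe(df, rootdir="sales.bcolz")

# aggregate directly against the on-disk container instead of loading it all into RAM
result = ct.groupby(["store", "product"], ["revenue"])
print(result)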
Only, if you have pip 8 it won't work at the moment, for some strange reason. If you have pip 7 you can still install it, or otherwise from the source, from GitHub directly. We're still working on it, but of course feel free to look at it and also join the project if you want to. So that's basically my short introduction of bquery and what it does. Any questions? APPLAUSE Thanks very much. Next is Facundo. How's everyone enjoying the conference so far? APPLAUSE It's cheap, but it works, man. Like a free whoop. Just do it again! APPLAUSE Are you ready to go, Facundo? Without further ado, Facundo — let's give him a big hand. APPLAUSE Thank you. So raise your hand if you use virtualenv. Right, a lot of people. Virtualenvs are awesome, right? I mean, you have a lot of benefits: you can install whatever you want without dirtying your installation, and you can reproduce environments from one computer to the other, et cetera, et cetera, et cetera. I don't want to sell you virtualenv. What is the problem with virtualenv? We need to manage them manually. So it's not really a problem when you're working in a big project, because you enter the virtualenv and you are there eight hours; but what do you do for the scripts? You have 35 scripts on your computer, and do you install the dependencies in one virtualenv for all the 35 scripts, or do you have 35 virtualenvs, one for each script, and you have to remember which virtualenv it was before executing the script, and remember to enter the virtualenv before executing the script, getting out of there, entering, et cetera, et cetera. What do you do in that case? It's a problem. It's a mess. So, fades is here to help us. With fades, it's very simple. You only indicate the dependencies; that is all you care about. You care about your script and its dependencies. You only need to tell it the dependencies, execute it, and nothing else. How do you execute it? Well, very simple. As any other script: calling it with fades, or putting fades in the shebang, or even as a Python module if you want, you can call it and execute your script, and you only need to specify the dependencies. How do you specify the dependencies? Well, it's very easy. For example, if you want to run a script in a virtualenv that has requests, you just call fades -d requests, and fades will execute your script in a virtualenv that only has requests installed. If you call this on a clean machine, it will create a virtualenv, it will install requests — calling pip, et cetera — and execute the script there. The second time you do it, it will be super fast, because it already has a virtualenv with requests installed. If you want to execute another script with only requests, it already has the virtualenv; you don't need to care about anything. If you call fades -d requests, your dependency, and don't specify a script, it opens an interactive interpreter for you. This is the quickest and easiest way to try a new library. Did you try this library? No, I didn't. Oh, fades -d that-library, and you have an interpreter that lives inside the virtualenv with that dependency installed, and you just try it. You don't need to do anything else. If you like to use IPython, you just tell it to use IPython. If you want to use any specific Python version, you just tell it to use that specific Python version. If you have several dependencies, you just pass -d several times. 
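A minimal sketch of that workflow — the package and URL here are just examples, while the -d flag and the interactive mode are as described above:

# demo.py -- run it with:  fades -d requests demo.py
# fades creates (or reuses) a virtualenv containing requests and runs the script in it.
# With no script given,  fades -d requests  drops you into an interpreter in that virtualenv.
import requests

print(requests.get("https://httpbin.org/get").status_code)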
If you have to pin one of the dependencies, or several of the dependencies, to a specific version, or greater than, or less than, whatever, you just specify it. If you have a requirements.txt, you tell it with -r. So those were the simple ones. Let's level up. The simplest way to work with a script is this one. You have your script, you put fades in the shebang, and how do you specify the dependencies? See that magic comment, fades, on the import. Fades will execute this script in a virtualenv that only has requests installed, because you tell it through a comment. You can give the comment in several ways. You can put it in the docstring, you can put it in the fades shebang line, etc. There are several ways. For example, another complicated task: to start a Django project with a version of Django that you don't have installed on your system. How do you do that? Well, with fades it's easy, because you call fades, telling it that the dependency is Django 1.8, and with -x you execute something inside the virtualenv. You are executing the django-admin of the version from the virtualenv, and this way you can start the Django project with that specific version. For example, if you have a specific pip requirement, because you have a proxy or whatever, you can also tell fades how to teach pip to work. Keep calm and use fades. You can install it very easily. It's already in Debian, it's already in Ubuntu, it's already in Arch; you install it and use it, and no more manual virtualenv handling. Thank you very much. APPLAUSE All right, come on in. There's still a few people coming in, everyone, so make some space at the end of the rows. If you spot there's space in the middle of the row, everyone move two seats towards the spaces. Make it so that people can actually come in, make yourselves helpful there. There you go. That's one space liberated there, that's fantastic. There you go, there's loads of people sitting down at the back of the room. They could easily be given a bit more space. Oh my God, the power! Everyone is doing what I say! Ha ha ha ha ha! There you go. All right, next is Paul Hallett. Paul, you're here somewhere. There you go. Yep, that's good. And then after Paul is Mr. Schulzer. Fajal. Yep. And then after that we're going to have Ben Foxall. Ben, did you arrive? Good. All right, come a little bit closer, closer forwards. There you go. Are you ready to go, Paul? I am. In that case, give him a hand. Hi, everyone. So my name is Paul Hallett. Does anyone here use Django? Yep. Does anyone know that the Django Software Foundation has a Code of Conduct Committee? A few people. Okay, so I'm a member of the Django Software Foundation Code of Conduct Committee. I'll call it just the Code of Conduct Committee from now on. And I have an announcement to share from Django and the Code of Conduct Committee today. But for those of you who didn't put your hand up, let me tell you a little bit about what we do. Pretty simply, we are responsible for making sure the Django community provides a harassment-free experience for everyone involved. We have members that are global and representative of you, the community. We've got people from different backgrounds, different experiences. And as I said, we have something quite big to announce today. And that's our Code of Conduct documentation. This isn't the Django Code of Conduct. I'm sure you're all aware, if you've ever been to a Django conference, that we do have a Code of Conduct.
But this is documentation on how we actually deal with the processes of receiving issues and making sure that the Django community is friendly and safe and inviting. Before I get into why we decided to open source this, I want to tell you a little bit about what's involved. The first section is about membership: how we elect members, how members can maintain their status. We are trying to promote a non-burnout style of membership. So people opt in and they say, I'd love to be part of this for six months. And then after that, they have no obligation to stay. We also have the most important part, which is how we handle reports. We take each report as seriously as the next one and the previous one. And we have processes for exactly how we go through this. So people who feel like they're uncertain about making reports can see the exact processes we go through, to make sure that we handle those issues seriously and justly and fairly. We also mention how we keep records of reports. There's no point in us actually running this committee if we aren't able to collaborate with Django organizers and other organizers of Python events, if we don't keep a record of those reports. And that includes some processes around anonymization as well. You may not realize this. Not many people put their hand up that they actually knew we had a committee. So very few people even realize this, that we actually work with the organizers of every single official Django conference to make sure that we share reports between them. So they share with us the attendee list and if there are any reports we're able to share them with other communities. And finally you can also find our transparency and statistics there. These are the actual figures of the number of reports we've received over the past two years. As you can see we've gone from 11 two years ago to seven so far this year. So hopefully we'll be able to bring that down and ultimately make ourselves redundant and make the community friendly and inviting. The reason we did this, there are three reasons. The first one is obviously to help hold ourselves accountable to you, to make sure that we're providing this safe environment for people, and also to let you understand how we make these decisions. The second reason, which is closest to my heart, is to help other tech communities who have not been able to adopt a code of conduct, to show them that you can do this safely, you can do it fairly, that we don't witch hunt. This is actually done through a very democratic process. And finally, to get feedback from you, to understand if we're doing a good enough job, if we're making it well understood, and generally to get your feedback on how we can run this better. So a big shout out to these people. The reason my name is here isn't because I'm pretentious; actually it's because Ola Sitarska wrote these slides, and she's announcing this exact same lightning talk right now in Philadelphia in the USA at DjangoCon US. So we're launching this right this second. So it's there, go and have a look, give us your feedback and I look forward to hearing it. Thank you. Thank you. All right. Who wants to hear a story about a squirrel? I thought we could just do jokes but I haven't actually got any new ones since last year. So instead I've got a funny story about a squirrel that I'm going to try and read in an entertaining manner. It starts off, and we're not going to have it all in one go.
I never dreamed slowly cruising on a motorcycle through a residential neighborhood could be so incredibly dangerous. Little did I suspect. That's your intro. Let's move on to Star Wars word clouds. Hi everyone, I'm Rafael. This is a fun project that I recently worked on. Why? Just because I like Star Wars and I like to play around with data, so I thought it was a good idea. I also think that these very small projects are a neat way to just explore basic data science concepts and the capabilities of Python libraries. So in particular, I guess with this I just want to encourage maybe newcomers to Python or to data science to do something similar. In particular, I wanted to test this library, which is the wordcloud library. I don't know if you know it. This is from Andreas Müller. If you don't, then check it out. So the first step that I did was of course to get some data. For this, I went to this web page, which is basically a database which contains movie scripts. This is a partial screenshot of how it looks. Of course, what we need to do here is extract the text that is actually being spoken by the people, because we want to create word clouds for Star Wars characters. So with a bit of introspection of the HTML and the help of Beautiful Soup, we can just parse the HTML. This is example code for Episode IV, A New Hope. So we basically get the HTML using requests, parse it using Beautiful Soup, and then we iterate over it, extract the quote for each character that is being spoken, and then we end up with a dictionary where basically the keys are the character names and the values are the strings of the text that they spoke. I'm not going to go into details here. This is an example output of Darth Vader from episode four. As you see, this is raw text. So the next step is basically to go ahead and clean this, by doing things like removing punctuation, removing stop words, and, for instance, lowercasing it. You do this for each character across all episodes and then you end up with something like this, which is a nice string of just words. Again, the example of Darth Vader. There are bits and pieces that you still need to do, like merging dictionary entries that belong to the same character, like Luke and Luke's voice or C3PO and 3PO, stuff like that. And this is the top list of the characters by word counts. And then we're ready to create our word clouds. For that, it's as easy as instantiating this WordCloud class with basically a list of words and their frequencies. And so let's play a game. Who's this? Come on. That's Luke, of course. Who's this? That's easy. 3PO, exactly. What about that one? Someone said Yoda, right? That's Yoda. Well, now an easy one, easy one. This one. Right? Han Solo. Okay, last one. Who's this? Of course, our all-time favorite character, Star Wars. Exactly. So what we can do is basically, we can also pass, for instance, image masks to this library and create more beautiful stuff. So we come up with something like this. You recognize this. Here's Yoda. This is Padme. That is Obi-Wan. And of course, Darth Vader. Of course, this is just the beginning. There's a lot more you can do with this data, for instance: use TF-IDF instead of just word frequencies; apply machine learning, you know, try to classify maybe dark and bright side, whatever; do some network analysis of the interconnectivity of the different characters in the movies. I am New Cortex on GitHub. If you want to check the notebook, it's here.
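A minimal sketch of the word cloud step Rafael describes, using Andreas Müller's wordcloud package. The dialogue string is a toy stand-in, and depending on the wordcloud version generate_from_frequencies accepts either a dict or a list of (word, count) pairs:

```python
from collections import Counter
from wordcloud import WordCloud

# Toy stand-in for one character's cleaned dialogue
dialogue = "the force is strong the dark side is strong fear leads to anger"
frequencies = Counter(dialogue.split())

# Build the cloud from word -> frequency pairs and save it as an image
cloud = WordCloud(width=800, height=400, background_color="black")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("character_wordcloud.png")
```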
Star it, fork it, clone it, run it, whatever you like. And thanks a lot for your attention. APPLAUSE All right, Ben Foxall next, and then Christiane Stefanescu. Christiane, are you here? All right, you're ready to run up. After Christiane, we'll have Jonathan Slenders. Jonathan, are you here? Very good. I was on Bryce Street, a very nice neighborhood with perfect lawns and slow traffic. As I passed an oncoming car, a brown furry missile shot out from under it and tumbled to a stop immediately in front of me. It was a squirrel. And it must have been trying to run across the road when it encountered the car. I really was not going very fast and there was no time to brake or avoid it. It was that close. I hate to run over animals and I really hate it on a motorcycle. But a squirrel should pose no danger to me. I barely had time to brace for the impact. Animal lovers, never fear. Squirrels, I discovered, can take care of themselves. Are you ready? Yeah. Give him a big hand. Thank you. Thanks for that intro. That was perfect. Cool. So I built a little app over the last couple of days and I thought I'd show you it here. I actually made the last commit seven minutes ago. So I've not seen if that works, but hopefully it will. So this site is a kind of multi-device site. So what I wanted to do is: get out your phones and stuff and visit this URL, which is eupy16.herokuapp.com. And we can see this device count going up. And this is a tiny little Flask app that's using our service, which I won't talk about just now. So we're getting 21-ish devices, and that's going up and down. It's kind of weird. And what you should see is this kind of grayed-out logo. And what this tool that I've hacked together is, is basically a kind of logo designer, I think. So what we can do is: each of our devices is connected to this Flask app and through our infrastructure, and I can send out messages. So basically let's choose the first color of this logo. So let's choose green. Woo! Cool. So that kind of works, which is good. I wasn't totally sure about that. And yeah, we can choose these other colors as well. So maybe green and blue. And you should see that updating on your phones and laptops in real time. I've put some buttons down the bottom, which have emoji in them. So the first emoji chooses a random color scheme for those two things. So you should see a different color scheme from your neighbors. And I can press this a few times and you can get some new ideas for your logos, right? This speaker button: why don't you just put up the volume on your devices? Cool. You'll get more of that. Okay, so this is using the Web Audio API to synthesize a note that's randomly chosen on your device. Okay, slightly linked to the last talk, we're going to make this a bit more real. We're going to use these two, choosing colors and sounds, to select a winner. Okay? And the thing you're going to win is this BB-8. Woo! Cool. So what's going to happen is everyone's phones are going to change colors and they're going to play notes and they're going to get a bit more kind of chaotic. And then eventually one person will have the proper logo colors, everyone else will have gray, and that person will have won. So if you hold up your phones and turn them to the center of the room, that would make it kind of cool, right? So is everyone ready for this? Where are you? Wherever. Cool. So let's start this and...
Is this going to finish before the heat death of the universe? Have you checked? It'll be like 40 seconds. Cool, so it's kind of in time. What is happening? What is going on? It is a bit long. Anyone? Winner? Yeah! Woo! Cool. So thanks a lot for that, and we're going to have a session tomorrow at lunchtime if you're interested, to show how we built this, or whatever, have lunch. Thank you very much. Thank you! Christian? Thank you. One of the organizers there, did you want to say something, Fabio? Did you want to say something? I saw you creeping up, no? Okay. Oh, you have a talk? That's even better. There you go. Are you ready to go, Christian? Yeah. Then take it away! Woo! First of all, sorry, I'm super nervous. I usually don't speak in front of more than, you know, three people. My name is on the slides and this is Nameko. Nameko is a Japanese word. I looked it up. It's basically a fungus which you use to make miso soup. So I'm sorry, I pretty much gave it all away. I'm probably, you know, going to talk about the microservice framework. Yeah, so a bit of background: we had a big change in our company in which we decided to go for, you know, the fancy stuff you build today, an API and some microservices. And then we were looking around for a fitting framework to build our microservices, and we found Nameko. And although I'm quite nervous still, I hope I can convey some of the excitement I had when I found Nameko. So Nameko can do RPC or event-based kinds of communication; I'm going into these patterns in more depth soon, if you don't know what that means. It uses Eventlet under the hood for async worker handling. It uses dependency injection, and it uses it for so-called dependency providers. These are all sorts of things you can plug into your Nameko services, like logging; think of it like some sort of middleware. And then it uses extensions, and it uses extensions, for instance, to define the protocols to transport messages. So let's look at RPC calls, and this is the main thing we're using Nameko for. Okay, so I already said microservices. So in the grand scheme of things, there's a method somewhere out there on the Internet, probably in some sort of service, which I want to call. And I'd like my code to be nice and readable, and this is actually doable with Nameko. So this could be the code you could be writing tomorrow. Under the hood, in the middle layer, let's think of Nameko as, or actually Nameko does this: it serializes your call to JSON, sends it over RabbitMQ using its RabbitMQ protocol implementation, calls the service, then creates a new queue which will hold the result, and returns it. So that's it for RPC. Now let's look at the code, and this is the part I like the most, I think. This is just a class and this is just a method. And I think the only thing that's kind of unusual, besides the import of some fungus, is the decorator @rpc, which transforms this plain Python method into something that actually works remotely. I was thinking about the parallel to Java, but I quickly dropped it: Java developers have POJOs, so-called plain old Java objects, but I think this doesn't work for Python objects. So anyway, it's a simple Python class. And if you're like me and you like testing, you probably like this as well. Look at how nicely the method doesn't contain anything about the transport. Okay, now for the RPC part, I also like the tooling a lot.
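For reference, a minimal sketch of the RPC pattern Christian is describing; the service and method names are invented, and the run commands assume a local RabbitMQ with default settings:

```python
from nameko.rpc import rpc

class GreetingService:
    # The name other services and the nameko shell use to reach this service
    name = "greeting_service"

    @rpc
    def hello(self, who):
        # Plain Python: nothing in here knows about RabbitMQ or JSON serialization
        return "Hello, {}!".format(who)

# Run it with:            nameko run greetings      (if this file is greetings.py)
# Call it from a shell:   nameko shell, then n.rpc.greeting_service.hello("EuroPython")
```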
In the upper left corner, we have a helper I can call from the command line which will run my service. It will bring it up and I can start calling it from, and that's the part in the bottom right, a shell. So this is basically a Python shell which is configured to talk to said service. So interaction is quite fast and you can test stuff, and you can also use this in production to talk to your services in production. Now, I said events are also possible. So for the scenario, I have an event handler somewhere in my service. I can write this kind of code: I instantiate an event dispatcher and send it a hello. And I can also pass a payload. And then, yeah, sorry, there's no response, of course. I'm just sending the event. Let's look at the code again. Again, the import is missing also. But the decorator is the only thing that distinguishes it from a simple Python class. There are three types of event handlers. I'm going to put them all there so you can see them. So maybe singleton is the easiest one: you want exactly one delivery. Broadcast means you reach all. And a service pool is actually just one out of a cluster of services. It can do HTTP, but I wouldn't. Thank you very much. That was Christian. I've got Jonathan coming up on stage. Fantastic. After that, it will be Mathias Rav. Inches before the impact, the squirrel flipped to his feet. He was standing on his hind legs and facing my oncoming Victory Cross Country Tour with steadfast resolve. His mouth opened at the last possible moment. He screamed and leaped. I'm pretty sure the scream was squirrel for Banzai, or maybe die you gravy sucking heathen scum. The leap was nothing short of spectacular. He shot straight up, flew over my windshield and impacted me squarely in the chest. Instantly he set upon me. If I did not know better, I would have sworn he brought 20 of his little buddies along for the attack. Snarling, hissing and tearing at my clothes, he was a frenzy of activity. A frenzy of activity very much like Jonathan, who will give us a short lightning talk about Prompt Toolkit. Thank you. So this is a very short presentation about Prompt Toolkit, which is a library for building command-line applications in Python with a very strong focus on usability. So for that we go back to the normal Python shell, just to have a short demo. So let's start a Python shell and do some Python coding. So for instance we can do an if True and then print Hello World, just as a demonstration. Right? Everyone has done this. Now the problem is, at the point we have to execute this again, what do we have to do? We have to press the up arrow three times, like this. We have to press the up arrow again three times. There we go. Once more and then we can execute it. This is a bit annoying. Even more annoying is if we're at this point and we have to insert a line: we cannot insert a line right below the if True. The only thing we can do at this point is press Ctrl-C, like this, to interrupt this, fetch the lines again from the history and, while fetching them, insert the lines. So that's really annoying. Now coming back to Prompt Toolkit, that's a library I've been working on for the last three years. And about one year and a half ago I released a tool called ptpython. So it is a Python shell built on top of Prompt Toolkit. Now let's do the same thing here. We do an if True, we print hello like this, we print world. And you see, as I type, I have syntax highlighting. And also I have very nice code completion.
Now, if we have to execute this again, the only thing I have to do is press the up arrow only once, like this. There we go. Now, even more, we have multi-line editing. That means we can navigate in two directions. We can move the arrows up and insert lines in between here. And we can just execute the whole block in one keystroke. So Prompt Toolkit is a library that implements those kinds of things for people who want to implement interactive applications at the command line. Prompt Toolkit does most of the readline functionality. So readline is the library that's used by the native Python shell. It's used by many command-line applications. And most of the readline functionality, like Vi key bindings, Emacs key bindings, reverse incremental search, all those kinds of things that you expect, were implemented in Prompt Toolkit. So we can search back in history and execute from the history. Coming back to my presentation, we have seen this. So lately I got in touch with the IPython core developers. We collaborated a lot. And this resulted in IPython 5. So maybe you have seen it. It was released a few weeks ago. And IPython 5 has a front end built on top of Prompt Toolkit. So that means that the functionality that you have just seen in ptpython, like the syntax highlighting, like the multi-line editing, is all present in the latest version of IPython 5, with Windows support, Mac support, and Linux support. Further, the only thing I haven't said yet is that we support bracketed paste. So that means if you're pasting a chunk of Python code, it's recognized as being a paste and it won't be executed yet. It's inserted as one block of Python code in a multi-line buffer so you can edit it before you execute it. It also means that the indentation will be kept like it was. And further, in the last year, many tools were created using Prompt Toolkit. So this is a list. The list keeps growing. Many people start creating tools on top of Prompt Toolkit. This is one of the last ones, HTTP Prompt, which is a combination of HTTPie and Prompt Toolkit, which is a nice tool to do HTTP get and post requests. These are a few others. There we see a few database clients, like pgcli and mycli, and AWS Shell to do interaction with Amazon. There are even two full-screen applications. So pyvim, for instance, which is an implementation of Vi in Python. It's more of a proof of concept, but if you want to play with it, it's fully functional. Not all the functionality of real Vi, but it is usable. And we have pymux, which is a clone of tmux in pure Python as well. That one is very usable. I use it all day as a replacement for my tmux. And every time I miss some functionality, I add it to pymux. So that's it. If you want to find me, you can find me on GitHub, on Twitter, here at the conference. As a reminder, we have the pip installs. So you can pip install ipython, pip install ptpython. Thank you. Thank you. Next is Mathias Rav. After Mathias, we've got Juan Luis Cano. Snarling, hissing and tearing at my clothes, he was a frenzy of activity. As I was dressed only in a light T-shirt, summer riding gloves and jeans, this was a bit of a cause for concern. The furry little tornado was doing quite some damage. Picture a large man on a huge sunset red touring bike, dressed in jeans, T-shirt and leather gloves, puttering along at maybe 25 miles an hour down a quiet residential street and in the fight of his life with a squirrel. And losing. Mathias, you ready?
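Before moving on, the core entry point of the Prompt Toolkit library Jonathan described fits in a couple of lines; a minimal sketch, assuming prompt_toolkit 1.x as released around that time:

```python
from prompt_toolkit import prompt

# A readline-style prompt with history, Emacs/Vi key bindings and completion hooks
line = prompt('say something> ')
print('You said:', line)
```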
Yeah. Hello, everyone. I'm going to tell you about a new feature of Python 3.6, which you may have heard of, maybe not. It's mostly for those of you who haven't heard of it. So a quick show of hands: how many are reading the Python 3.6 release notes draft? Okay. That's nice. So maybe half of you will learn something new now. It's called literal string interpolation and it's PEP 498. So suppose you have a function, a simple hello world function. You can use string interpolation to insert arguments into the stuff you're going to print. So it says hello world, hello Harry. You should probably know this. And the thing about string interpolation is, if you have an application where you're doing logging in the real world, maybe some of your log lines are really long and have several things to interpolate. Basically, we would like to be able to do the same thing as we can do in the shell and in Perl and in PHP. That is, insert variables inside the string literal and have them be inserted where they are. So my second example here uses the .format method, which looks like what we want. But again, the problem is that I have to specify the variables to interpolate next to the string literal and not inside of it. So the solution is the literal string interpolation feature of Python 3.6. And you basically put an f in front of the string literal, so it's called an f-string. And you can put any kind of Python expression that you want and it will just be evaluated at run time. So in this example, the name greeting will be taken from one of the arguments and target will be taken from the arguments. And we call the title string method to uppercase the word. So I started playing around with this a bit and I've made a small program that will automatically apply this transformation, so you can upgrade your code to use Python 3.6 as soon as it comes out, if that's what you fancy. So I took a real-world example from one of my machine learning hand-ins. I have some gradient descent algorithm I've implemented and I'm writing out how long into the method we are and which iteration it is and what the current cost is and blah, blah, blah. And basically I have a program that can automatically turn this into the f-string below, where you can see how we do number formatting just like the old format method of strings. And currently I have just integrated this with Vim, because that's my choice. You can blame me all you want. It works like this. You can select the text you want to transform in your editor and press the equals sign, and that does this transformation. And there you go. It has turned the print statement into something that uses f-strings and we can sort of tweak it from here. And that's all. If you're interested, the code is on GitHub. I'm Mortal on GitHub and you can go check it out. And it's a small project using the ast library that does a simple parse of your Python input file and simply looks for the percent formatting expressions. That's all. Thank you. And after Andrea is David MacIver. I grabbed for him with my left hand. After a few misses I finally managed to snag his tail. With all of my strength I flung the evil rodent off to the left of the bike, almost running into the right curb as I recoiled from the throw. That should have done it. The matter should have ended right there. It really should have. The squirrel could have sailed into one of the pristine, well-kept yards and gone on about his business, and I could have headed home. No one would have been the wiser. But this was no ordinary squirrel.
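The feature Mathias is showing fits in a few lines; this only runs on Python 3.6 or later, and the variable names are illustrative:

```python
name = "world"
greeting = "hello"

# PEP 498 literal string interpolation: expressions are evaluated inside the literal
print(f"{greeting.title()}, {name}!")        # -> Hello, world!

# Format specifications work just like str.format
iteration, cost = 7, 0.03141
print(f"iteration {iteration:3d}: cost = {cost:.3f}")
```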
This was not even an angry ordinary squirrel. This was an evil mutant attack squirrel of death. Twisted evil. He caught my gloved finger with one of his little paws and, with the force of my throw, swung around and, with a resounding thump and an amazing impact, landed squarely on my back and resumed his rather antisocial and extremely distracting activities. He also managed to take my left glove with him. The situation was not improved, not improved at all. A big hand. Thank you very much. This is going to be quick because I'm nervous as hell. Not because I don't usually speak in front of hundreds of people, but because I don't usually prepare slides in five minutes. My name is Juan Luis Cano. I'm the chair of the Python Spain non-profit and we organize the PyCon in Spain. But before getting into detail, let's put some context here. You have got a lot of information this week about the country that you are in now. And if we zoom to the southeast of the country, you can see this white thing over here, which is the only construction made by humans that is visible from space. And it's actually one of the biggest greenhouse cultivations in the world. Even more, we can see this Fort Bravo scenery that was used a lot in the 70s and 80s to film some spaghetti westerns. And if you zoom a little more, you can see Clint Eastwood over there when he was young. Well, we are celebrating PyConES this year in Almería, which is the city in the southeast that I was talking to you about before. This is going to be our fourth edition and we are aiming at around 400 attendees for this one. And what are you going to get if you come to PyConES? Well, you're going to get to see a very beautiful city with a lot of Arabic heritage, lots of very beautiful monuments, and also one of the most beautiful beaches in Spain. We are even celebrating it in October instead of November, so you can relax a bit and go to the beach, because it's going to be very good weather still. This is the other thing that you're going to get, because the food there is very, very good. Maybe not as sophisticated as here in the Basque country, but I can assure you that the quantity is going to... Yes. This is called the tapas thing here in the south. And you also get to know the wonderful Spanish community. The Spanish community is so friendly and we've been celebrating this PyConES, as I told you, for four years already. And it's always amazing to get to know these amazing people. This is a picture of the third edition, which was held last year in Valencia, again like 400 people or so. And this is the group picture at the end of the second edition in Zaragoza. So the call for papers is still open, so Clint Eastwood wants you to come. Thank you very much. All right, thank you. Andrea, you're next. Then David MacIver and then Anselm Lingnau. Anselm, are you here? Anselm, yeah. Okay, very good. How's our timing? We've got ten minutes. Quick announcement: if you haven't picked up your ticket yet for the social dinner and you want to go, you have to pick it up before 6.15. So the lightning talks are not going to run over; we'll finish at 6, you've then got 15 minutes to pick up your ticket from the registration desk, after which it's closed, and then you can't go to the social event. There you go. I think that might be booing. It would be this guy on his motorbike. His attacks were continuing and now I could not reach him. I was startled to say the least.
The combination of the force of the throw, only having one hand, the throttle hand, on the handlebars, and my jerking back unfortunately put a healthy twist through my right hand and into the throttle. Now a healthy twist on the throttle of a Victory Cross Country Tour can have only one result. Torque. That is what the Victory Cross Country Tour is made for, and she is very, very good at it. The engine roared and the front wheel left the pavement. The squirrel screamed in anger. Andrea! Hi everyone. I'm Andrea Crotti and I work for a company in London called Iwaka. And one of the projects I worked on in the last few months is a migration of a big code base, a Django project, from MySQL to Postgres. So I just want to share with you a little bit of what we encountered and what the problems were, and try to help you not make the same mistakes. So, why, first? Well, I think this image is amazing. That's enough information. I don't take credit for that, but I think it's great. The reality is, well, the original reason they chose MySQL at the beginning was kind of a coincidence, because they had to go live and the only thing installed on the CTO's machine was MySQL. So they decided to go for MySQL. So after a few years we already wanted to switch to Postgres, but then we found out about an actual real problem we had, with a query which we need to do all the time, everywhere, which involves a self-join and some other ugly stuff with MySQL and has to be more or less done in raw SQL, or with some crazy hack with the ORM. However, Postgres has this nice thing called DISTINCT ON, which MySQL doesn't have, which allows you to write this query as a very simple non-nested thing, which can also be written in the ORM in just one line. So amazing. So that was kind of the final reason to do the actual switch. So how do you actually do that? Well, all you have to do is change the settings. The data will be moved automatically by Django. And then... That's it. Well, not really. I think that didn't work. So just a few numbers to understand how big the project was. It's a big project. So 190,000 lines of Python code, more than 100 apps, 383 tables. That's great. And at the moment we have around 3,000 tests which run quite fast, but there are a lot of tests. So the plan, which is in progress actually, not finished but almost there, is to first adapt all the code, then actually do the data migration itself, and then we're done. The data migration actually was not as easy as I thought, although I found this project called pgloader which is great. It's fast. It's smart and everything. The only problem is that you don't really get any live replication of the data, so you kind of have to do the switch in one go, which is kind of scary. And that's not a very nice thing. But yeah, that means we have to try it many times first to be sure it works, and only when we are sure everything is fine do we do it in one go. pgloader however is really a nice tool and we also had great communication with the author, who helped us a lot, and we also managed to convince the company to sponsor him to do some features we needed. So that's also a thing. Another problem we had was that we have a few very big tables, just a few of them, and we actually don't need to port them at all. We will move them somewhere else completely. So to handle that situation we just drop the foreign keys, change the queries and then use a database router to keep all these things where they are now, and we will handle them later.
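Assuming the Postgres feature Andrea refers to is DISTINCT ON, which Django exposes as .distinct() with field names on Postgres only, the "one line in the ORM" version looks roughly like this; the model and field names are hypothetical:

```python
from myapp.models import Order  # hypothetical model

# Latest order per customer, without the MySQL-era self-join:
# SELECT DISTINCT ON (customer_id) ... ORDER BY customer_id, created_at DESC
latest_orders = (
    Order.objects
    .order_by("customer_id", "-created_at")
    .distinct("customer_id")  # field arguments to distinct() are not supported on MySQL
)
```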
So that's another option you can use. The changes in the code: first of all, get everything running on your CI, like Jenkins or whatever; make sure that the new code doesn't break Postgres anymore; look for all the places where there are raw queries that you want to fix, and test them; and then you may have to do a lot of manual testing and check more things. So one thing which I had to do, for example, since we had a lot of APIs which were not tested at all: I had to write an IPython notebook to run three parallel Django instances connecting to three different servers with different configurations, and then with this notebook connect to all of them and do the comparison of the results, doing some approximation of the floating point numbers we get back. So there was a lot of work, but that's mostly because there were no tests for this thing. So yeah. So these are just the tips of the things you probably should do. Really use migrations for everything; luckily we do that, we don't have anything in the database schema which is not in a migration, and that helps a lot. Test all your code and all your queries, because that will make your life a lot easier when you have to do anything like this. Never rely on implicit ordering of databases. That's really a bad thing, but because MySQL always re-orders on ID anyway everything seems fine; Postgres doesn't do that. And then try to make your Django apps independent from each other, so they don't import models all over the place and end up like a tangled mess, and then split your monolith as soon as possible, because that would really help with anything. Yeah, conclusions: this is a sentence I stole from this morning. Yeah, and that's it. Thank you. Thank you Andrea. Our final talk today will be by David MacIver. David MacIver, that's the next one. Anybody who did sign up and didn't quite make the slot today, please come and sign up again tomorrow; if you're keen you'll get there super early and then we'll know that you desperately wanted to do your lightning talk. That's about that. Now where were we? We had a man there, we had a man screaming, the squirrel screamed in anger, the Victory Cross Country Tour screamed in ecstasy, I screamed in, well, I just plain screamed. Now picture a man on a huge sunset red touring bike, dressed in jeans, a slightly squirrel-torn T-shirt, wearing only one leather glove, and roaring at maybe 50 miles an hour and rapidly accelerating down a quiet residential street. I'm not a squirrel, sorry. I'm David MacIver. I just want to tell you quickly about the testing library that I write, called Hypothesis. I'm afraid you've already missed the training course that was yesterday morning, but hopefully I can intrigue you enough to check it out anyway. The basic idea of Hypothesis is that it makes your tests smarter and less work to write, mostly less work to write, by adding a source of generated data to them, so that it can try probing your test code with a whole bunch of different values and find the one that you inevitably forgot, because we all write tests when we're tired. This is what it looks like: it's a simple decorator library; it works with any testing framework you like; it's not a test runner. I use pytest; it works with unittest; it works with nose, whatever. And this is an example from when we were testing some Mercurial code. Mercurial represents a bunch of stuff internally as UTF-8b, which is a way of, for some reason, taking arbitrary binary data and turning it into valid Unicode.
I don't know how widely used this one is. I want to claim that this is a great bug, because Mercurial has been in production and widely used for about 10 years, but I suspect the reality is that this is in a relatively small corner of it. This is fixed now, by the way; we found a bunch more bugs in the same code once they fixed this one. It's also found bugs in the Python standard library. This is from the new statistics module in Python 3, one of the many nice things you get when you finally get around to switching to Python 3, and it even works now. It didn't. To be fair, this is a horrible thing to do to any statistics module. In this case it was calculating the mean, and it doesn't work well with very large numbers. It works with most of the things you're going to want to use. This is a test using numpy; this one's not actually about numpy, it's just floating point numbers being awful again. And so if you're doing scientific Python or anything with numpy, it will slot right into your testing. I'm sure you're testing your scientific code. It works with Django too. It works really well: it will automatically take a look at your models and just go, oh, you need these types, I know how to generate those, here, let me generate a model for you from scratch. In this example we're overriding a bunch of things, but you don't have to do that; the out-of-the-box generation is pretty good. This one won't make sense without the full example, but basically, in this case it tries to add the same user to a project twice and the code didn't expect that. This is one I don't have time to go into properly, but one of the really cool Hypothesis features is that you can give it a complete model of your API and tell it what should work. And it generates random programs against your API and eventually finds one that breaks. This one's a bit experimental and slightly harder to use than the rest, but it's pretty cool when it works for you. That's more or less all I'm going to say. I hope you check it out; the website is hypothesis.works and it's got a whole bunch of introductory articles. It really is quite straightforward to get started, and we've got a good community, and if I'm not around they're always happy to answer questions as well. Thanks again. Thank you very much everyone. That is us finished, actually properly on time. Thanks to each and every one of you for coming to the lightning talks, for proposing lightning talks, for being speakers, for paying the registration fees, for being here for free, for going out in the evening, for having dinner, for having lunch. See you all tomorrow. Let's have another several days of conferencing. Everything is going to be brilliant. Hooray, good night. Oh, and I'll post the link to that squirrel story, which I was about 20% of the way through. It's going to be better read in your own time.
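To make the Hypothesis idea concrete, here is a minimal property-based test of the round-trip kind David describes; the encode/decode pair is a toy stand-in so the snippet runs on its own under pytest:

```python
from hypothesis import given, strategies as st

def encode(data):
    # Toy encoder, only here so the example is self-contained
    return data.hex()

def decode(text):
    return bytes.fromhex(text)

@given(st.binary())
def test_decode_inverts_encode(payload):
    # Hypothesis generates many byte strings, including the edge cases we forget
    assert decode(encode(payload)) == payload
```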
Various speakers - Lightning Talks, presented by Harry Percival
10.5446/21081 (DOI)
Hello, everyone. I'm happy to introduce Adam Dangoor. He's a Python developer coming from Bristol, and he will present a talk about another pair of eyes: reviewing code well. Thank you. Thank you, everyone. Hello, I'm Adam. And I'm going to talk about code review. But why am I talking about code review? Well, the inspiration for this talk is really a bunch of people I used to work with who are mostly contributors to the Twisted project. And I was really lucky. I got to experience the really rigorous approach to review that's also present in Twisted. And I feel that I've become a much better programmer thanks to that. And I've also had some really bad code review experiences, the kind of experiences that make you not want to go into work the next day. And I never, ever want to make someone feel like that. But when it's your role to judge someone else's work, it's really easy to upset them if you're not careful. And I guess that some of you have experience with code review, maybe even most, because it's becoming a fundamental part of the software engineering process. But maybe because of that, it's often seen as boring grunt work and not a skill that you can hone and get better at. But at my job, I spend almost a third as much time reviewing code as I do writing it. But when I go to programming forums or Twitter or read blog posts about how to become a better programmer, I see very, very little about how to become a better reviewer. And when a programmer starts out, they're often paired with a mentor to teach them how to code or given kind of like easy-fix bugs that they can get trained on. But we're really expected to jump in at the deep end and know how to review code without ever having been taught. So today, I'm going to talk about these points: what code review is, how I review code, some common pitfalls and traps that people fall into when they're reviewing, tools which can help make your life easier, and how code review can make you a better programmer. So what is code review? It's a practice where someone looks over someone else's code and they look for things which can be improved. And we usually do this before merging code because we want to catch bugs as early as possible. Yes, we've got automated tests. But code review can sometimes be the final gatekeeper before code hits the real world. And we also use it to catch non-bug defects which can harm the project, such as performance issues or maintainability problems or something like that. But code review isn't just about making the one change set that we're reviewing better. It's about making all of the software that we ship better. And it's important to think about how we can use code review to improve ourselves and the contributors that we work with. And I saw a talk last year at the DOX conference by Paul Rowland, who I think is also at EuroPython. And he was talking about how we should stop trying to be code ninjas and rock stars or all those other kinds of things that you see on job posts. And we should start trying to be code nurses and code tree surgeons. And I kind of see the reviewer's role as just like that. It's not as thrilling as the ninja feature author shipping something at midnight, but it's really important to the health of the project. And all of this happens within the context of the project's goals, which usually include shipping code fast, or at least fast enough. So let's back up and go over quickly how we get to the review stage. And in the last few companies I've worked at, it goes something like this.
Eleanor, our programmer, is new to the project. She's brand new, and she's looking to make her first contribution. So she picks a task from the task tracker. I'm using GitHub here, but it could be Jira, Trello, whatever you use. And this one is written by me. The name of the user's latest employer needs to be shown in our app's new prototype feature, user profiles. So she creates a branch so that she can make changes to the code without worrying about breaking it. And using branches for development is a great way to create a low-risk environment for people to experiment in. And it lets a reviewer easily look at the proposed change, just the difference between what's been written and what's currently there. And so Eleanor starts searching around the code, and she finds that there's already a method to get date-sorted employment history. Great. And there's already a function that you can see here that returns a dictionary for a user's profile that's later handled by a view layer and shown in the app. So she adds some new code that she thinks might work. You can see those two new lines. And it takes the date-sorted employment history, and it displays the first item of that list in a new field called Latest Employer. And then she runs the application locally, and she checks her own profile, and it shows her Latest Employer, Bill Bowink. So we know that this works. Eleanor has seen that it works. Do we need to spend time reviewing it? Well, in my opinion, all code that's written as part of a team should be reviewed. I've heard objections to this, particularly when the change set is trivial or if it's absolutely urgent. And I'm going to propose that very, very little of what we do is actually trivial. Programming is very hard, at least for me. And authors have an inherent bias towards thinking that their code is good. Even if we waste a few minutes reviewing a change that really is trivial, we save hours later, debugging the bug that was shipped when someone thought their code was really simple, but it actually had a typo. And even if we're absolutely certain that the code can't break, a review means that at least two people know where this new code is. And if the code is really urgent to ship, and there really isn't someone around to review it, maybe it is worth merging. Maybe you've shipped a data-destroying bug, and you really need to revert it right now. But at the very least, I'd recommend that you do a post-merge review, whatever that is. I won't go into it. So back to Eleanor. She submits her code for review. And she's nervous, because it's the first contribution to the project that she's made. But of course, she doesn't mention that, and she puts the code up for review. So now we've got to decide who does the review. Who gets permission to merge code is a whole problem that I won't go into, but it's got to be someone with that permission. But at least where I've worked, anyone can do it. Anyone can review code and merge it, and it's just like a trust-based system. And as I mentioned, as programmers, we're biased towards thinking that our code is working. But we're also biased towards thinking that our code is readable. Our comments are readable. Our variable names are understandable. So the ideal reviewer is someone who has had no part in writing the code. And if we choose someone who's recently worked on the changed files, then they're likely to have really great context, which will help them find a high number of bugs.
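Going back to Eleanor's change for a moment, here is a hypothetical sketch of the kind of two-line addition being described; the function and method names are invented, and the snippet deliberately keeps the weakness that comes up later in the talk:

```python
def user_profile(user):
    """Build the dictionary that the view layer renders as a profile."""
    employers = user.get_employers_by_date()   # hypothetical existing helper, newest first
    return {
        "Twitter Handle": user.twitter_handle,
        # Eleanor's new field: show the most recent employer
        "Latest Employer": employers[0],        # IndexError if the user has no employment history
    }
```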
And according to a Microsoft study that I read, the most useful review comments, so I guess the most bugs found in most cases, come from people who've actually previously reviewed the modified files. But when we choose people who are recent authors or recent reviewers, that conflicts with our goal to share knowledge with the team. And that might be OK, but it's at least something to consider. But a good workaround for that problem is when you allow developers who are unfamiliar with the code to do a review, but you don't designate them as the gatekeeper. You have someone else have to say, this looks good to me. Let's merge it. In fact, when I join a team, I love to jump straight into reviewing as much code as possible. And when there's no pressure of knowing that I have to take responsibility, I'm not the one saying it has to look good, someone else will do that, then it's a great way to learn how things are done. It teaches you, let's say, about a project's quality bar and standards and how the process works from writing code all the way through to shipping it. So say I want to review Eleanor's changes. This is how I'll go about it. I start by reading the spec. Without reading the spec, there's no way that I can know if the code meets all the requirements. Hopefully, the spec's in an issue tracker. And it has descriptions and examples of how the new functionality might be used or how to reproduce the bug if there's a bug that we're fixing. And then I check the CI, whatever that is, whatever tool you use. And I hope that it's passing, but sometimes we know that CI isn't always passing. So at least I check that there are no new issues. And CI really should be a signal of whether something that was already there is broken. So it's good to check that first. And then next, I look for evidence that the spec's been implemented. And ideally, that means looking at the test changes. If there's a bug to be fixed, I want to see that there's a passing test which would have failed if the bug were present. And if there's a new feature, then I want to see tests that cover the whole spec. And if it seems like the spec is ambiguous, and I've interpreted it differently to the author, then this is a good time to start a discussion about what we want. That could be with the author or whoever's responsible for shipping the product, maybe the product owner or project manager, whoever it is in your organization. And then I look at the implementation. I think about whether it has functionality that isn't tested. And that's important because even if the code works now, we want it to be safe from regressions. And I kind of think about which missing test cases could convince me of that. And then I think about whether there's any risk that the changes could break existing functionality. Yes, we've got CI, but it's not always perfect. And it's often a good thing to think about. And then I think about, is the new code going to be easy to maintain? And that can range from things like: are the variable names understandable? But also, I have to think about what the structure of the code is. Is it really branchy? Does it have lots of side effects? Is it idempotent? Is new code in the place that I'd expect to find it? And these are all the things that you think about when you're writing the code. So you get a lot of the advantages of pair programming, but it's asynchronous. And then I check that everything user-facing is documented. That might not be relevant in your project, but it is in mine.
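As an example of the kind of test evidence described above, a reviewer might want to see a regression test like the following, reusing the hypothetical user_profile sketch from a moment ago; it fails against the naive version and passes once the empty-history case is handled:

```python
class FakeUser:
    """Minimal stand-in for the real user model (hypothetical)."""
    twitter_handle = "@eleanor"

    def __init__(self, employers):
        self._employers = employers

    def get_employers_by_date(self):
        return self._employers

def test_profile_without_employment_history():
    # A user with no employers should still get a profile without crashing
    profile = user_profile(FakeUser(employers=[]))
    assert "Latest Employer" not in profile
```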
And also, at the same time, while I'm reviewing the code, I'm looking out for new techniques and patterns and tools that I don't know yet. And it's always really nice to say thanks to the author when you learn something from reading their code. And then I'll write a comment about anything that might be improved. And I'll ask questions and start a discussion about anything that I'm unsure about. And at the end, I'll give a conclusion. And usually, it's one of these options. I say, great, it looks good to me. I'll merge it. Or please make the requested changes. And then you can merge it. Or make the requested changes. And then resubmit for another review. And then sometimes, in the rare case, I've picked up a review that I'm not qualified for. And I ask that someone else looks at it. And I'd say that the first step to getting better at review, and all those previous steps I mentioned, is to take it seriously, like we often do the rest of our craft. If you learn about software engineering practices from blog posts, then search for blog posts. If you learn from formal studies, then look for formal studies about code review. And importantly, ask for feedback on your reviews, just like you ask for feedback on your code. So I look at Eleanor's new code. And I immediately spot a problem. The Twitter handle field just above is title case. But the new field is sentence case. And so I know that the pull request is not up to scratch. And I comment and reassign the issue to Eleanor. So what are her next steps? Well, she sees my comment and she kicks herself for making such a trivial mistake. Why can't I do anything right, she thinks. Maybe I'm not good enough to be here. And then she searches the code base for latest employer, the name of the new field. And she sees that it's written just like she wrote it, everywhere else it comes up in the code. Beginners don't have the confidence to protect themselves from feeling hurt by code review. And some experienced developers don't either. So what we should do as a community is be wary of imposter syndrome and use code review as a forum to make this industry a nicer place. But you don't want to let bad things be merged just to avoid hurting someone's feelings. One tip is to always say one nice thing about the code. And when someone's made an effort, there's always something nice you can say. And it's good to always address the patch and not the person. At a previous company, I was lucky enough to be thrown in at the deep end of a project. And that project had quite a steep learning curve. But it took me some time to get into the mindset that my code was being reviewed and not me personally. But what else could be better with this review? Well, Eleanor doesn't know what to do next, if you remember that she was searching around the code base. And one of our goals, the project's goals, is to ship code quickly. So by not making it clear exactly what she's got to do next, I've slowed down the process, and I've added unnecessary work for her. And also, you might spot that there's a bug a few characters later. Basically, if that list of employers is empty, then I'm going to get an index error when I try to access the first item. But I stopped my review when I saw the first problem. And that means there's got to be a whole extra round of review, which delays our shipping the code. And of course, sometimes there's a trade-off when you're thinking about when to stop.
If a patch is really large and based on a completely wrong assumption, then scrutinizing every line probably isn't worthwhile. So after a back and forth and a comment about the index error and so on, Eleanor resubmits this code. And I'd say that's a code review success. We stopped a bug getting into production. It's great. And so I start my next round of review and I comment that a try-except clause has performance overhead that we don't want. And the message line is longer than 80 characters, and that violates PEP 8. And finally, I mention that using a getter for the list of employers, remember that function she found and used, isn't very nice. And she should use a property instead. And on the surface, these might seem like reasonable comments. But it really is not a great review. Because as a reviewer, I need to consider the context. And I need to consider what's important right now. Sometimes it's security. Sometimes it's maintainability. Sometimes it's speed. Who knows? And this right here is prototype code, if we remember the spec. So there's no need for me to be overly concerned with tiny, tiny performance issues. So I've kind of wasted some time with that suggestion. And as for PEP 8, well, it's a guide. It's not a rule set. And if you want to follow a guide, fine. But if you want rules, then I totally recommend using a tool or set of tools to enforce them. And that's partly because humans are very bad linters. We miss things that tools catch. And it's partly because if we use tools, then we can free ourselves up to do more important things. And flake8 is one of my favorite tools. It combines a lot of tools, and it enforces the PEP 8 guidelines, some of them as rules. And then there's coverage.py, which checks your line coverage, and requires.io, which checks for vulnerabilities in dependencies, and landscape.io, which looks for common code smells and mistakes that people often make. And there are even tests for docs. If you're using reStructuredText, there's doc8, which lints your documentation. And there's Sphinx's link checker and spell checker. And there are probably similar checkers for Markdown and AsciiDoc and whatever toolchain you use. But sometimes these tools will suggest changes that are actually worse. My personal preference is to just always follow the tool suggestions. And overall, your code is probably going to be nicer. And you save the cognitive burden of linting by eye. But if you really want, most of these tools allow you to ignore a particular issue. Like here, I've told pylint, don't worry that the function name is too long. And if a suggestion keeps coming up in your project and your team don't like it, you can just disable some of the checks in most of the tools, or change them. And if you use GitHub, then you can integrate these tools with pull requests. So you can actually make it so that code can't be merged if the tools find problems. And finally, Reviewable is a really powerful tool, which has great features that GitHub doesn't have. Like, say you're doing a review: you can say, I just want to see the changes since I last did a review. So if we go back to my review, we see a suggestion of an unrelated change, changing an existing getter to a property. Now if you want an unrelated change, it's probably best to just make an issue in your issue tracker and deal with it later. A review might be a great place to spot required unrelated changes, but not to request them.
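Back to the linting tools for a second: the "ignore a particular issue" escape hatch looks like this in practice. The exact message from the talk's slide isn't shown here, so this is only an illustrative equivalent using real flake8 and pylint comment syntax:

```python
# Silence one flake8 check (E501, line too long) on a single line:
API_DOCUMENTATION_URL = "https://example.com/a/very/long/url/that/will/not/fit/in/seventy/nine/characters"  # noqa: E501

# Tell pylint not to apply its naming check to this one definition:
def build_user_profile_dictionary_for_the_view_layer(user):  # pylint: disable=invalid-name
    return {"Twitter Handle": user.twitter_handle}
```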
Coming back to unrelated changes: asking for them slows down shipping the code and also makes the next review round less pleasant. In fact, as a reviewer, I should request easily reviewable changes. But what are reviewable changes? Well, statistically, more defects are found when change sets are shorter. And as a reviewer, you can ask that a patch is split up if it's difficult to review, and you might be able to help decide where the splits should be. So say someone refactored some code before adding to it; you might ask that the refactoring goes in its own separate pull request. And when you spend a lot of time reviewing code, you start to get a good idea of what's going to be easy to review, and you'll start to write code like that. And writing easy-to-review code means reviewers can be more effective and find more bugs, so your software is better. And when we focus on making reviewers' lives easier, we write code that can be extended without changing what's there too much, otherwise known as maintainable code. And a trivial example of this is the trailing comma: if I add a comma at the end of the last item on that line, then when I add a new item later, the diff shows just the one new line that changed. Great. And the same goes for docs, but I'll skip it because we're running low on time. And I'm going to say that the best reviews aren't just laser focused on providing the bare minimum needed to ship, but they also don't add unnecessary blockers to merging. And when you explicitly state that a suggested improvement is optional, and you let the author choose whether, let's say, to learn a suggested tool or to just skip it and ship now, then that can be really useful. And those priorities can be adjusted depending on the context. Let's say you've got a brand new developer on the project, you're onboarding someone; well, you can be super picky and get them in sync with the rest of the team, or you can be easy on them because you want them to feel happy and get started quickly. And so every review comes with a mini "how would I have done this?". When we read someone else's approach to the answer to that question, and we notice that their approach is better, or at least better in some way, then we learn techniques and patterns from them. And I find that teams which value review get much better reviews back, and they get reviews done more quickly. And my favorite example of this is Twisted's high score board. So you get a score every month, and you get some points for submitting, but you actually get the most points for doing a review. Thank you very much for having me. I haven't got to put them up yet, but I'll put my slides up there, and I'll probably tweet about it or something, hopefully, with a link later. So are there any questions from the room? We have a few minutes for questions. I have just a quick one. What's that score thing that you showed at the end? That's Twisted. So Twisted is an open source project, it's like a networking framework, and they want to encourage people to contribute. So, just a bit of fun, they've got a high score board: who's been the best contributor this month, or the most prolific contributor this month? So you get like 100 points or something if you submit some code, 500 points if you close a ticket, but you actually get 1,000 points if you do a review, because they have a backlog of code that's been submitted and not reviewed.
And so there's no point having new code if the code that's already been submitted isn't merged. Hi. Great talk. So when you review a code, do you also suggest to check out it and test it, like if it's more complex? Or is that not part of your job? How do you stand on this? So I don't always do it. Sometimes the tests hopefully tell me enough. If there are no tests, sometimes there's a good reason that code isn't tested. Let's say it's prototype code we're going to throw it away, or maybe it interacts with an external service, and it would just be really costly to build a fake for that service. What I like to have is a reproducible test, manual test case, let's say, if it's Elasticsearch. Connect to Elasticsearch, check this in the web interface after you run my code, or whatever. And I like to get my editor out. I like to get my web browser out. Run it and check that that works. And also, if I've got a suggestion, and I'm not like 100% sure about it, I like to modify the code, see if my suggestion might work, at least in a prototype stage, or see if I can break the tests by modifying the implementation, or make the test still pass by modifying the implementation in a way that the functionality would break. And then I found a flaw in the test. So I do get my editor out. Thanks for the great talk. I have a question regarding the size of reviews, how to handle it, because if you have a feature that needs to be implemented, but the change that would be too large, how do you handle a case like that? Do you make earlier pull requests with only a partial implementation of the feature? What would you suggest? So I like to always keep master or trunk or whatever you call your main branch working. And so one way that people do it is you make a branch for the whole feature, and then that's just empty. It's like master. And then you make branches off that, which have all the partial features, and you put those branches into that first branch I mentioned. I should have given them names. And then eventually, when you're happy with it all, you've got this huge diff, and you merge that into master. Hi, Mikan. What about time spent? What do you suggest? You can go really in deep, like reviewing every line, texting or every line. Cool. So it depends on a few things. One, I never want to be tired in a review. As soon as I'm tired, maybe like 45 minutes in, my review is useless. I'm going to miss bugs and stuff. But as for scrutiny, it really depends on the context again. If I'm reviewing something right at the core of my project, it needs to be great. I'll spend as much time as needed. Or if it's, let's say, an API, that's going to be really tough to change later, because users are going to depend on it, then I'll also be really, really deep. If it's something in the middle, or like I mentioned before, prototype code or something like that, then I can say more trust. OK, I've opened this up in a web browser, and it all seems to work, so fine. Or at least something a bit closer to that. And like I said at the end, if it's a beginner programmer on your team, then maybe you want to be really picky just to help them along, or something like that. It depends on the person as well. Last question. Oh, sorry. So imagine you have several issues, and all those issues can be resolved in maybe one or two lines of code. Do you favor several commits, or just one commit? So different projects do it in different ways. Some people like to review the commits. 
It's quite nice when you've got a set of commits to be able to just revert one. Or to see a history, why did that come up? That's the advantage of commits, but usually in my reviews, at least where I work now, I don't deal with commits. So that's a different practice. Lots of people do, but I don't. So I'm sorry I don't have a great answer for that. So we are running out of time. Thank you very much for your insights. Thank you. Thank you all.
Adam Dangoor - Another pair of eyes: Reviewing code well Many of us have been taught to code, but we know that software engineering is so much more than that. Programmers can spend 5-6 hours per week on code review, but doing that is almost ignored as a skill, and instead it is often treated as a rote chore. How many of us have seen poor reviews - those which upset people, don't catch bugs or block important features being merged? This talk explores the social and technical impacts of various code review practices as well as helpful tooling. The goal is to provide a structure to help improve how teams review code, and to introduce the costs and benefits of code review to anyone unfamiliar with the practice. There are always trade-offs to be made - e.g. think how costly a security flaw in this code could be to your organisation - perhaps intense scrutiny is not necessary for prototypes soon to be thrown away. It is useful to consider the trade-offs in order to optimise for a particular problem domain. Perhaps right now it is more important to look for issues with maintainability, functionality or performance. I talk about how some fantastic code reviews from mentors, colleagues and strangers have helped me become a better programmer and team member, as well as occasions where code review has been detrimental by slowing things down and causing arguments. This is aimed at everyone from beginner to advanced programmers.
10.5446/21082 (DOI)
organizing teams, and privately I'm a cyclist: I make bikes, I repair bikes, I ride bikes. I do photography, and I'm also an ultimate frisbee player. If you don't know that game, it is something like Python among the programming languages; the same thing is true of ultimate frisbee among team sports, so check it out, it's a great thing to do. So the talk is going to be about how I rediscovered descriptors, and I did it when I was working on tree structures. The tree structure is basically: you have a parent node, you have some fields inside. Or maybe the other way around: you have many nodes, one can be put inside another, then we call this node a field of the parent node, and you can also put a value inside a field. So the basic tree structure is like in Django; you know the models in Django? So you have fields in there; this is a one-level tree. And what I discovered when I was watching a presentation by my friend is that I think many people have seen descriptors, have even heard about where they are used, but they didn't see the obvious way to use them in other places, or to override the default behavior. So, have you heard anything about descriptors? Can you raise your hands? Yeah. And have you ever overridden the default behavior? That's what I thought. So I'm going to start with introducing the project to give you some context, then I'm going to tell you about fighting the legacy code, how, when I got the opportunity, I rewrote the tree structure that was used over there, and what I learned from that. So there's going to be legacy code ahead. First, the project. The project was a point positioning system. It means we have a mobile phone which sends its GPS position along with what it sees in the radio, so which base stations of the cell network it sees. We process that, we calculate the positions of the radio stations, we gather this data, and then when another mobile phone, one that sees only the cell network stations, asks, we can give it an approximate position. This is used in assisted GPS, because otherwise the phone would take like 10 minutes to lock onto the satellites, since it doesn't know anything about the satellites, and the receivers inside mobile phones are cheap. That was a C++ project, and we had a test project alongside, written by a C++ programmer, and this test project basically replaced the mobile phones and also had access to the database. So we were sending binary data, we were comparing the responses against what we wanted to see, and we were also checking within the database that the positions were updated in the right way. So that's basically it. Now about the code: there were a few problems with it. We had star imports, because the C++ programmer didn't know another way to deal with the modules; he wanted everything basically in the same namespace, so every package was a namespace of itself and it was importing everything from its submodules. That's the thing we didn't deal with, because the project was used in other areas too. And we had a few things that I think stem from not having introspection in C++ to the extent we have it in Python. So we have repeated names, as you can see here: there are slots that protect us from assigning to the wrong name, we have an initialization where we call the base element and we say, okay, under these names we want such fields, and we also give those names to the fields themselves again. So that's a bit of a hassle.
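A rough sketch of what that repetition looked like; this is not the project's real code, just an invented illustration of the same name written once in __slots__ and again in the initializer (the __call__ trick for storing values comes up a bit further on):

    # Invented example of the legacy pattern: every field name appears several times.

    class BaseElement(object):
        __slots__ = ('name', 'value')

        def __init__(self, name):
            self.name = name          # the field is told its own name again
            self.value = None

        def __call__(self, value):    # values were stored by *calling* the field
            self.value = value
            return self


    class AbcElement(BaseElement):
        __slots__ = ('version', 'minor')    # name written here...

        def __init__(self, name):
            super(AbcElement, self).__init__(name)
            self.version = BaseElement('version')   # ...and again here, twice
            self.minor = BaseElement('minor')


    element = AbcElement('abc_element')
    element.version('1.0')            # store a value without losing the field object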
Then again, in the top-level structure, we have all the functions defined in such a way that they are manually walking the tree below, and they are doing the printing, the serialization, the comparing to other objects. And it doesn't have to be like this; it can be automated, because we can know everything about this data structure. But there is also something very peculiar which I would like to direct your attention to, and this is what the Mortal Kombat guy over here is there to show you. This is the __call__ function, and the __call__ function was used to store the value inside a field. So instead of assigning, where you would lose the field and instead put a simple object inside this attribute, we are calling the field, and this way we put the value inside the field and the field is still usable, with all its methods. So what happens when you have a structure like this? You have to be very careful with references, so having variables to collect some parts of the data and then constructing it is rather risky. So all the test definitions were done in long chains of attribute access, where the lines were mostly copied around; it's not very usable. Also, we had another interesting class, a leftover from C++, where we used static methods within the class, so we are basically making a singleton out of a module, which already is a singleton. So this is another thing; it's funny. Here we have some special functions that deal with some of the data, like the set version one; you can see those calls, where the second call is the one that actually sets the data. And that's about it. So when you think about such a structure, what is your goal when you're doing such a test project? I thought my goal was actually to provide for the testers, because they are the guys who need my work, and what I wanted to do is give them easy-to-use trees so that they can easily define the data. And that's how I imagined it: I would not have this repetition, I would have a clear structure, I would know which element goes where, and I would want it to be nice to look at, because those tree structures were used everywhere: they were used in server configuration, they were used in difference reporting, they were used in feeding data, they were used in feeding the database. So when you look at it, this definition looks very much like an assignment, and I thought this is the right way to go. I thought, well, I'd like to have something like Django, where you have those fields where you can assign values but you can still use them and call methods on them. So I tried to make it like that, and the first step was to use keyword arguments. I added those keyword arguments into the base element and I used them everywhere in the hierarchy below, so we have this ABC element which inherits from the base element, and I have keywords there. The second thing I did is I actually used the printing to give the differences, so I used the diff library for the reporting, and this way you didn't have to walk the tree for difference reporting. But again, this Mortal Kombat guy shows you the place where there is something interesting happening: in the init we call getattr on the item and then we call it with the value to set the value, and so we have this cooperation between the call in the parent, which is the init one, and the call in the child, which is the __call__ one, which intercepts the value and puts it inside. So that was the first thing, and pretty soon another protocol came, and I didn't have to deal with the rest, especially because other testers were also using it, not only my team.
So what I left in there were the star imports; I got rid of the repetition when defining the data; and I left the recalibration part, because it was using some C plugin and because the other projects were using it. Yeah. So next time I had a chance, I took the opportunity to actually reimplement those data structures, because we had a new protocol to implement. And again I was looking at Django, and I thought that maybe not only the initialization could be dealt with, but also assigning values to the attributes: when I assign, I want the value to get into the field, and I want the field to still be usable. And the first straightforward way, as I saw it, was to use __setattr__, in which I overrode the default behavior and redirected the assignment to an assign method. So __setattr__ is used in the parent, when you assign to the attribute, and then this assignment is redirected to the child, and the child deals with putting this value in the right place in the field, and also with putting in the other fields that can be in the subtree. I actually had this in two separate classes, but I couldn't fit that in here, so this class is basically both the parent and the node: each node has the ability to intercept the assignment, and each node has the ability to take care of it when it's a child, so we are taking care of the value and we are taking care of the subfields that are in the child. And another thing I took from Django, I have stolen it, is the creation counter, which let me get rid of repeating the names in the slots. We have a creation counter, we copy it from the class into the instance, and this way we can easily sort the fields, because as you instantiate those fields, they each get a creation counter in sequence, like the lines go in the file.
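A condensed, invented sketch of that intermediate step, before switching to real descriptors: __setattr__ intercepts the assignment and hands the value to the existing field's assign() method, and a Django-style creation counter records declaration order. Names are illustrative, not the project's code:

    class Field(object):
        creation_counter = 0

        def __init__(self):
            # Copy the class-level counter into the instance, then bump it,
            # so fields sort in the order they were written in the file.
            self.creation_counter = Field.creation_counter
            Field.creation_counter += 1
            self.value = None

        def assign(self, value):          # the "child" side: store the value
            self.value = value


    class Node(object):
        def __setattr__(self, name, value):   # the "parent" side: intercept assignment
            current = self.__dict__.get(name)
            if isinstance(current, Field) and not isinstance(value, Field):
                current.assign(value)          # redirect into the existing field
            else:
                object.__setattr__(self, name, value)


    node = Node()
    node.version = Field()
    node.version = '1.0'              # goes through assign(); the Field object survives
    print(node.version.value)         # '1.0'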
And so it was almost there; it happened that I had actually invented something like data descriptors, but the descriptors just go about it in a different way. I had already written the __setattr__ magic method, and I called assign; what the descriptors do is that they have this mechanism within __getattribute__, and they use __set__ for setting the value inside the child. So those two, my __setattr__ and my assign, changed into their counterparts: the default attribute machinery takes the place of our __setattr__, and __set__ is in place of our assign. So this is the transformation that happened, and basically I'm using data descriptors. And I think this doesn't have to seem scary. I think people get scared because they see this call tree and think, oh my god, this must be complicated, I don't know why it is here. And it is complicated because it is actually a few things mixed up together: there is inheritance here, there are descriptors, and there are data descriptors. So the ones in red, and again I have those two matching pairs marked pink, so there is a parent call and there is a child call, and the red ones are the data descriptors: we are basically checking within the class, because we have to do it before we access the field in the instance, and if there is a __get__ and a __set__, we call __set__ for our assignment, and __set__ is doing our work. And the green part is basically the inheritance chain: we take stuff from the instance, and in case it's not in the instance, we take it out of the class; and if we take it from the class, we check again whether there is a __get__ method, so that we can override the behavior. And at the end, the white one is the fallback, __getattr__, which you can use and which is also quite fine for defining something like a dictionary with attributes: when there is an attribute missing, you can override what happens when you get no such attribute, and in the default it's an AttributeError that you get. And so after you have done all of this, you make the structures all inherit from this base element, and you can do things like put some fields in there to give you the ranges for the validation, put some fields in to give you default values, and all of this stays out of the way when you are defining data. So this is a type definition; I actually had two kinds. The first split was between the structure and the atom: the structure was dealing with copying the fields, the atom was dealing with copying the value within the field, so the structure was basically like a subtree and the atom was like a single element within the structure, or within a list. And then you simply define the data, say something about the max value, and when you have a uint, for example, you inherit from int and add a minimum value of zero, and so on and so on, and this becomes more readable. But the most important thing is that you actually get this kind of use: let's look at the lower part, and you have this ABC element which has a version, and this has a minor field. So we want to change this minor field, and we don't have to call it, so it doesn't look strange, but when you assign to it you can still validate it, you can serialize it, and this way you move all the logic down into the structure. So you don't care about what this particular field is doing with respect to serializing; the field above calls this field to get the serialized data and then constructs the bigger parts, and this way you can spread the functionality, and actually in most cases you don't have to write it again in those small items over there. And so the descriptors are basically this: they are overriding the access to the attributes, and this gives us the possibility to put the thing that we are calling it with, like the assignment, somewhere else, and then still use the magic field that has properties and methods to be called. And as a bonus, in this project, the structure started looking so simple that I was actually able to redefine the printing in such a way that I was able to print those structures onto the screen. So from the previous situation on the right, where we had a different way of printing and a different way of defining, we came to the situation where the printing and the defining were basically the same, and so we could do regression testing within the project, and the testers got a really big performance boost from that, because they were able to copy the structure from the screen, just put it in place, modify some things, and test regressions. So that's about it. I think the moral of this story is that you can prove anything with a contrived example, and I like how Joel Spolsky put it, and as I also saw this morning in some other talk, there is this Pareto principle, and I wanted to say that too: when you are doing something, you should choose the most bang for the buck, and try not to get into rewriting stuff until you get to new features, because it doesn't bring business value.
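To recap the end state in code form: a field built on the descriptor protocol behaves roughly like this. The names and the validation rules are invented for illustration, not taken from the project:

    # Illustrative data-descriptor version: assignment is validated, no __call__ needed.

    class Int(object):
        def __init__(self, min_value=None, max_value=None, default=None):
            self.min_value = min_value
            self.max_value = max_value
            self.default = default
            self.name = None

        def __set_name__(self, owner, name):   # Python 3.6+; older code passed the name in
            self.name = name

        def __get__(self, instance, owner):
            if instance is None:
                return self
            return instance.__dict__.get(self.name, self.default)

        def __set__(self, instance, value):    # called on plain attribute assignment
            if self.min_value is not None and value < self.min_value:
                raise ValueError('%s below minimum' % self.name)
            if self.max_value is not None and value > self.max_value:
                raise ValueError('%s above maximum' % self.name)
            instance.__dict__[self.name] = value


    class Version(object):
        major = Int(min_value=0, default=0)
        minor = Int(min_value=0, default=0)


    v = Version()
    v.minor = 3          # validated on assignment, the field stays usable on the class
    print(v.minor)       # 3

With __get__ and __set__ both defined, the class is a data descriptor, so plain assignment goes through __set__, which is exactly the transformation described above.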
And as I did, I did the new solutions on new features; so, if it ain't broke, don't fix it. Do you have any questions? I think I rushed through it. Yeah, I think I still have time. So the story is: first I took care of the low-hanging fruit and I put the keyword arguments in there to initialize the values. Then I realized that when you look at the data, it looks pretty much like an assignment, and I wanted this assignment to not only be used in the initialization, but also to be able to assign again to that data structure. So when constructing the data structure I could have some definition, but then I could copy the structure and assign again and modify it for some other test; have some basic template and modify it later. And this led me to try to override the assignment, which is basically attribute access plus the assignment, and I tried to do that with __setattr__ and assign, and this works, but then I realized that actually those things are already there. So the way I think about it: I think this trick, the descriptor API, the data descriptor API, is mostly about this kind of thing, where you try to take over at the moment where somebody is assigning to something, take this data, put it inside the field or a node, and then still have this magic node that does all the work for you; but you have the ability to write the code like that, where you assign to something and then call the stuff that is needed for you. And I think it led me to thinking that the descriptors are not really that big call graph over here; this is only the red part, which checks if the methods are there and calls them if they are there. The rest is the default behavior of the __getattribute__ method in Python, and I thought I'd like to share it. No, I just called those subtrees, which contained other fields, which could also be subtree structs. So I built a class hierarchy so I didn't have to re-implement the stuff that would be common, for example, to integers, to numbers, to booleans; these are the items. And for the structures, by that I mean lists, trees. And here I'm actually showing something that is a mix of the two, because over here I'm copying a value, that's what the item does, and I'm copying fields, because I didn't have space and I didn't want to complicate it. So I wanted to say that there is a node, you can put a value in it, but you can also have subfields, and those subfields can each again be a node, you can put a value there, you can put subfields, and this way you build a tree from that. As I said, I think it had a big impact on the performance of the team, mostly by means of this one, where they could actually copy-paste the data from the screen; I think that was the biggest thing. Among the other stuff, I had also changed the comparator. We had two structures; normally it was walking down the tree and trying to compare each of them, and then it said, okay, this is different from that one, but maybe it didn't even give the value. So I changed it in such a way that I actually use the printer: I printed both structures and then ran a diff on that and removed all the non-differing lines from the diff, which I don't show there actually, and this is a diff I copied from the code. So these were, I think, the two biggest improvements. The other improvement was about the configuration, but this was a bit of a different tree, so I think it's not related. Well, I no longer work
at the company so right now I'm doing web development other stakes and this is a much better line of work and I think we have refined process with a lot of we have almost 100% coverage we have the ends to run the stuff we have a working scrum and I think I'm not going back over there to fix that stuff anything more so I guess thank you for your
Adrian Dziubek - Python Descriptors for Better Data Structures Have you ever wondered how Django models work? I'll present a story of data structure transformation. I will talk about ideas from Django models that I used and how I rediscovered descriptor API. I will talk about printing, serializing, comparing data structures and some other examples, where descriptors excel at making declarative code easier to write. ----- I worked as a developer of a testing framework for a C++ server. The framework tested binary protocol implemented by the server. Most of the work involved testers preparing test cases. The data format was primitive structures -- hard to read and easy to break. Field order and all the data had to be entered manually. At the time, I have already seen the better world -- the models from Django. Have you ever wondered how those work? Step by step, I used the ideas from there to make the structures more friendly and on my way I rediscovered descriptors. I'll show in incremental steps, how: - used keyword arguments to lower signal to noise ratio, - order of definition for sorting the fields, - realized that `__call__` is used instead of assignment, - used `__setattribute__` as first step to extend primitive fields, - discovered that I'm actually reimplementing descriptors, and how it lead me to: - implement printing in a way that is friendly to regression testing, - use diff library for less code and better results, - implement more readable validation. I want to show how descriptors work in Python and how they enable declarative style of programming. By the end of the talk I want you to understand what is at the core of the magic behind field types used by object relational mappers like Django.
10.5446/21084 (DOI)
Okay, welcome to this talk. The speaker is Alessandro Amici, a good developer, I think. He will explain some stuff about pytest. So welcome. So this talk is about test-driven code search. This is a rather new technique; not so new, because someone already tried it a few years ago with Java, but it's the first time that I see it applied to Python. The idea is pretty simple. What we produced, what we did, was a very basic search engine. It's pytest-nodev, a pytest plugin that enables you to search for code inside your machine, in the packages that you have installed on your local machine. The special thing about this, the test-driven search, is that you use a test as part of the search query. So you may also use some metadata and try to refine your search, but at the core, what you are looking for is what you describe within a test. We call it a specification test: something that tries to specify a behavior or a feature without going too much into the details of how it is implemented. Once you run your search engine, you will get some search results. This is a list of functions or classes, or whatever objects actually, that pass the specification test. The core of the tool is the pytest-nodev plugin, and there you have the main documentation, but there are a couple of other tools that I will show during the talk. Now, since this is something new, at the beginning I organized this talk to be somewhat theoretical, but then I completely rewrote it yesterday, because I think really good examples make people understand much faster. How it works: do people here know pytest and pytest fixtures? Who does... Okay. Now, basically, the base implementation detail is that the plugin provides a special fixture that's called candidate, and you need to use this fixture when you write a test that you want to use to search for code. What will happen is that the fixture will effectively parameterize your test by passing it all the objects that it manages to find in your environment. So if you install 10 packages in the virtual environment together with pytest, it will collect all the objects, all the live objects, in your standard library and in all the packages that you installed. Then, obviously, since this will be a parameterized test, the test will be run a few thousand times most probably, once for every object, and a reference to the object will be passed into the candidate variable. So you basically use this candidate as if it were the function that you are looking for, and then the search engine will just tell you which functions, classes, or objects in general actually appear to behave exactly as you intended. So let's do our first search. You want to search for some kind of function that has a feature; for example, let's search for a function that, given the name of an executable, returns the path to it. This is not just a nice example, this is actually the first real case that we had. We had exactly this need, and we started searching for it on the web, we didn't like the results, and we said, okay, this is the perfect test case, because it's something easy, it's easy to write a test for it, and maybe there is something somewhere in my environment already that does it. You could just write something like a subprocess call to which and then parse the result, et cetera, but that would be hacky and it would not work on Windows, so it's not the best.
So what is the specification test? I write a standard test function for pytest. I use the candidate fixture, and then, just to have the test more readable, I basically rename the candidate to which, which is more or less the idea that I'm looking for something that works like the which command. So then I assert the behavior that I expect: if I ask for sh, the function I'm looking for should return /bin/sh, and if I pass it the name of the second command, it should return its path under /usr/bin. These are two very common UNIX commands, and they are among the most stable, because a lot of commands can be in /usr/bin or in /bin or /sbin, but these two are the most common. So once I have written this test, I write it to a file and then I just run it as usual with pytest; I just need to add the candidates-from-all option. This means that the candidate fixture will be parameterized by everything I find in my environment. So this starts a standard test session, and I usually get something like 5,000 or 6,000 objects. This depends very much on how many packages you have installed; this is not many, and it's easy to go into the 30,000 or 50,000. And then it just runs for a while, we will see in a minute. Since the test is expected to fail, pytest will print a small x when the test fails: you are throwing random functions at the test, so you expect it to fail most of the time. Then you have a capital X, which means that the test passed when it was not expected to. At the end of the run, you have many, many x. What you hope to get is a result, and in this case we found three functions, three objects, that passed the test. And this is the report: for my test_which file, we found a case in which the test_which function passed, and it shows the candidate it passed with. Now I have the test function as well, so let's see how it works and how much time it takes. Right now I'm not using pure pytest, I'm using kind of a boxed run of pytest inside a Docker container, because when you throw random arguments at random functions, anything can happen. If you try to do it on your machine, you will find backup files with crazy names, or probably connections to remote hosts, or whatever. So you prefer to do it in Docker, and at the end of the run you throw away your Docker environment. So what happens is that right now it's collecting all the objects. Now I have a little bit fewer objects than when I did the test, because I blacklist objects all the time, because they might crash your environment or, I don't know, open up a browser, et cetera. And this is what happens: now that test is running with all the functions. We see small x, and it means that we didn't find a match, but here we have one X, so we found at least one function that actually worked. This takes approximately 60 seconds, and everything goes okay. And we should also have some garbage on the screen, because since you are using functions and classes in unexpected ways, you're always throwing random stuff at them. Exactly. You end up discovering a lot of bugs in the packages that you have, because most of the printouts are exceptions in __del__ methods that are ignored but printed to the standard error. Well, I finished. So now, what happens once you get the result? You say, okay, something has passed that very easy, very basic test. And what do I do?
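Before moving on, the specification test just described would look roughly like this. The /bin/sh expectation follows the talk, while the second command is unclear in the recording, so the env example here is purely my assumption; it is then run with pytest plus the plugin's candidates-from-all option (check the pytest-nodev docs for the exact flag spelling), ideally inside a throwaway Docker container as recommended above:

    # test_which.py -- a sketch of the specification test for "find an executable".

    def test_which(candidate):
        which = candidate                 # pytest-nodev passes one collected object per run
        assert which('sh') == '/bin/sh'
        assert which('env') == '/usr/bin/env'   # assumed second example, not from the talk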
Well, since I have a manageable number of results, I can just have a look at them and decide if this is really what I want. This is, sorry, distutils.spawn.find_executable. The name looks like what we are interested in, and it is inside the standard library, so it's very useful: maybe I don't need to write any code for my find-executable, for my which function, because I may just use this one. You see, that's more or less what I thought: it gets the path, then it splits it somehow in an OS-independent way, then it does some win32 checks that I didn't even think I needed, because I don't usually use Windows, but yes, they might be useful. And then it just tries to see if the file exists inside the path. It's not really the best; I mean, it doesn't check whether the file is executable. So not really perfect, but at least I have a template if I want to improve on that. Then I have pexpect.utils.which. I don't care too much about that, because I already have a function in the standard library, so I don't need to add a dependency to my project if I want to use that. But then there is shutil.which. So this is even more standard in the standard library, and this is the code, and if you look at the code, it's much, much more complex: it has a real access check, which means it checks that you can read the file and that you can execute it. And it has several details that I would not have thought of, that would have taken me a year of production use to get right. So very nice. Unfortunately, if you go into the documentation, you'll learn that this is Python 3 only, actually Python 3.3. So if your use case needs to work on Python 2 as well, you still get a very nice find_executable, it's still in the standard library, it's not as nice, but okay; or maybe you can just take it as a template and make it better. Or if you are Python 3 only, you have the luxury of using which. Which is great. Well, how many of you already knew the which function, or how to solve this problem? Okay, a few. Right, I mean, it's in the standard library, but I didn't know it, and it was faster this way than to look for it. Okay, let's go back. Now, this is a very simple example, but it also shows how things work. One of the points is that in this case, the input and output of the function were really easy: when you have something with an obvious reasonable implementation, it's easy to write a test. But as soon as you look for more complex stuff, writing a test that is somehow implementation agnostic, that doesn't make too many assumptions about the implementation, is more complicated. But actually, Python is really great for writing stuff that is not too tied to the details of the implementation, because it is dynamic: for example, using duck typing, you are not forced to guess the right data type. The in operator is extremely powerful, and a lot of classes even work nicely with the in operator; that is, instead of checking whether the result of your function is a list and the first element of the list is what you were looking for, you just use the in operator to see if, somewhere inside your result, there is what you expected to be there. And then you may write specific helpers; in particular, we wrote the nodev specs helpers, which leverage the inspect module to go even deeper in checking whether your result actually contains what you expected it to contain, even if in crazy ways.
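In practice, using the two standard-library hits is a one-liner; a small sketch, with the Python 2 fallback shown only because the talk mentions it:

    import sys

    if sys.version_info >= (3, 3):
        from shutil import which                              # also checks execute permission
    else:
        from distutils.spawn import find_executable as which  # simpler Python 2 fallback

    print(which('sh'))    # e.g. '/bin/sh'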
So let's see how you would write a specification test in a way that tries to be more independent from the implementation. Here I want to parse an RFC 3986 URI. This is also a real test, a real case. So I use the candidate fixture, and I just rename it so it reads nicer; I use a test URI, and all the functions I get will be passed this URI, and I expect them to return some kind of tokens. And then here I check that the scheme and the path that I put in my URI are correctly parsed. Now, since there are a lot of false positives that are just strings, I mean functions that just return the same string as the input, I check that the return value of my function is not a string: I really want the string to be divided into tokens, so I don't want one string, I want some kind of list of strings. So let's see how it goes. This is the naive implementation, in the sense that I didn't use any special trick except Python's standard in operator and the way it can be overloaded. Now this is going to run, come on, and usually there are different command-line options that can be passed, and those mostly serve to restrict the search space: if you already know that some packages are not useful, you want to restrict the search space so you go faster. But this one, candidates-from-all, is the most powerful; it just searches through everything and anything in your environment. So this is where... obviously, I tested it just before the... I don't know. Let's see, I have a second run. Now let's see; since it takes a little bit of time to run, I also tried the second example, that is, the same parsing function, but with the test written using some more advanced functionality: the special container helper in the nodev specs package. It gets an object and it makes a proxy object out of it, so that when you use the in operator, it tries really hard to see if the item that you're looking for is somewhere in the object. So, for example, it looks into the attributes, into the properties, and even if it's an iterable, it looks inside every item of the iterable. So it's extremely thorough. Let's see if we manage not to kill the queue. So apparently they're both running. On this screen I have the naive test, the one that crashed before; it was some kind of race condition, because it's going okay now. And now let's see what the results are. Okay, I got several results. Now, the first three results, in collections, don't look very good, because KeysView, ChainMap, and UserString really look like false positives: they're not trying to do anything with RFC or URL parsing, they're just packaging somehow the string that you are giving them. But then you have this rfc3986 api uri_reference, which looks very nice, but also urlparse. That means you hit functions that are able to do this both in a package and also in the standard library. Now, what is interesting is that in both cases, both the one inside the rfc3986 package and also the one in the standard library, they don't return lists; they return classes. So how exactly did this work with a class? The point is that a lot of people are quite smart and they give you some way to access stuff, or to test stuff, in an implementation-independent way; that is, the two implementations they used actually provide a __contains__ method, so the containment test works exactly as it would on a, sorry, on a tuple or a list.
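The naive version of that specification test would look something like this; the test URI and the exact assertions are illustrative, since the original slide is not reproduced here:

    # test_parse_uri.py -- a sketch of the implementation-agnostic URI parsing spec.

    def test_parse_rfc3986_uri(candidate):
        parse_uri = candidate
        tokens = parse_uri('http://example.com/path')
        assert not isinstance(tokens, str)   # functions that just echo the string are false hits
        assert 'http' in tokens              # the scheme must have been split out
        assert '/path' in tokens             # and so must the path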
So the result is not a simple type, but a class that behaves like one. Very nice. So you can happily use this one for most of your needs, but if you need more features, you may explore the code, and you see this special package has more features: for example, it's able to recognize the username, which the standard library function doesn't. Now, there's something even more interesting. The other test, the one that uses a dedicated proxy object to do the containment test, has found one more object that matches, and this is a class in the pip project that actually does the right thing, but doesn't provide the nice containment helper functionality. So we managed to get it as well, because the helper tried very hard to find whether the scheme and the path were inside the class. So this is a way to test results in an implementation-independent way; but then I also want to pass arguments in an implementation-independent way. Still, in this case, what helps me is the parametrize marker of pytest. For example, in this case I'm looking for a function that just removes comments from a stream, and the main point is how I represent the stream: this is my text, this is the content of my configuration file, for example, and I want to strip these comments here. So how do I do it? I use a parameterized argument, so that I can say: okay, here are different functions that will turn this text into different shapes. I can pass it as it is, I can pass it as a list of individual lines, or I can pass it as a list of individual lines with numbers (this is how my application was actually doing this part), or I can pass it as a file. Now, in this case, since I have a lot of parameters, I will run not just 5,000 times, but 20,000 times, so I prefer to restrict my search by including only functions whose name matches this regular expression: I want something that has to do with comments. This makes everything much, much faster. And here it is: I find an ignore comments function in pip, which is very good, because pip is something that I can assume is a light dependency. And it tells me that the text-to-stream conversion that passes is the third one; so I go back here, it's 0, 1, 2, and that is exactly the way I preferred. I could have worked with any of the other three, but this means that I don't even need to change my application to use that function. And, by the way, it is extremely fast. This is the ignore comments function; it's very simple, it also handles the case of an empty line, and it returns the line number as well, because it doesn't return all the lines that you pass in. And look at the other function just below: this function takes options, which is a special class, and this class must have the skip requirements regex attribute, otherwise it crashes. Oh, God. Even if I needed this, I would never, ever manage to pass the correct parameters to it, because this parameter is extremely tied to the implementation. So I write tests that are quite loosely coupled with the implementation; I try to be as implementation-agnostic as possible, but I only find functions, callables, or classes that are good code, that don't mix in implementation details uselessly. It could have just let you skip this: the skip requirements regex could just have been a keyword argument with the same default, and the function would have been as useful, and I would have been able to search for it, or to use it in general.
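A sketch of what that parameterized comment-stripping spec could look like; the file contents and assertions are invented, and in a real run it would be combined with the plugin's option to restrict candidate names to a regex, as described above:

    # test_skip_comments.py -- the same text is fed to every candidate in four shapes,
    # so the spec stays agnostic about what kind of "stream" the function expects.
    import io
    import pytest

    TEXT = "[section]\n# a comment\nname = value\n"

    @pytest.mark.parametrize('shape', [
        lambda text: text,                                  # the whole string
        lambda text: text.splitlines(),                     # a list of lines
        lambda text: list(enumerate(text.splitlines())),    # numbered lines
        lambda text: io.StringIO(text),                     # a file-like object
    ])
    def test_skip_comments(candidate, shape):
        result = list(candidate(shape(TEXT)))
        assert all('# a comment' not in str(item) for item in result)
        assert any('name = value' in str(item) for item in result)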
So when you search, you may get only relevant results, which means your query is just perfect. Or you have to refine your query. If you don't get any results at all, which happens quite often, it means that your test is too strict. And you probably need to remove test cases, edge cases, or probably just use a lower number of normal cases. If you find a lot of results, but they are not relevant, it means that your test is not, it's too weak. It's not strict enough, so you need to add more cases, more describe your feature better, and probably add more corner cases. If you appear to go from no result at all to no relevant result and back, it means that you don't find anything you most probably are looking for a function that is not in your environment. Now this is the base of code test driven reuse, which is something that has been studied a little bit in the Java community. And the idea is to use test driven reuse, it's just that you start like test driven development, you start your test, maybe you try to write it in a more independent way than you would do if you already know what is the implementation that you are doing. And then you try to search if you find a function that already works. If any code, if any function pass your test, then you have three options. So if you don't find any function, it's test driven development, you have to develop it, so fine. Otherwise, you may just import it, that means you get the dependency, and all you may fork it, that is, you get exactly the same code, test, you accept the license, and copy it to your project, or you may just have a look at it to see how many details you didn't think already. Another trick is that you may just use the test driven code search, which is a tool by itself, unit test validation. If you wrote a test, you think it's a good test, then you make a search with it, and you find a couple of totally unrelated functions, it means that your test is too weak. It finds false hit. So limitation of future work. The main point right now is performance. Then you may do a lot of things like extending the search space and making more tools, but then you get even more work to do, and so performance, performance, performance, and validation, et cetera. It would be very nice if this was not done on your machine, but on the web. So what we are trying to do is to make kind of a search engine on the web. If you want to know when things are starting to roll, write an email to the email here, and we are looking for people who are willing to test. Conclusions. If you start using it, you will recognize much better what are good tests and what are good codes, and you will tend, at least this is what we notice, we tend to write your code so that all the implementation details are filed as way as possible, as simple or as intuitive as possible. Thank you for your attention. Do you have any questions? Do you filter somehow already on, for example, the number of arguments that can be passed and similar things, because if a function doesn't take any argument and you need something that takes one, then there's not a valid candidate, for example. I didn't understand. When you look for candidates of things that solve your problem, do you filter already on the things that you've done? Right now, no. This is one of the reasons a web search, I mean, a curated index of objects would be nice, but it's very difficult to do it on your machine. 
I mean, with Python you may tell how many arguments a function takes, but not much more, because with duck typing you might not actually want to be too strict. So the idea behind having a web search engine is that you have a curated index of what kind of function may fit a particular test or not. Is there anything for timing out functions that could take a long time? This is already taken into account: every test has a timeout of one second, so a lot of the candidates that are tried simply time out. Functions that prompt for input, raw_input, et cetera, don't give any problem. The real problem is when you call C extensions and they just crash the interpreter; I have a long blacklist for that kind of thing. Another question? Yeah? No? Ah, I see. So, how do you deal with multi-argument functions, where I don't know what the order of the arguments is going to be, and what's the time complexity of that? Okay. So this is what he, in the first row, tried to do at the beginning: there is an automatic permutation of arguments. I refused that, because right now being correct is more important than having a large search space. But as I was writing this talk, I noticed that you can easily parameterize your test to just switch the arguments, so it can be done right now by just parametrizing with the arguments switched. As for the complexity, if you have two arguments it's two, but in general it's n factorial, so with four arguments it's already very, very heavy; it's very easy to go into the hundreds of thousands of tests. Now, this was really a small environment, and for educational purposes, et cetera. Thank you very much for everything. See you next time.
Alessandro Amici - Test-driven code search and reuse coming to Python with pytest-nodev We will present the test-driven reuse (TDR) development strategy, a natural extension of test-driven development (TDD), and how to execute it with [pytest-nodev] an Open Source test- driven search engine for Python code. When developing new functionalities developers spend significant efforts searching for code to reuse, mainly via keyword-based searches, e.g. on StackOverflow and Google. Keyword-based search is effective in finding code that is explicitly designed and documented to be reused, e.g. libraries and frameworks, but typically fails to identify reusable functions and classes in the large corpus of auxiliary code of software projects. TDR aims to address the limits of keyword-based search with test- driven code search that focuses instead on code behaviour and semantics. Developing a new feature in TDR starts with the developer writing the tests that will validate candidate implementations of the desired functionality. Before writing any functional code the tests are run against all functions and classes of available projects. Any code passing the tests is presented to the developer as a candidate implementation for the target feature. [Pytest-nodev] and other nodev tools that help implement TDR for Python are newer than the JAVA counterparts, in spite of that we will present several applications of the technique to more and more complex examples.
10.5446/21085 (DOI)
We've got our next talk from Alessandro Molina, who will be telling us about moving away from Node.js and on to Python. If you could all welcome him, there will be a chance for questions at the end. Thank you. Okay, thank you. First I would like to start by telling you why I decided to have this talk. Because I know that probably many of you are already using a solution to transform and manage their assets. Probably you have been using it with SAS for months here. I don't know. And probably in this solution Node.js is involved in many ways, at least around the tools that perform the translation or the transpilers themselves or whatever. But I know that many people approach the solution that they are using today for the reason that they don't know there are other ways to do that. Most people have been doing that that way. If you look on Google or wherever, how to do that, the first result is probably how to do that with Node.js. And so people have been approaching that kind of solution mostly because that's the way you are meant to do it. But there are actually very good alternatives that can solve also the problem of having to cope with two different languages. I know for sure that anyone in this room is a Python programmer, but I'm not sure that everyone in this room also is as proficient with JavaScript or Node.js or whatever. So having to maintain two different environments with their dependencies, different tools, different package managers, and install both of them on your, at least, development environment, if not even on the production environment, is not always what you want to do. So let's talk about something that probably happened in the life of every one of us, which is that you've been able to start your project using Python for everything. Your web framework is probably Python-based. You serve your API or even your web page through it. You are able to run it using a Python-based solution that you can write plugins for using Python. So we can deploy it using supervisors, like a whiskey, whatever you're using, that it's probably comfortable because you know you can go into the code and have a pretty good grasp of what's happening or write plugins for it and extend its behavior and so on. You're probably also deploying it using a tool like Salt Ansible or Docker Compose, which are in Python too. And you probably monitor the state of your application using a tool like Sentry, Datadog, Backlash, which have all Python agents, some of them even have backend code written in Python and so on. So we are fairly able to do everything we want by going into the Python code, messing with it, extending its behavior, doing whatever we want in our stack. And that's only not true for the assets part because one day you probably go on holiday, you come back and the front-end guy introduces a whole new language, a whole new dependency manager, a whole new set of things in the project which now need to be installed through NPM, through having a new interpreter in your system and things like that. Which is not bad because Node.js actually has a really good set of tools for doing this as a good set of transpiles as most of them transpile to JavaScript and have been written by the JavaScript community, have been written of course in Node.js itself. There is a good set of tools to automate testing of your JavaScript part and front-end part. 
And there is a good set of tools to automate tasks, like for example, Grunting Group, which are actually made to do their job and provide pipelines that transform your assets and things like that. So the side effect of this is that while it's a great tool and everything you usually need, you now need to have a package manager to manage the package managers. Because you need to be sure that in the environment you are working on, you will have both PEEP or the premade wheels if you use the binary distribution. You need to make sure you have NPM to be able to set up the working environment from scratch, at least on your continuous integration or on the developers' machines and so on. So you will need a tool that installs both of those. In most simple cases, it might be just an up-get itself. In more complex cases, you might want to provide something like a Nancy Bol script to actually deploy the working environment. Then you have two different places where you need to be sure to update your dependencies, because we will have the dependency for NPM and we will have the dependencies for PEEP. And in the most simple case, two different people work on the two different parts of the system and they update each one their own dependencies. But in some cases, there might be features that cross through the bridge of the two parts and you might need to be sure to make different 10 of the new functionality you added. You will need to add the dependency in both places and the client and several side dependencies of your functionality and make sure that they get installed both otherwise your feature won't work as expected at all. So you will probably end up adding a third solution on top of all of these to actually manage this complexity. And there are many and so far there is not really a standard factor to rely on. I mean, you can probably try to achieve this by using Ansible for everything. You can probably try to achieve this by doing pre-configured docked images. You can actually try to solve the solution in many different problems. But it's actually a problem that you should not need to solve because it has been introduced for a purpose that in most cases can be solved without introducing the new technology that is actually triggering our problem with different stacks and dependencies. And there are pretty good frameworks in Python to manage assets. One of them is actually WebAssets, which has been the one that I prefer over the long run because it provides a really simple interface. You can configure it both through an API and through simple YAML files and it provides also the front-end part of your assets whenever you need to use an asset. You can actually use it by importing it from WebAssets so it will also take care of things like cache basting and things like that for you. So it can replace solutions like Grun and Gulp in performing the transformation of your assets because much like them works as a pipeline. So you get some kind of input, which is usually a file, and end up providing another output, which is usually a file itself, which would be the file transpiled, your CSS converted, or your images are scaled, and things like that. And the advantage of using this approach is that if you need something, if you need support for scaling images, if you need support for compiling less, if you need support for SAS or whatever, you just track the dependency in your setup UI because the less compiler for WebAssets is just a Python package like WebAssets itself. 
So if you need Less, you don't have to remember that you have a step to run before your application. You don't have to remember that you need to perform npm install, you don't have to remember that you need to run Grunt. You can actually make everything automatic through Python by having your application install the support for Less when it is installed, and when it starts, it will automatically compile the assets without you having to provide a solution for that yourself. And it actually works with any WSGI framework. You can even use it as a middleware around your framework, which doesn't care about the language and the framework you are using, and allows you to manage your assets independently of the framework, even if you use a plain WSGI application without any framework at all. And as I told you, it actually provides an HTML-side API to inject the resources, which is good because in many other cases, when you inject a resource you generated through maybe Grunt or Gulp, you will have to provide the solution for things like cache busting yourself. In case the resource changes, in case you update the CSS file, you want the browser to load the new updated version and not keep using the old one just because the browser has it in cache. Usually this is something you might need to provide yourself, maybe by adding a timestamp to all the URLs or things like that. But WebAssets does it for you. It generates a hash of your resource, and whenever the resource changes the hash will change, so the new resource will have a totally different URL. And as you inject them through the API, it will inject the latest URL every time you run your template language. And the way you define your resources in WebAssets is actually through bundles. A bundle is actually any kind of resource; you can even have a bundle made of just one single thing. If you need to translate your single CSS file from Sass to CSS, you can create a bundle with a single file inside. And a bundle is defined as something that has a name. In this case, we have two bundles, style and jsall. Each bundle might have a filter, which for the first one is cssutils, which is used to minify the CSS, and will have an output, which in this case, for simplicity, is just a hard-coded file name. So everything inside the bundle will be minified and squashed into that style.css file. And you can have content which can even be a bundle itself, because you can see a point where we have a content sub-entry which has CSS files inside and provides a different filter, which is libsass, which is used to convert Sass to CSS, of course. And libsass itself is a Python package, so you can actually just add it to your setup.py and have everything managed through your package manager. So what happens when our system needs the style bundle is that it will actually end up compiling, performing all the transformations from the nested bundles. In this case, it will start by transpiling all the Sass files to CSS files. Then it will perform the outer part, which in this case is the cssutils filter, on all the files we specified and on the result of the previous bundle. And we will end up with a single style.css file which contains all our CSS files, plus our Sass files transpiled by libsass, and the result is then minified too. And the same happens for jsall, which in this case applies the jsmin filter, which, as you can imagine, performs minification of JavaScript.
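The same bundles can also be declared through the WebAssets Python API instead of YAML. A minimal sketch, with made-up file names (libsass, cssutils and jsmin are the usual filter names, as described above):

```python
from webassets import Environment, Bundle

assets = Environment(directory='public', url='/static')

sass = Bundle('css/layout.scss', 'css/theme.scss',
              filters='libsass', output='gen/sass.css')
style = Bundle(sass, 'css/plain.css',                       # nested bundle: Sass first,
               filters='cssutils', output='gen/style.css')  # then minify everything
jsall = Bundle('js/jquery.js', 'js/app.js',
               filters='jsmin', output='gen/dependencies.js')

assets.register('style', style)
assets.register('jsall', jsall)
```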
So the jsall bundle gathers all those JavaScript files, squashes them into dependencies.js and minifies them, of course. And to use them from your frontend it's actually pretty simple, as WebAssets will provide you with an environment that has knowledge of all the bundles you created in the configuration, and you can just inject the resources through that environment. In my example, the environment is owned by the global application object, which is that g object you see there. What's happening is that I loop over all the resources of the style bundle and create a new link entry for each CSS resource. You might be asking why I loop, as I actually have the single bundle named style, so there should be only one. The reason why I loop is that if you run WebAssets in debug mode, it won't perform the minification and squashing anymore, so you can debug your resources separately, and then when you are sure that things work as you expect, you turn debug mode off and you end up having a single resource. So with this loop in place you are sure that everything works both when you run in debug and in production mode, so both when you have a single resource and when you have multiple resources generated by WebAssets. And the same happens for the jsall bundle: I just loop through all the URLs provided for the jsall bundle and inject a script tag for each of them. In this case the example is made using the Kajiki template engine, or Genshi, the syntax is the same actually, but it works in any template language. If you look at the WebAssets site you will find the samples in Jinja and so on. But we didn't really remove the problem totally, because for some more complex filters we will still need to have Node.js available. For example, if I want to convert my ES6 code to ES5, to JavaScript that the browser is actually able to run, I will probably need to bring in something like Babel. And Babel is implemented in Node.js, so I will need to install Node.js and tell WebAssets where it can find the Babel executable. Which is actually not really good in my opinion, because I didn't really solve anything: if I need Node.js to run Babel, at that point it makes sense to just use Node.js for everything and not have parts of my assets pipeline on one side and parts of my assets pipeline on the other side. And that's the point of this talk actually. What I wanted to do is solve this problem, not having to rely on Node.js at all in my Python environment. And that's why I created DukPy. DukPy is actually a replacement for Node.js in many ways, but it's specifically meant for assets management. So you won't be writing web applications on top of DukPy; it doesn't have the concept of a request, it doesn't have a server inside. It's just a JavaScript interpreter with built-in transpilers for the most common environments and most common languages and things like that. So we can now just have DukPy as a dependency in our setup file, in our requirements, and we know that whenever we need something that relies on JavaScript we just have DukPy installed, without having some external tool taking care of installing it and so on, because it's just a Python package. And it is a Python package which actually has no external dependencies; DukPy itself comes self-contained. The only thing you will need is a C compiler, because currently there is no binary distribution, mostly because for Linux environments it's not totally clear yet how that should work in a reliable way.
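At its core that is all DukPy is, a JavaScript interpreter you call from Python. A minimal sketch of using it directly:

```python
import dukpy

# Evaluate some JavaScript and get plain Python objects back.
result = dukpy.evaljs(
    "var data = [1, 2, 3];"
    "data.map(function (n) { return n * 2; })"
)
print(result)   # -> [2, 4, 6]
```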
The package itself doesn't use any C library apart from libc, though. So as long as you have GCC installed, you don't need anything else to actually install and compile DukPy. You just run pip install and you will end up with DukPy installed and working. And the reason why I created it is that I wasn't really satisfied with the other existing solutions, because things like PyExecJS, SpiderMonkey or V8 bindings for Python require external tools like V8 and SpiderMonkey, and it's usually really hard to build those. I don't know if any one of you has ever tried to build SpiderMonkey, but it's something where we once spent like two days only trying to get a binary out of it that worked. So it's not really easy to have them integrated in your install and build process. DukPy is also explicitly tailored for web development, which means that most things you will need are probably built into DukPy to make your assets pipeline in Python itself. A simple example is actually compiling CoffeeScript. I don't need to install anything, because in DukPy itself there is the CoffeeScript compiler built in. So I just import dukpy, run the coffee_compile function, and I get the JavaScript generated out of that CoffeeScript. And you should notice that this is not something that will have major problems or compatibility issues, because it's not a CoffeeScript compiler reimplemented in Python; it's actually the real CoffeeScript compiler, in JavaScript itself, running on top of DukPy. So whenever the CoffeeScript compiler is updated with a bug fix, a new release, new support for a language feature or whatever, it will just be a matter of replacing a JavaScript file, maybe fixing two or three things inside that file, and then the next release of DukPy will have support for that without major issues. And the same applies for Babel. I can convert my ECMAScript 6 to plain JavaScript just by calling the babel_compile function, and you see that I get plain JavaScript out of my class declared in ECMAScript 6. And also for TypeScript. So you can actually create an Angular 2 web application using DukPy with no need for Node.js at all. I actually did it for real. You can declare your application, your class, in TypeScript and compile it, and you will get compiled JavaScript out of it. And as I told you, Angular 2 works perfectly on top of DukPy. So this actually solved the problem of compiling and transpiling my resources, my most complex resources. For simple things, I can use WebAssets, which already provides all the filters I usually need. And for more complex things, like transpiling TypeScript, ECMAScript 6 and so on, I can rely on DukPy, which provides the filters for WebAssets itself. So I can just import from DukPy the filter for TypeScript, import from DukPy the filter for BabelJS, register them into WebAssets, and from that point on I will have support for TypeScript or BabelJS inside my bundles. In this case you can see, for example, that I added a bunch of ECMAScript 6 files which are compiled and minified into the jsapp bundle, and that's declared by the fact that they use the BabelJS filter, which is provided by DukPy. And you don't stop there. You don't even need npm anymore, because DukPy has a package manager for the npm registry built in.
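Before getting to that package manager part, here is a rough sketch of the compile helpers and the filter registration just described. The coffee_compile and babel_compile helpers are the ones mentioned above; the dukpy.webassets import path and filter class name are assumptions to check against the DukPy version you use, as is the exact shape of babel_compile's return value.

```python
import dukpy

# CoffeeScript -> JavaScript, using the real CoffeeScript compiler running on DukPy.
js = dukpy.coffee_compile('add = (a, b) -> a + b')

# ECMAScript 6 -> ES5 via Babel; depending on the DukPy version the result may be
# the compiled code directly or a dict containing it.
es5 = dukpy.babel_compile('class Point { constructor(x) { this.x = x; } }')

# Registering a DukPy-provided filter so WebAssets bundles can use it
# (a similar filter is provided for TypeScript).
from webassets.filter import register_filter
from dukpy.webassets import BabelJS   # import path assumed

register_filter(BabelJS)
```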
So, about that package manager: if you need to have a JavaScript dependency in your Python program, you can just use the DukPy install_jspackage function, specify the name of the package, the version that you want to install, and, if you want, optionally the directory where you want to install it. By default it will install it inside the js_vendor directory of your web application, which it knows from the WebAssets environment. And it will also install any dependency of the package: if your JavaScript package has dependencies itself, it will end up installing them all too. And if you mix this with setuptools, with its setup_requires option, you can say that your web application setup requires DukPy, and have all the JavaScript dependencies installed by setuptools itself. So when you do pip install mywebapp, you get all your Python and JavaScript dependencies installed, without the need of any external dependency manager. The only thing you should notice is that DukPy is not as full of features as the original npm one. For example, in case of a collision, in case two different packages require two different versions of a package which collide, which are not supported one by the other, DukPy currently would just take the newest one. So it takes for granted that the newest one should work with both of them, but will not do more advanced things like filtering on the minor version and things like that. So in some cases, if you only specify the high-level dependency, you might end up with colliding dependencies installed, but that's something you can solve by just specifying the precise version for each dependency you want to add. I plan to extend this behavior by providing full dependency, collision and version resolution, because the dependency resolution is already provided, but the collision part is not. Currently it has not been a major issue for me, because I tend to just specify the precise version of each of my dependencies to make sure that the software is always reinstallable, even ten years from now. And one really interesting thing is that DukPy is compatible with Node.js also for the requirements of packages. DukPy provides a require function which is able to import Node.js packages. And that makes it possible to use something like React to render your components from server-side code in Python. So we can actually create isomorphic web applications in Python alone, without any need to mess with Node.js anymore, because we can render the isomorphic part, the part that uses React, from our Python code by compiling the JSX with DukPy and then running the renderToStaticMarkup call, which will render the React component to plain markup. We can inject this markup into our template, and then React on the client side continues from the markup we generated. Actually, if you want client-side React to continue from the markup we created on the server, you should be using renderToString instead of renderToStaticMarkup, but that doesn't matter; you just switch the function name and things work. So we can provide the first version of your web page rendered from the server, so the user sees the result instantly, and then the client-side React kicks in and continues from there without any problem, because we actually ran the real React code on our server. And not only that: if you need to export your Python code and make it available inside JavaScript, then just as you can call JavaScript from Python, we can actually call Python from JavaScript.
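A sketch of what that direction looks like, exporting a Python callable and invoking it from JavaScript through the interpreter's call_python hook. The export_function and call_python names here reflect the DukPy JSInterpreter API as described next, so double-check them against your version.

```python
from dukpy import JSInterpreter

jsi = JSInterpreter()
jsi.export_function('sort_numbers', sorted)   # make Python's built-in sorted() callable from JS

print(jsi.evaljs("call_python('sort_numbers', [3, 1, 2])"))
# -> [1, 2, 3]; arguments and results cross the boundary as copies, not references
```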
For that we just use the export_function feature of DukPy, and in this case we export the sorted function, which is built into Python and sorts whatever you throw at it. We export it as sort_numbers, so inside JavaScript we will be able to call it using call_python with sort_numbers, passing the numbers that should be sorted, and we will get back as a result, of course, the sorted numbers. And you don't have to care about references to the objects and memory management, because there is a choice I made, which is to pass everything by value. Every value you pass back and forth between Python and JavaScript is actually copied; it's not the original object itself. This allows a much simpler resolution of the memory management problems with references, and you won't end up with leaking or dangling pointers in Python because some code in JavaScript is leaking memory which holds a Python object. Leaking memory in JavaScript is pretty easy sometimes, but that won't actually do anything to your Python code, because you just pass back and forth copies and not the original objects. So this actually did everything I needed. I was really happy with DukPy as a solution, because I could actually manage all my dependencies from setup.py without the need to maintain npm or an external tool that maintains both npm and pip. I could perform all the transpiling in Python, so if I needed to add a feature or change something in my transpiler, I could just mess with Python code. And DukPy has actually been a quite well performing solution for me. I would say you should not use it in production, because there is a lot of C code. I mean, not in production: you should not use it in the live running web application. You should use it during the packaging of the web application, because there is a lot of C code inside, and I cannot guarantee that you won't crash with a segmentation fault while handling 2,000 requests a second and things like that. But for everything that is related to packaging and building the resources and so on, it has always worked without any problem for me so far. The bugs that I found have been solved pretty quickly, and it has been a few months now that I have been using it without finding a new bug. So if you want to try it, feel free to. DukPy actually works for practically any version of Python in use, from 2.6 to 3.5. And if you find any bug, feel free to open an issue on GitHub, because it's totally open source. It's fully tested; I guarantee there is one hundred percent coverage on all the DukPy code, and I have examples that ensure that all the transpilers still work whenever I update the JavaScript-side code of everything. And to use it, you just have to pip install it and have fun with it. So thank you. If you have any questions. Thank you. Thanks very much. It's very cool to not have to run Node.js on my web server, so thank you for that. Do you know if your project will work with the Less CSS compiler? It should. I didn't try it on DukPy itself, because you have a Less compiler for WebAssets; I mostly use Sass, but I know there is a Less compiler for WebAssets itself, so it never came up as a need for me. But it should be a matter of just loading the JS file of the Less compiler, running it with DukPy, and seeing if it does what you expect. Usually it does. The only problem you might face is with regular expressions, because DukPy actually applies the JavaScript standard more strictly than Node.js.
So some syntaxes that Node.js considers valid in regular expressions are actually not, and DukPy will tell you, hey, this is not valid, you need to escape this part of the regular expression. As far as it's just a matter of fixing two or three regular expressions, it usually just works. Thank you very much. A semi-related question: have you investigated at all the state of pure Python JavaScript interpreters? I'm just curious. That's a complex question, because, yes, I did, like a year ago. I tried to use some of them, but I'm not sure; at least a year ago, none of them was actually resilient enough that you could throw a Node.js library at it and it would just work. For example, DukPy has invested quite some time into providing compatible support for the require function, to make sure that the dependency resolution and so on works exactly the same as Node.js. So I did it, I did it some time ago, so it might be that the situation has changed. I just wanted to know the answer. Okay, yeah. All right. I think that was our last question. Thank you very much. Thank you.
Alessandro Molina - Moving away from NodeJS to a pure python solution for assets When working with WebApplications it is common to rely on an asset management pipeline to compile scripts, minify css or preprocess images. Most of the tools available today rely on JavaScript to perform those steps and always forced Python developers to rely on NodeJS to have grunt perform the pipeline tasks, coffee-script to compile their CoffeeScript or lessc to build their css. This causes longer setup times for projects newcomers, complex development environment, working with two package managers and dependencies that you use once a week but still need to be there. The talk will showcase the DukPy project and focus on how it is possible to build a pure python asset pipeline relying on DukPy to run javascript tools and WebAssets framework to perform the most common tasks that usually Nodejs and tools like Grunt handle for us, greatly reducing the development environment complexity and making its setup as simple as ‘pip install’. The talk aims at explaining the complexity of managing an asset transformation pipeline through tools like Grunt, especially during deploy, test suites or when a new development environment has to be created, and showcase how this complexity can be dodged by using tools like WebAssets and DukPy. No more need to keep around two languages, two package management systems and manage your dependencies between them by youself. Just pip install your app and have it working.
10.5446/21087 (DOI)
This is the CloudABI talk, just make sure you're in the right one. Yeah, so we have Alex Willmer speaking to us about capability-based security on Linux. Greetings, everyone. Thank you for coming to this briefing on the inquiry into the Sol III defeat 20 cycles ago. Contents of this briefing are classified Duchess Royal Budline. Anybody who does not have that classification must leave the room now. OK, with the formalities over, we can begin. My name is Alex Wilmer. My mother was Susan Wilmer. She was chief of docking during the Sol III harvest 20 cycles ago. It was her that allowed that fateful ship to dock. Scout ship TLV 3495. The one that we'd presumed destroyed aeons ago. This was the ship that was carrying two human cable repair engineers. Those cable repair engineers were carrying their Jolly Roger super weapon. That led to the destruction of the entire fleet and the loss of the Sol III harvest, along with nearly a billion minds. I have been part of the team investigating the reasons for this defeat for the last 15 cycles. There were many contributing factors. Our synchronization signal impinged on human communication bands. They were able to detect this and from this calculate the time of harvest. This resulted in their human leaders surviving the initial harvest attack. There were numerous smaller incidents, such as trainee GX firing on a human welcome wagon. We've seen such attempts to communicate before. They've of course never been successful. But in this case, critical seconds were lost in confusing the humans, and they were able to escape the initial fireball. Another example I would like to highlight. Following the initial counterattack by the humans, which was of course futile, their kinetic weapons, their missiles, could not penetrate our energy based shields. But in one case, a downed ship did lead to the capture of the pilot. The pilot was taken to the human leadership, where the pilot was tortured, interrogated, mind probed. During this, the pilot did reveal our negotiating position, our harvest tactics and our general disposition. This resulted in counterattacks by the humans of a thermonuclear nature. Of course, this was still futile. But these were contributing factors. Finally, there was one more that I'd like to highlight. The captured craft was not challenged, was not questioned when it approached our main harvest ship. This allowed it to gain access to the command carrier. This allowed the humans to gather intel on our initial invasion plans. All of these pale into insignificance next to the principal reason for the Sol III defeat: the capture of the scout ship. From this capture, the humans learned of our existence. They learned of our biology. They learned of our technology. Critically, they learned of our UNIX operating system. From this UNIX, from our technology, the humans went on to develop various things. Human code words include Roswell, Area 51, UNIX, Bell Labs, ARPANET, AOL, E-mail. All of these are pale imitations of our consensus net, of course. But they gave the humans a critical foothold into our protocols and systems. That allowed them to upload a virus to Reclamation Pump 369282. That reclamation pump then communicated on consensus net, spread, sent commands, fleet-wide. Resulting in the disabling of all protection fields. From this, the humans were then able to use one of their primitive thermonuclear devices, destroying our carrier ship. Our thoughts, of course, go to all the families of those aboard. So for the past 15 cycles, we have been carrying out the investigation.
There are numerous lessons that have been made in procedure and command decisions. This briefing will concentrate on some of the technological implications. We find that the root cause analysis, pump mon running on that pump, was vulnerable to the humans' attack. That is how they got their foothold. That is how they were able to instruct all defense fields to switch off. Without that, their attack would have been useless. The problem with pump mon was not a simple buffer overflow or stack smashing attack. The problem was more architectural. Pump mon had numerous capabilities that it did not need in order to fill the role of monitoring that pump. It could read global files, it could monitor processes, it could create network sockets to other places on consensus net. All of these were unnecessary and all of these were exploited by the human Trojan. The table you see is a quote from the report. Please refer to that if you need the full details. The architectural flaws of UNIX boil down to discretionary access control. That is, access control is not enforced by default. There are things that are open that do not need authenticated access. This means that programs on UNIX systems start with excessive capabilities. Once compromised, programs can acquire further capabilities simply by opening them. There are global resources and global states throughout the UNIX system. This obstructs running programs securely. It obstructs writing testable programs because tests have to try and inject these normally global resources inside a restricted test environment. It obstructs writing reusable programs because these programs assume a full UNIX operating system and it is very difficult to audit them to say what do they actually use. System administration just does not work at harvest fleet scale. Beyond a million nodes, we just do not know what these systems are doing. Our team would like to propose a human technology that has actually been adapted from their reverse engineered version of our UNIX. This human technology is called Cloud ABI. It is a relatively recent invention for the humans, approximately two years old. Under Cloud ABI, programs start with the ability only to spawn threads and to allocate memory. Unless they are provided with external resources, they cannot access them. They cannot acquire further access to external resources. They can only do that through the capabilities provided to them when they are started. The implications of this are it is safe to run an unknown Cloud ABI binary if it is given no resources. The worst thing that it can do is allocate too much memory and burn through CPU. As a result of this, with explicit capabilities passed into the program at startup, it is much easier to audit these programs to say what they need. As a consequence, it is much easier to test these programs. This leads to better release engineering and could allow for higher level orchestration, the ability to migrate processes between hosts rather than virtual machines or containers. This could lead to more efficient resource use in fleets and certainly to more secure resource use. To give you a bit of background on this Cloud ABI technology, it was initially developed by a human called Ed Shuten. He is located in the European continent. It was initially for the human derivative of our Unix called FreeBSD. It is now available for multiple human operating systems and is compatible with our SenseNet and CloudABI and original Unix. Some of you may be familiar with human technology called Capsicum. 
CloudABI is derived from this Capsicum project. In Capsicum, processes initially get access to global resources and can acquire further resources just like any other Unix process. But a Capsicum process can call a function called cap_enter, after which syscalls that allow it to acquire further resources are blocked: they return an error and/or result in the process being killed. This allows for more secure processes after they have left their initial startup phase. The problem with this Capsicum project is that integrating an external library into a Capsicum process causes runtime errors, strange behaviors, heisenbugs. Because a library buried deep in the call stack might try to open a file, might try to initialize a pseudo-random number generator from a device, and then fail and fall back to a less secure method such as the time of day or the current PID. The innovation that CloudABI makes is to turn Capsicum into the default. It is always on. CloudABI processes cannot call open. They cannot see global resources such as process tables, file systems, or user databases, unless explicitly given access. To give you an idea of what we remove, all of these APIs are unavailable to a CloudABI process. The first category is simple common sense. These are APIs that were not greatly designed in the first place, or they tend to result in buffer overflow bugs. There are thread-safe, buffer-safe alternatives already available for both Unix and CloudABI. The second category is basically the Unix file system. On a Unix operating system, a process can open, or attempt to open, any file by its path. This is impossible in CloudABI. There is no open function. There is no stat function. There is no getpid. There is no getuid. Next, we move on to the mutable state functions. These are ones that tend to have an effect process-wide, regardless of whether a process is multi-threaded. These are removed because they make programs harder to reason about; removing them simplifies the API, and there are thread-safe alternatives. Standard in, standard out and standard error are also removed, simply because they are a global resource that should be explicitly declared. As for argv, the method of passing arguments to a CloudABI process is incompatible with argv, which relies on acquiring resources based on string values. This is disallowed. By removing these things, we add one simple concept: Unix file descriptors become capability tokens. These are the tokens by which a CloudABI process acquires all resources. All APIs in CloudABI that allow acquisition of new resources require an existing file descriptor to be passed in. A file descriptor might describe a directory, a file, a socket, or even the handle to control a sub-process. The second thing we add is a single application binary interface. This means that a CloudABI process, once compiled on any Unix system, native or human, will run on any other Unix system without recompilation. The ABI is available for the following human systems: FreeBSD, Arch Linux, Debian, Ubuntu. It is even available for their Mac OS. The support is in progress on the Linuxes, but with the next release of the humans' FreeBSD, it will be a native feature. So it's best at this point to illustrate with an example. We'll be taking a very simple, cut-down case of the Unix utility ls. This takes the name or the path of a directory and prints out the names of the files and folders inside it. This is a very simple example, stripped down to illustrate our differences.
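As an aside, the two designs discussed next can be approximated in ordinary Unix Python. This is not CloudABI code, just a sketch of the difference between taking a path string and taking already-open file descriptors; the names are made up.

```python
import os
import sys

# Naive version: takes a path string; the operating system will open
# whatever the string names, on behalf of the process.
def ls_by_path(dirpath):
    for name in sorted(os.listdir(dirpath)):
        print(name)

# Capability-style version: takes an already-open directory descriptor and an
# output stream; it cannot name or open anything else.
def ls_by_fd(dir_fd, out):
    for name in sorted(os.listdir(dir_fd)):   # os.listdir accepts a directory fd on POSIX
        out.write(name + "\n")

if __name__ == "__main__":
    # In the capability-style design, fd 0 is the directory to list and
    # fd 1 is the stream to write to; both are provided by the launcher.
    ls_by_fd(0, os.fdopen(1, "w"))
```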
You will note that the first version takes in a string and assigns it to a variable, say dirpath. It is then passing this string down to the operating system, and the operating system is acquiring resources on behalf of the process. If we did not see the source code of this process, we would not know what it is capable of. It might list the directory and send those results back to the humans for further analysis. It might encrypt the contents of the directory, it might delete them. It could do any number of things we don't know about without fully auditing the source code. Using some of the features of Unix, we can come closer to a CloudABI design. In this one, the ls program does not take any string input. It receives only file descriptors. File descriptor 0 is the directory that we are trying to show the contents of. File descriptor 1 happens to be standard out. Given this model, if the program was unable to pass strings to the call to open and the call to listdir, we could say that this process was not able to do anything other than act on the resources we provided. We could say that it had read-only access to a single directory and everything below it, and write-only access to a single file stream, namely standard out. The problem with this model is that it becomes very inflexible to pass in file descriptors in the exact sequence that they will be used by the program. So the CloudABI system relies on a new mechanism called argdata. In argdata, there is a set of APIs to gather file descriptors according to a tree structure. Developers can acquire these by key name, as lists of file descriptors, or as maps. In the example you see, we use a helper program called cloudabi-run to map a YAML file containing a description of the input to the program to the file descriptors that the program will receive. In this example, the Python executable is not a Unix executable, it is a CloudABI executable. Therefore, during the build of this Python executable, any reference to standard in, standard out, standard error, the C-level function open, the C-level function stat, or the C-level function opendir would have resulted in compile-time errors. As a result, we can safely say that this execution of this Python script cannot do anything except read the contents of a single directory and write the output to a single file descriptor. This makes this process safe to execute without trusting its source. We need only know what inputs we have exposed to that program. The inputs are explicit, not implicit. A further example; it should be mentioned at this point that this example is, at the moment, hypothetical. The Python port to CloudABI is in progress; it cannot currently do this. Other programs, written in C, are fully ported, and there is a cloudabi-ports set of packages available. To give you a further example illustrating what might be possible, we show here an example configuration for a web server. The web server binary itself would not have its own configuration file. It could not read that file unless provided, and that file would contain strings referring to paths which the web server would not be able to open. In this example, we combine arguments and configuration into a single file, and this file is provided to the cloudabi-run helper in order to acquire resources on behalf of the web server.
It could not open a network connection to send the contents of any acquired data out to the world. All it could do is serve network traffic on the socket that we have provided. At this moment, we ask what can we do in the future with this cloud ABI system? We might imagine a future where software appliances can safely run customer provided plugins or third party plugins without exposing the internals of their system or the entire operating system. These plugins will be provided with a limited set of file descriptors and would therefore be constrained in what they could do to affect the outside world. We might use this to isolate vulnerability, vulnerable systems such as pumpmon, or transcoding libraries for security cameras from fleet-wide security systems. By this means, we could avoid problems in error-prone libraries such as the human library image magic, or the various video encoding libraries that have extremely complex input requirements and, as a result, tend to have many vulnerabilities found. We might imagine the ability to use cloud ABI in order to implement the human system Amazon EC2 without the overhead of virtual machines or containers. Similarly, we might imagine the human system Google App Engine with the ability to submit programs written in any language, C, C++, Rust, assembly language. In theory, these would be safe languages to implement programs in and allow them to be uploaded to a third party cloud service without virtualization. This would allow us to compose applications, not containers. I shall now show you a brief demo of what has been achieved with the human language Python and the cloud ABI system. Of course, it would help if I showed this on the right screen. In order to run a cloud ABI program on this system, we can use the cloud ABI run helper. The Python binary you see has been compiled against the cloud ABI system headers and version of libc. The Python binary itself cannot accept standard input or write to standard output. The file we are providing is going to the cloud ABI run program, which is a UNIX program. It is opening resources on behalf of the Python binary. The Python binary is then receiving file descriptors. The demo gods. The contents of that YAML file look like this. At the moment, the Python binary is a work in progress. This is the first thing that got working with it. We have transliterated the native UNIX Python arguments into YAML keys, and the command is given verbatim. We are also able to execute system calls. In this case, the Python script, the Python binary, are burning CPU and then printing the result out to standard error. As a result of cloud ABI, though, this is the worst that this process can do. We can kill it and know that it has done no damage to the system as a whole, because it did not have access in order to do that damage. We can look at the contents of resource.yaml. We see that all it had access to was read-only access in order to import its standard library, write-only access to the standard error file descriptor, and the ability to execute a simple syscall. The work will continue on the Python port to the cloud ABI system. There will be a sprint running at the human event EuroPython 2016 on Sunday. If you would like more information, please visit these addresses on the human network. Our usual network tabs on their networks are in force. Thank you very much. So we have some time for questions. Hi, I'm Yarny. Thanks a lot for the talk. Hail the Queen. Okay, awesome. 
So I'm wondering, you know, we have in the community a lot of tools that attack the same problem. We have AppArmor, we have things like OpenBSD, and SELinux from the NSA, no backdoors, I promise. So why another system? So the problem that we have found in our experience with AppArmor, SELinux and such systems is that the incentives with them tend to be wrong. It is not the creator of a piece of software that configures those systems; it is typically the distributors and the system administrators. So as a result, the configuration of the protection system, such as AppArmor or SELinux, tends not to be in sync with the requirements of the programs that are running. So all too often administrators in the midst of battle on a fleet ship will typically just turn them off. More so with inexperienced administrators, but even seasoned veterans of multiple harvest campaigns have been known to switch these systems off when there is incoming fire. Yes, okay. Another question, if I may. Okay. So is this thing production ready? What's the overhead? And what is the biggest app you are currently running with it? The system is still in its early phases. It was conceived approximately two Earth orbits ago, around two and a half cycles. The creator has been working on it quite a while and is an experienced developer, as humans go. The Python part of this is most certainly not production ready; it would be tricky even to call it alpha. Unfortunately, the human responsible for its development, well, some inconsiderate human gave them a job, so there was not time to complete it before this briefing. Hi, thanks for the talk. I'm curious about the support for binary tools like BusyBox. Is it planned to add them to the CloudABI ports? And we have seen a pattern commonly followed by humans: they tend to use Linux a lot and avoid FreeBSD, even for security-focused tools. And we are also seeing the proliferation of a new tool called Docker, which is taking a different approach to security. Does the acceptance of Docker threaten the future of CloudABI? Are we investing time in CloudABI when we are going to face a different problem in the next harvest? So I'm pleased to report that the next harvest fleet is on its way to Earth and they will pay for their treachery. The human technology Docker provides similar benefits to CloudABI. It has slightly higher overhead and is restricted only to their Linux operating system. The CloudABI support for Linux is 90% complete. It lacks integration with their distributions at the moment. We are working to improve this. What was the other part of your question, please? There is a repository available of human derived software called cloudabi-ports. There are over 100 packages in this. I do not believe that BusyBox is one of them. The CloudABI model is better suited to long running daemon processes than to interactive use. It can become quite cumbersome to provide all the file descriptors to CloudABI binaries in interactive use in a shell. That is possibly a future development. If you wish to see if a package has been ported, I recommend visiting the cloudabi-ports link that was included in your notes. Any other questions? Thank you very much. Thank you.
Alex Willmer - CloudABI: Capability based security on Linux/Unix Take POSIX, add capability-based security, then remove anything that conflicts. The result is CloudABI, available for BSD, Linux, OSX et al. A CloudABI process is incapable of any action that has a global impact It can only affect the file descriptors you provide. As a result even unknown binaries can safely be executed - without the need for containers, virtual machines, or other sandboxes. This talk will introduce CloudABI, how to use it with Python, the benefits, and the trade-offs. ----- [CloudABI] is a new POSIX based computing environment that brings [capability-based security] to BSD, Linux, OSX et al. Unlike traditional Unix, if a CloudABI process goes rogue it _cannot_ execute random binaries, or read arbitrary files. This is achieved by removing `open()` & any other API able to acquire global resources. Instead a CloudABI process must be granted _capabilities_ to specific resources (e.g. directories, files, sockets) in the form of file descriptors. If a process only has a descriptor for `/var/www` then it's _incapable_ of affecting any file or folder outside that directory. This talk will - Review the security & reusability problems of Linux & Unix processes - Introduce capability-based security - Summarize the design of CloudABI - its benefits & trade-offs - Demonstrate how to write Python software for CloudABI & run it - Point out the pitfalls & gotchas to be aware of - Discuss the current & future status of CloudABI CloudABI began life on FreeBSD. It also runs DragonFly BSD, NetBSD, PC-BSD, Arch Linux, Debian, Ubuntu, & OS X. The API & ABI are kernel agnostic - a CloudABI binary can run on any supported kernel. The design is evolved from [Capsicum], a library that allows processes to drop access to undesired syscalls at runtime. CloudABI applies this at build time to make testing & lock- down easier.
10.5446/21089 (DOI)
Welcome everyone. We'll have a talk by Alexander Steffen about testing C code with Python. Please welcome Alexander. Hello everyone. Thanks for joining the session. I work as an embedded software developer, so I write firmware for microcontrollers mostly. Unfortunately this is mostly C code, not yet Python code. Recently we've ported MicroPython to one of our controllers, so somehow we're getting better. Before I start with the talk I'd like to know a bit more about you and your experiences with unit tests. So if you've written any unit test in any language yet, please raise your hand, just to get an overview. Okay, great, that's most of you. And who of you has written unit tests for C code, probably in C then? Okay, that's probably about half of you. And the last question then is who enjoyed the experience, especially if you compare it to writing Python code instead. Well, a single guy, yeah, perfect. So then maybe I can show you a more fun way to write unit tests for C code. Now you might wonder what my motivation is for that, and some of it can probably be summed up with this quote here: that the C language combines all the power of assembly language with all the ease of use of assembly language. So with C you've got control of everything and you can control everything, but you usually also have to control everything. You need to do everything yourself. There's little support from the language, and for the testing stuff you probably don't need all this power. You're not constrained by resources, and you don't have the performance requirements that you might have in production code. So you could actually use a higher level language to make it easier for you to write your test code, and not do everything in a low level language like C. Now let's look into that in a bit more detail. If you write unit tests for C code with C code, then there are some good things. You've got the same language everywhere, so as a developer you do not need to switch context between different languages, different styles, different syntax. And it might also be good for a lazy developer who only knows a single language. And of course, if you're working in an embedded environment like I do, then you might be able to run your unit tests on the target device, or at least on a simulated device, so that if there are bits and pieces of your code implemented in assembly, for example, you can also test those. But it's also a bit limited in some ways. I already talked about the limitations that the language imposes on you: you can only use C constructs, which are not as powerful as Python constructs, for example, and you need to write much more code than you would in a high level language. But you're also limited by what the framework has to offer you. And if you look up unit testing frameworks for C code, there are tons of frameworks out there, but most of them are very basic. They don't offer the advanced features that you might be used to from unit testing frameworks for Python code, for example; only a few frameworks offer mocking, for instance. And in the end you're also limited by what the ecosystem has to offer. For example, we would like to test some cryptographic algorithms in our implementations, and of course you can call into OpenSSL to verify some calculation, but it's not really that easy, and it might be nicer to do that in Python.
So the first example is the most basic thing I could think of. We've got a single function in our C code and that just adds two integers and returns the result. So this is the header file, the public interface that we want to unit test and this then is the implementation of that function. It just adds the numbers and returns the value. And if you write a unit test for that it could look like this. So as usual with Python unit tests you've got a test case class as a container for all your test cases. The single function in there then is your test case. We've only got one here and it's rather simple. It loads in the source code that I've shown you before, creates a module out of that and then has an object on which it can call the functions that is defined in the module. This function returns the result and we can assert that the result is really correct. Now you don't see any C code in here and no construct that really do anything with the C code from before. The only mentioning thing that you see is the name of the module, the parameter for the load function and this is where all the magic happens. So let's look into that. The load function here consists of three steps. First it loads the source code on the module so it opens the C file, it opens the header file and reads out the source code and then it uses CFFI to build a Python module out of that source code. There are three calls that you need to make on the CFFI object for that. The first call, the Cdef call will tell CFFI what interface it has to export to our Python code. So we pass in the header file contents that defines the public interface. We want to test that so CFFI needs to generate the interface for us. Then with the second call we need to tell CFFI about the implementation of the function. So we pass in the source code here and the last step then is for CFFI to actually build the module that we want to have so it runs a C compiler in the background, builds the module and in the end as the last step we can import that module and return it to our test case. And that's really all you need from this example that I've shown you before. Now I've got three more examples that all built on this implementation so I'd like to quickly ask whether there are any questions for this example already so that you can better understand the following examples. You mean if you had more than one source file? Yeah, because in this example source file I have some sort of includes and dependencies to other source files and how do I cope with that? I have to compile them also and link them somehow. How does it work? Yeah, I've got some more complex examples of multiple files and with external dependencies and we'll show that later. Okay, any more questions? Otherwise I'll continue with the second example. The second example is still rather basic. We've got again a single function that you can call multiple times and we'll just add up all the parameters as we pass into it and return the current sum. This is its interface and this again the implementation. So now we've got a global variable that we use to sum up everything. The function just adds to it and returns the current value. And the unit tests now look like this. To make matters a bit more interesting, I've implemented now three unit tests, not only one. And so that I do not have to repeat this load call in every test case again. I use the setup method. This gets executed before each test case is run. 
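Stepping back for a moment, that first example's load() helper and test boil down to roughly this sketch (the adder.c and adder.h file names are made up; in the later examples the generated module name is additionally made unique to defeat import caching):

```python
import importlib
import unittest

import cffi


def load(name):
    # Read the implementation and its public interface.
    source = open(name + '.c').read()
    header = open(name + '.h').read()
    ffibuilder = cffi.FFI()
    ffibuilder.cdef(header)                        # interface to expose to Python
    ffibuilder.set_source(name + '_test', source)  # implementation to compile
    ffibuilder.compile()                           # runs the C compiler behind the scenes
    return importlib.import_module(name + '_test').lib


class AddTest(unittest.TestCase):
    def test_add(self):
        module = load('adder')
        self.assertEqual(module.add(1, 2), 3)
```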
Back in the second example, that setUp method will load the module before each test case, and the test case can then access the module just as before: you can call the function there and assert that the results are correct. But if I were to run this test case with the load function that I've shown you before, it wouldn't work. And why wouldn't it work? Well, in the source code there's this global variable, and the load function that we had before just imported the module at the end. If you know a bit about how importing works in Python, those imports are cached. So if there are multiple test cases running, the first one will actually import the module and initialize the global variable; all the other test cases will just get the cached import back, and it won't be initialized again. So the assumption of the test cases, that the sum always starts at zero, doesn't hold here, and the test cases would fail. Now, there are several solutions to this, and I'm just going to show you the simplest one. It looks like this: the load function is still the same, just the first line, with the comment, has changed or got added. We generate a random name for the module. This avoids all caching by importing essentially a new module every time this function is called, which might not be the most performant solution and will also use more memory, but it nicely avoids all the problems that you could otherwise have with caching of old data. For this, I use the uuid module, which just generates a random unique ID, and append that to the file name, which is then used as the module name. All the other code in here is the same as before. So each test case can still load the module and gets a fresh copy every time. You could also implement that in a different way and, once you have imported the module, just reinitialize it every time, but that would take more code, so I don't show it here. Okay. Then example number three, and here we are getting to multiple files now, since all the other examples so far were very basic, just a single C file and a single header file. Now we take at least a second header file. We want to do some mathematics with complex numbers, so we define our own structure for that, which has just two fields for the two parts of a complex number, the real part and the imaginary part, and we have that in one header file. Then we also want to implement a function that uses this type. So again we use the example of addition, adding two complex numbers and returning the result, and we can implement it like this: we just add both parts together and return the result at the end. Now the test case for this, again, doesn't really need to know much about the C code. We load the module as before, and you don't even have to deal with the complex type that the header file declared somewhere. When you want to call the add function, you just pass in the lists here, and CFFI will automatically generate structures from that, so that the C code is happy and gets the correct values. And the result of this function call is also a nice Python object, where you can access the parts of the structure by their normal names and can assert that all the results are correct. But again, for this example to work, we can't use the previous implementation of the load function. Because the previous implementation just looked at the source file and the header file of the module that we want to test, it doesn't really know about the other header file that we also need.
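The fix described next amounts to running a C preprocessor over the headers before handing them to CFFI, so that no include directives are left. A minimal sketch of such a preprocess step, assuming gcc is available on the machine:

```python
import subprocess


def preprocess(header_source):
    # Run the C preprocessor so that cffi's cdef() never sees #include directives.
    # -P suppresses the linemarker comments gcc would otherwise emit.
    return subprocess.run(
        ['gcc', '-E', '-P', '-'],
        input=header_source, capture_output=True, text=True, check=True,
    ).stdout
```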
Now, if you remember the source code, you could say, yeah, well, the other header file got included into the module's header file. So it should be present there. But, unfortunately, CFFI cannot deal with these include statements. So what we need to do is we need to run some kind of preprocessor, like the C preprocessor over the source code, so that there are no more include statements in there, no other directive that CFFI doesn't understand. Otherwise, it would throw an error. And this is done with this preprocess call in here. Again, there are multiple ways you could implement that. I've chosen to just run the DCC preprocessor over the source code and get back the results. At the end, I've got one large string that contains the contents of both header files. And CFFI is happy with that. Now, for the last example, it gets even a bit more complex, because now we have some external dependencies. In this case, you can imagine you want to program a microcontroller. And maybe the vendor of the microcontroller provides you with a nice library like this here, where you can read GPIOs using simple function calls. The vendor has chosen to implement different functions for each GPIO that you can access. So he provides you with a library that has this interface here. But maybe in your code, you'd rather like to use this interface. You only want a single function call and a parameter to select the GPIO that you're interested in. Now, you can implement that in your own code. You just look at the parameter, call the appropriate function, and if you get a parameter that you cannot deal with, you'll return some kind of error code. And now, this is the code that we want to cover with our unit test. We don't want to test the vendor's library, so we don't want to use the read GPIO zero or one calls here. We probably couldn't use them in the unit test, because they might access some registers of the microcontroller that aren't there in our test environment. So we somehow need to replace those calls with our mock functions, so that we can run a test case that knows what the GPIO values are. The test case for that looks like this. The first change that you'll notice to the previous implementations is that the load function now returns two values, not only the module as before, but also an FFI object. That's part of CFFI's interface. And we use that in the first test case to replace the C function that we don't want to use with a Python implementation. So we define a function that has the same name as the C function we want to replace, and we tell CFFI, hey, when this C function gets called, please use this Python implementation instead. Don't use the C implementation that you might find somewhere. And so the Python implementation just can return fixed value, and the test case can call the function that we want to test with the correct parameter, and see that the value that defined before is returned in the end. And the second test case for the GPIO number one, it doesn't the same thing, but using a different construct. So in this case, we don't want to really define a function, but we want to use a mock object like you might be used to from the unit test library, and you can do just the same with it. 
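Put together, such a test might look roughly like this sketch. The read_gpio, read_gpio0 and read_gpio1 names come from the GPIO example above, and load() is the extended helper, described next, that also returns the FFI object:

```python
import unittest
from unittest import mock


class GpioTest(unittest.TestCase):
    def setUp(self):
        self.module, self.ffi = load('gpio')   # extended load(), returns (lib, ffi)

    def test_gpio0(self):
        @self.ffi.def_extern()
        def read_gpio0():                      # Python replacement for the C declaration
            return 1
        self.assertEqual(self.module.read_gpio(0), 1)

    def test_gpio1(self):
        read_gpio1 = mock.Mock(return_value=0)
        self.ffi.def_extern('read_gpio1')(read_gpio1)
        self.assertEqual(self.module.read_gpio(1), 0)
        read_gpio1.assert_called_once_with()
```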
So as in that sketch, you configure your mock object to return a value when it's called, and then tell CFFI, hey, this is not a function, but just something callable, use that in place of the C function. The test case then works as well: it can call the function, and at the end you can also use the assert methods that are provided by the MagicMock object. And in this case, again, we need to modify the load functionality. This is, again, for comparison, the old implementation, and we need to add some more code to it for this example to work. There are three changes here, all again marked with comments. The first change is that it's not sufficient anymore to just process the header file of the module; we actually need to process all the header files that are included by this module. So it just uses a regular expression to collect all the include statements, then runs that through a preprocessor, and as a result gets one large string again that contains the content of all the include files of our module. The main work is then done in the next two lines, where we need to tell CFFI which functions we want to replace with Python code and which functions are implemented in our C code. The first line just goes through the source code and looks for all the function definitions, so that we know which functions are implemented by our source code. The second line then goes through all the includes that we have, looks for all the function declarations in there, and whenever it finds a function that is not implemented in the source code, it will tell CFFI, hey, please insert a Python implementation here that we can replace later. The functionality is all there in CFFI; we just need to prefix the function declarations with this extern "Python+C" statement, then CFFI will know, okay, I need to generate some code for that, and this will already make the compiler happy. It will find a reference for this function so it can call it, and we can later replace it with Python code. And in the end, the last change is, as I said before, that we now need to return this FFI object also from the load function, so that the test cases can tell CFFI about the implementations they want to use. Now I'll show you in a bit more detail how this step in the middle works, where we analyze the source code to find the function definitions. This is based on pycparser, and this is the first part, which collects all the function definitions. pycparser will analyze your source code and build an abstract syntax tree out of it, so you can later walk this tree with a visitor class that's already provided, and whenever you hit a function definition, this visit function here is called. It gets the node out of the tree and can just ask this node, okay, what is the name of the function? We add that to a list, and so in the end, once it has worked through the whole tree, you've got a list of all the functions that are implemented in the source code, all their names. And this is then used in the second part, again based on the pycparser module, where we actually parse all the include contents into an abstract syntax tree and then tell pycparser to regenerate the corresponding C code from that, so that we can modify some bits of it. pycparser already has support for regenerating code from the tree, and we just hook into that, and whenever we see a declaration for a function, the visit function for declarations is called again.
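A sketch of roughly what that first pycparser pass, collecting the names of the functions defined in the source, can look like (the second pass, which rewrites the declarations, hooks into pycparser's code generator in a similar way):

```python
import pycparser
from pycparser import c_ast


class FuncDefCollector(c_ast.NodeVisitor):
    """Collect the names of all functions implemented in a preprocessed C source."""

    def __init__(self):
        self.functions = []

    def visit_FuncDef(self, node):
        self.functions.append(node.decl.name)


def find_definitions(preprocessed_source):
    ast = pycparser.CParser().parse(preprocessed_source)
    collector = FuncDefCollector()
    collector.visit(ast)
    return collector.functions
```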
We look at the declaration there and see whether it's a function declaration, and if it is, and the name for this declaration is not in the list of functions that we found in the source code, then we'll just prefix it with the Python plus C statement so that when TFI again parses the source code, it will know what to do with these functions. Okay, this was the last example that I wanted to show you. So to sum up, I want to talk quickly about some of the drawbacks that this approach might have if you're used to other approaches. And one of the main drawbacks is probably that if you use this code as I've shown it to you, if your C code does something bad and tries to access the null pointer, for example, then it will also crash in the test process because the code actually runs in the same process. There are no boundaries between it, so when your C code destroys something, your test will crash. You won't get any nice error reports, and you might not like that. So one solution to that problem would be to run each test case in a separate process and have one main process collect all the results. Then if one test crashes, it just crashes the single test case. The main process can still report on the errors, and all your other test cases will continue to run. This might add a little overhead, of course, because now you have multiple processes running that, yeah, need some more computing time, but at the same time, you can also run your tests in parallel. So if you've got multiple cores, it might actually be faster in the end than running everything in serial. Another big problem might be that debugging your test cases gets harder now because you've got a Python process that calls some C functions that, again, might call some Python functions, and where really do you debug that? You can attach a debugger to your Python test cases, but that won't help you much once you enter C-land. You won't see what the C code does there, or you can attach a C-level debugger, so you can see what your test or what the implementation does, but then you have to deal with all the C calls that are done by the Python interpreter internally, and that you need to skip somehow. So it would be nice, of course, to have some maybe better integrated solution here, some combination of two debuggers, one for the Python side, one for the C side. Let's smoothly hand over control once you enter the other part. Or one could also argue that since we are talking about unit tests here, if you really need to debug your unit tests, maybe you could also think about simplifying your code, simplifying your unit tests, or even the implementation so that you don't need to debug them in order to find a problem, but so that you've got unit tests that really can tell you where the problem is when something breaks. But to end on a positive note, if you're going to remember something from this talk, I'd like you to remember that writing the test cases is really simple. And no matter how complex your C code looks like, you've seen all the examples that I've shown you, the test cases look pretty much the same, because all the complexity that you need to care about is hidden inside CFFI and the wrapper code that I've shown you here. As a test case author, you don't really need to deal with that. You just can concentrate on writing your test cases, and you need to solve the hard parts only once, have it in a generic part of the code, and never look at that again as long as it works. So thank you for your attention. Any questions? 
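The per-test subprocess idea mentioned above could be sketched roughly like this (a simplification; a real test-runner integration would need to ship results back to the parent rather than only looking at the exit code):

```python
import multiprocessing


def run_isolated(test_func, *args):
    """Run one test in a child process so a segfault in the C code
    cannot take down the main test runner."""
    proc = multiprocessing.Process(target=test_func, args=args)
    proc.start()
    proc.join()
    if proc.exitcode != 0:
        # A crash shows up as a negative exit code (e.g. -11 for SIGSEGV);
        # an uncaught assertion in the child gives a positive non-zero code.
        raise AssertionError(
            'isolated test failed with exit code %s' % proc.exitcode)
```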
Can you run tests from Python on a compiled library, for example, from a built binary code? Can you import that in CFFI? Yeah, that is one of the main use cases actually for CFFI, that you can interface from Python to existing libraries so that you can build a nice Python interface for libraries that already exist without needing to reinvent them. So that's of course possible. This approach was more meant to test the source code so it passes in the source code, not a library, but of course you can also tell it here, use the existing library. You could probably also do this, could you do this trick with mocking? Like put alternative function or function definitions in a loaded shared object or something like that? Do you think that this is possible? Well, that depends if the function that you want to mock is not part of the library, but would be part of another library, and you don't link against that library, then it should be possible because then you have to insert your own implementation of that function anyway for it to compile. But if you want to mock a function that's part of the library that you want to test and it's implemented in there, you can't really replace it because it's part of the same binary and the code will just call the function in there, you can't really take it out and insert another implementation there. So you cannot switch out the binary code. Okay. If I would refactor my C code, say change the name of the function as a signature and forgot to add up my test, how easy it is to spot the mismatch? Do I get the proper error message or does it just crash? No, CFFI will tell you if you want to call a function that doesn't exist, that well, there's no such attribute on the module, you'll get the usual error codes for that. If you change the type, it probably depends a bit on how compatible the old type is to the new type. If you maybe change an int to float or something like that, you might even need not need to adapt your test cases, even if you pass in an int, CFFI will just convert that to float value then for your call. If I would use a different struct name or so, it would detect that. If you change the names so that they're not compatible, you get an error message. If you have on the other hand a structure that's completely different, a completely different name, but has the same types in there, then you probably won't notice if you change the complex number structure, for example, and just switch the order of the fields, you won't notice that when you pass in the parameters, you will only notice that when you test for the assertions in the end. No more questions? Yes? Can the CFFI module can be also used for the C++ code? For what, please? C++. For C++, I think it's not completely supported, no, but there's the main CFFI developer, Armin, and one of the four girls. You can ask him about new features. Short answer is no. Okay, thank you for your attention and thank you, Alexander, again.
Alexander Steffen - Writing unit tests for C code in Python There are many unit testing frameworks for C out there, but most of them require you to write your tests in C (or C++). While there might be good reasons to keep your implementation in C (for example execution speed or resource consumption), those hardly apply to the tests. So wouldn't it be nice to use all the power of Python and its unit testing capabilities also for your C code? This talk will show you how to combine CFFI and pycparser to easily create Python unit tests for C code, without a single line of C anywhere in the test cases. It will also cover creating mock functions in Python, that can be used by the C code under test to hide external dependencies. Finally, we will look at some of the challenges you might face when trying to mix Python and C and what to do about them.
10.5446/21091 (DOI)
Let's go. Okay, so we're gonna talk about service discovery today, and we'll focus on the client side: how do you use service discovery in Python? I won't argue whether you should or should not use service discovery in this talk, and I won't explain how to install the three technologies that I will cover here; I will just focus on their usage. And if we have time, which I hope we will have, I'm crazy enough to have done a live demo, so we will try it. It's an opinionated talk, okay, so that's my point of view here. Short introduction about me: you can find me under the nick ultrabug. I'm a Gentoo Linux developer, where I work mostly on cluster stuff and Python stuff; I maintain packages related to NoSQL, key-value stores and messaging. I'm also CTO at Numberly. We are a programmatic and data-driven marketing and advertising company. We have a booth over there with a quiz and you can win some crazy stuff, so just come around and you can have a talk. Okay, so what is service discovery? To make it short, you can compare it to what DNS is for your browser, but in a dynamic way. When you connect to a website, your browser first has to find out the IP address of the host hosting the website you want to reach, and to do so it does a DNS query. Beforehand, when you own the website, the web service, you had to configure the DNS and register in it the IP address of your server. Service discovery is about the same thing: it's about registering and querying, but for services. That's the basics of it. Let's see a bit more about it. We have a catalog that's provided by the service discovery technology, and then you have your servers. Each of them provides a service; some of them provide the same service. They will register themselves into the catalog, so you will get a list of "service X is running at this host and port", multiple times if the service is running on multiple servers. Then you have clients. The clients will be looking for a service, by its name usually, and they will query the catalog for the given service, and they will be handed over a list of available hosts providing said service. This is service discovery. Now let's take a quick tour of the three technologies I will cover here. The first one is the oldest one. It's named ZooKeeper. It's from the Apache Foundation, and ZooKeeper was first designed as a reliable cluster coordination service. It's used mostly and mainly in Hadoop. It has some pretty interesting features and it's mature, since it's the oldest of the three technologies we cover here. When I say in the negative points that it doesn't provide service discovery per se, that's true, but we'll get back to it later as to how we can still use ZooKeeper to achieve service discovery; what I mean by this is that it's not a built-in feature of ZooKeeper. The main design of ZooKeeper, and it's the same for etcd, which we'll see just after this, is that you can compare them to a distributed hierarchical file system, which is also comparable to a key-value store. You'll see about it. It's pretty Java-centric, and it uses its own implementation of a consensus algorithm. The consensus is about making sure all the nodes of the ZooKeeper cluster agree on something. The Python C bindings are not usable: there is one provided in the sources, but it's not really usable. And even worse for service discovery, it's not a data-center-aware technology; it just knows about its own cluster. Now you have etcd. etcd is from the CoreOS guys. It's a pretty recent project. It's written in Go.
It uses the Raft consensus, which is pretty robust. It has good adoption; it's used in many bigger projects like Kubernetes, and it provides an HTTP API to do all the queries and registration stuff. It's really simple to implement and configure. Just like ZooKeeper, it doesn't provide per se a service discovery mechanism, but we will use the file system hierarchy to achieve this. It's not data-center-aware either, and it doesn't provide any kind of health checking of your services once you register. We'll see about it later as well. The third one is Consul; it's from HashiCorp. That's the newest of the three. It's also written in Go, it's also using the Raft consensus algorithm, and, yeah, I told you it's an opinionated talk, so I didn't find any bad things to say about it, because it has a built-in service discovery feature. It's data center aware, so you can have multiple Consul clusters, one in each data center, and they can talk between them. It also provides a DNS API, so you can also look up services using DNS, which is kind of a good feature. The note I wanted to stress about ZooKeeper and etcd is that we will achieve service discovery by abusing the key-value store. You can see these key-value stores as a sort of file system where you can store data. Registering is about creating a node, or a folder or a file if you want to relate it to your local file system, and making it meaningful. In this kind of example, at the root of the hierarchy I will say, okay, the first level will be my service name, apix. Then on the second level I will create a folder which will represent all the servers providing this service, so I call this folder "providers", and then inside I will create nodes, or you can relate them to files, which are named "myhost:port". So discovering the providers for the apix service is just like listing the content of the apix/providers directory. Fine. We can do the same with memcache and stuff like that. That's how you can abuse and achieve service discovery using key-value-store-based technologies such as ZooKeeper and etcd. Okay. Now let's see the Python client libraries to talk to each of those technologies. The first ones, for ZooKeeper, are kazoo and zc.zk, and, yeah, I know, I'm sorry, we can be a very creative community, I know. We'll use the latter one, zc.zk, which underneath uses kazoo, so you can see zc.zk as a service-discovery-oriented wrapper of kazoo. So it's pretty handy. Then for etcd we have the standard python-etcd library, which is pretty good; if you use asyncio there are asyncio flavours of it as well. And for Consul you have consulate and python-consul. We'll use python-consul, which is now better documented and more active than consulate; last year it was the contrary, but this year python-consul is very nicely implemented. So good job, guys.
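For orientation, creating the three clients looks roughly like this (hosts, ports and timeouts are illustrative, and kazoo is used directly here rather than through zc.zk):

```python
from kazoo.client import KazooClient
from kazoo.handlers.threading import KazooTimeoutError
import etcd    # python-etcd
import consul  # python-consul

# ZooKeeper: accepts a list of hosts and blocks on start() by default;
# with a timeout it raises instead of hanging when no server is reachable.
zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181', timeout=5)
try:
    zk.start(timeout=5)
except KazooTimeoutError:
    pass  # no ZooKeeper server available: handle it gracefully

# etcd: a single endpoint; errors are raised (rather than blocking)
# when the server cannot be reached.
etcd_client = etcd.Client(host='etcd1', port=2379)

# Consul: also a single endpoint; constructing the client does not connect,
# so failures only show up when you actually make a call.
consul_client = consul.Consul(host='consul1', port=8500)
```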
Thank you Okay When you choose a technology you have to rely on it even more when it will be the core of your whole topology and You have to make sure that you can rely on the Python clients Because they really really have a direct impact on your application So let's see about the ZZK client library which uses Kizoo When you wanted to connect to a zookeeper cluster you can specify multiple hosts Which is pretty cool It has rotary connect feature you can query about the connection state you will get connected or Disconnected and stuff like that so you can have your code Handle this gracefully And it has rich accept exceptions if something wrongs happen. So I'm providing a quick example here The don't fail on connect means if No server is available when I do the first line and wait and try to connect to to my zookeeper cluster Will it be blocking will it raise an exception in this case? It's blocking If you and you can change this with the weight Parameter, but it will raise an exception Okay, so need so it's not for the one of you who are used to the Python memcached library You have to know about this and handle it Because it's can block your whole application if no zookeeper server is up On the ETCD side Python and ETCD side You don't have the possibility to connect to multiple hosts But you have a total reconnection gracefully, so it's pretty good. You can't really Try and get the connection states the exception as are pretty rich So you can see what's happening pretty easily and catch the good exceptions about the different kind of errors that you can happen to to be running into and It does fail on connect The Python console one is Well Not so good as this ZCZK one as well because doesn't support multiple hosts either He tasks or reconnect feature auto reconnect feature you The exceptions are so so I'm providing an example here connection error is well sometime not very very meaningful But it doesn't fail on connect that means it's non-blocking you just Create your console Client and then continue on Nothing happens when you do that Which can be a good feature Okay, now about the service registration There are three things you have to consider here three states of a service life cycle is getting up and he needs to register into the catalog Then it's running and you have to make sure it's still running Because if it's not running it crashes or your server Providing set service becomes unavailable. You don't want to answer clients about it Okay, so you have to remove it in a way from the catalog when it's down That's the dynamic part and Then if we stop gracefully of or if we crash we have to derogist it from the catalog So the health checking will also do the derogistration for you in in case of failure we'll see How it's done on every Python implementation For the CZC ZK It's pretty straightforward the main line The main thing to understand is the first one over here and the first try accept will just create The file system hierarchy. I talked to you about so we just make sure that we have the slash EP 2016 providers and we do a make path which will create the whole path like MK dear dash P and If the node already exists, it's okay. 
We can we can continue then the ZCZK provides Cool method which is register and then you say okay on this node on the provider's node I will register a machine named Yaz running on power port 5000 And it will create the the file like node like he has two points 5000 for you Okay That's all we have to do Now about health checking The health checking in zookeeper is implicit because the keeper has this cool feature named ephemeral nodes ephemeral nodes. It just like That they are like Files or nodes in the file system hierarchy that are present on the file system as long as the As long as the session of the client who created them is alive so whenever you the client dies or Closes its session zookeeper will know about it and we remove the given node Automatically So it's a good way of doing health checking because if your application crashes or You want just to do register you just have to Exit gracefully and close your session by closing the session to zookeeper zookeeper will remove all the nodes you created with this application So the register thing does that it creates an ephemeral node So that's implicit in the ZCZK Python client What about the failure detection latency if my program is Kill-9 or crashes badly and didn't have time to register gracefully How long will it take for zookeeper to remove the node from the hierarchy and then In other words, how long will I really take for the clients to not be serve? my host and port anymore it Will take session time out here when I created my client session I said five so it will take up to five seconds in this case To make this happen So for five seconds maximum, I could be serving wrong host and port to my clients from the catalog That's something you have to consider as well in such topologies on It's basically the same principle We try to read the provider if we can't find it recreate it as a directory Then we just have to write there is no register wrapper or something like this So you we just have to write the given Node I'll talk to you about here and we can set the data in it So we put also the same thing in the value. It's not a directory and it has a time to leave TTL Which I've talked to you right now That's the health checking actually You can see that it's coming difficult here, why? Because it is he doesn't have the concept of Ephemeral nodes as the keeper has that means that you have it to implement health checking yourself or use a third-party library or program to do it for you But you have to do it yourself. So in this example, I'm doing it myself So the trick I'm using is have is that when my application start I have to create a health finger thread Which will constantly and in infinite loop register my service and that will be a sort of heartbeat or health checking stuff with a TTL and then my TTL the time to leave of the node. I'm creating it will be removed after X second TTL seconds from the hierarchy. So my fellow detection latency is TTL But I have to have a thread constantly Making sure that my node is present and so my service and server is in the catalog Okay If you use console Everything is granted and built in so you can see in the code that's it's pretty straightforward and I just have to Register my service into a console agent which is as well very self-explanatory The name of the service the address of the host providing it and the port is running on the console. 
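Hedged sketches of the two registrations just described — the /ep2016/providers path and the host:port value mirror the talk's example, but this reuses the zk and etcd_client objects from the earlier sketch, uses kazoo directly, and replaces the speaker's exact helper code with a simple loop:

```python
import threading
import time

SERVICE_PATH = '/ep2016/providers'
ME = 'myhost:5000'

# ZooKeeper: an ephemeral node disappears by itself when the session dies,
# so registration and health checking come for free.
zk.ensure_path(SERVICE_PATH)
zk.create(SERVICE_PATH + '/' + ME, ME.encode(), ephemeral=True)


# etcd: no ephemeral nodes, so keep re-writing the key with a TTL from a
# background thread; if the process dies, the key expires after TTL seconds.
def heartbeat(ttl=10):
    while True:
        etcd_client.write('%s/%s' % (SERVICE_PATH, ME), ME, ttl=ttl)
        time.sleep(ttl / 2)


threading.Thread(target=heartbeat, daemon=True).start()
```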
It's integrated nothing more to have The health checking is interesting in console because you have a way To Make sure that the console servers will run some health checks of your service by themselves So you just have to create like in my example, it's an HTTP service So I'm creating an object a health check object which is of kind HTTP and I'm providing the URL that the console server should Call every two seconds. So I'm telling console a okay and when I register I pass the extra argument check and I I said to console. Okay. Check this URL every two seconds if it fails remove me from the catalog or To be very correct mark me as failing All right How do you discover all of this? It's pretty straightforward as well and And so I will just show you the querying part For zookeeper You can get the addresses by listing the children of the given node so I'm listing the children of the providers folder in EP 2016 and That will be my notes. I Just have to look over them split the two points and And I get the host and port of every server providing my service Okay Etcd basically the same stuff So you make a recursive query read you get the children and you split and you get your host and port On console It's also very easy you query the health service because you want only to get the healthy servers providing your service So that's the passing Equals through here. I just want you to return the service Where the health check is passing the servers and ports for which the server the health check is passing. Okay, and Then inside I get a lot of information It's a directory style thing and inside there is the host port and other stuff interesting stuff Sounds good, okay Now let's play so I have given three Raspberry pies and my machine here is running a zookeeper Etcd and console agent so The idea I had is to showcase a service discovery page like this Where we will be looking for the EP? 2016 host providing the We'll be looking for the host providing the EP 2016 Service so I Just wanted also to to demonstrate Yeah, okay, that's my Okay, it's here to demonstrate the key value Storage which he all those technologies are also used to configuration Access so you can store your configuration in this Key value stores so your application can also get them from from it. So the color here I don't know if it's really all Because of the resolution. Yeah Every time I reload this I Change the color Configured on each and in zookeeper in Etcd and in console for my for my web service so dick, can you start running your? your Raspberry Pi so raspberry pi 4 is this one that I plugged in a few seconds before and I can just go to it like this and You can see That every time that I will change the color on the key value store it will be picked up by the application from in this case zookeeper and then dick just plugged in the raspberry pi number one Which appeared and got discovered here by the server on every platform so if you can you also plug in yours and You too so we'll see the others coming and What's interesting? About this is like this Okay It's gonna get hot now. I think my raspberry pi 4 Gets a bit overloaded here. That's the Wi-Fi, but it's okay So it's time. I reload Okay, dear raspberry pi is running pretty awesome You see that my raspberry pi 4 which is not responding here You can see that it's not responding the health check failed for every one of them and that's it's has been removed from Zookeeper etcd and console So it's a good thing. Okay, it's working right Okay, so now we can see raspberry pi 2. 
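And a rough equivalent of the Consul registration with an HTTP check, plus the three discovery queries described above (again reusing the client objects and SERVICE_PATH from the previous sketches; the service name, URL and interval are illustrative):

```python
# Consul: register the service together with an HTTP health check that the
# Consul agent runs every 2 seconds; failing instances drop out of results.
check = consul.Check.http('http://myhost:5000/status', interval='2s')
consul_client.agent.service.register(
    'ep2016', address='myhost', port=5000, check=check)

# Discovery, ZooKeeper style: list the children of the providers node.
providers = [child.split(':') for child in zk.get_children(SERVICE_PATH)]

# Discovery, etcd style: recursive read, then split each value.
result = etcd_client.read(SERVICE_PATH, recursive=True)
providers = [leaf.value.split(':') for leaf in result.leaves]

# Discovery, Consul style: ask the health endpoint for passing instances only.
index, nodes = consul_client.health.service('ep2016', passing=True)
providers = [(n['Service']['Address'], n['Service']['Port']) for n in nodes]
```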
Okay raspberry pi 4 is Getting back somehow You can't yeah, it's a bit is getting back Yeah, it's getting back. Okay Raspberry pi 3 on console. Yeah, it's working as well. Okay You can see the not yet the color now. All right Yeah So now we have the four raspberry pies happen running and They seem to be yeah pretty stable on the health check. Yeah, I Will remove I will disconnect raspberry pi 4 Now let's see about the time it takes it depends on the technology Because they have a different kind of TTL FMR node session timeout or health check timeout. Okay? All right, it's Yeah, some of them are overloaded. Do you have any question? Yeah, I don't know Yeah, yeah, yeah Yeah, the client decides so the question is if there any kind of balancing no the catalog in the when your client queries the catalog It gets a list of all the available nodes for the given service That's all that then it's up to you to decide to which one you want to connect Yeah, I have a question about redundancy if you have an application that is dependent on the service discovery catalog and For different services that that Exposes and the catalog for some reason crashes. How will you? Recover from that situation will you have like service discovery of the service discovery or? How would you do that? Yeah? No, you don't do services cover your services discovery the minimum that's Advise of servers is three so you should have at least three zookeeper or console servers running Okay, so if you want more resiliency make fuck make it five seven but an uneven number. Okay, always an even number If something very bad happens and you don't have service discovery anymore, I Guess you have to Handle it on your application side. You can make it like with cash caching stuff It's not it's not very easy and it really depends on the type of application you're running But the best best course is to make sure your services discovery cluster Has enough nodes to sustain this kind of problem Well, it depends on the technology actually as you saw if you use a keeper you can connect to multiple hosts So you don't need a load balancer just but every of the node in this on the other hand in Console and each cd you have to specify one of those on one of the nodes So maybe you can implement some kind of stuff on your application to handle this Like having a dq or something like this in python and try again in each Exception if you it's it's raises an exception you can try and connect to the other host etc etc Yeah I think for the recording sorry a Question about registration procedure. Yeah, why don't you want to use external tool to do this it can be Implemented in configuration management chief puppet salt And in this case you will have possibility to register Sort by the services like MongoDB and so on automatically So That's question why not do this as external service for your application? well Hmm, I think To me chef and stuff like that are good for provisioning or configuration really configuration Applying configuration to servers. 
I don't see service discovery like this to me is I relate to your point with MongoDB and and and Demand stuff like this You have external programs that do it for you I'm not sure that For instance chef etc can have have checks running on So if you make with with ETCD it may become difficult to do There are a lot of third parties libraries doing it for ETCD for example because it has a wide audience and For containers stuff like this They they use specific third party tools, but not provisioning tools Yeah, we are using register container that register container that automatically registers any container that is running on Docker host and it's it's very Good idea. I think it's yeah, it works well and it's if something happens with container it will be there just automatically Yeah, but it has to be registered somewhere anyway Yeah, we are running local agent on each host machine local consulate agent and each service knows that it can find agent on local host and Engine only agent knows where there is a console cluster located It's can be implemented in very easy way. Yeah, but you don't have Central configuration place or you you do it also in them We are using salt to to install everything, but actually a service discovery is Implemented using special containers. Okay No other question Well, thank you. I
Alexys Jacob - Using Service Discovery to build dynamic python applications Let's compare the usage of three major **service discovery** technologies to build a dynamic and distributed python application ! This talk will be about **consul**, **etcd** and **zookeeper** and their python bindings and will feature code along with a live demo. ----- This talk will **showcase and compare** three Service Discovery technologies and their usage to **build a dynamic and distributed python application** : - consul - etcd - zookeeper After a short introduction to service discovery, we will **iterate and compare** how we can address the concrete and somewhat complex design of our python application using each technology. We'll then be able to discuss their strengths, weaknesses and python bindings and finally showcase the application in a demo.
10.5446/21092 (DOI)
So I guess I'll have to ask someone I know to pass around the mic. Because as you may know or remember, this is an interactive talk. So we prepared some stuff to follow along, but we are really hoping to have a discussion; it's about sharing experience. I guess it should almost be time to start. I don't know. All right. I guess you will agree that we'll now proceed. Okay. So the topic of this talk, as I just said earlier, is an interactive one, so we're really hoping to share our experience. Why did we want to make this talk? This is Ramnes, I'm Ultrabug, and we work at Numberly. That's where you can find us if you want to discuss things with us later. But to get back to the title of this talk, it's about what happens when shit happens. The main thing in our daily job is that we run some pretty heavy throughput. We have web services that gather data for our customers, and we can never be down. Downtime is not acceptable, and losing data, which is another story, is not acceptable either. So we've developed over the years some kind of practical reactions, and we have learned to develop and design our infrastructure a bit differently. And we are still learning; that's why it's an interactive talk, because we don't claim we have the answer for every use case. So we wanted to start with the basic stuff, which will lead, maybe, I hope, to the conversation we'll have. Let's take a simple example, which Guillaume will introduce you to. This is a very basic application, like what you could have when you start a company or anything. You have nginx, which serves HTTP requests. You have a Flask application, which handles all the logic stuff. And you put all your data in a MongoDB database, for example; it could be any database. So the first example is: what happens when your database is down? In our case, we have multiple solutions. For example, the server could be burning. Against that, you can have a replica set of databases, so if one burns, well, there are still two or three other databases that can take the lead, and, okay, you continue to serve requests. Something else that could happen is that you run out of some resource, for example you don't have RAM anymore. If you don't have RAM anymore, well, you could trigger some automatic kills. uWSGI, for example, can do that: you can just say in uWSGI, okay, if that process takes more than, I don't know, one gigabyte, kill it. You could use cgroups, like with Docker or anything, just to say, okay, this process only gets that amount of memory. If you don't have any disk anymore, like if a disk burns or you have big failures, what could help is RAID 1, RAID 10, anything. Basically, never run a web application in production on something that doesn't have RAID. Another good thing you could have is a distributed file system, like NFS or anything. There are a lot of things you could have. This is a good idea for some use cases; for some use cases it can sometimes add other risks, but that's a choice to make. If you have a server overload, like the database can't handle any more requests because it's already at full load, there's not much you can do except monitoring it, so you know when it happens, and scaling it accordingly: you can add more servers, so you can handle more requests. If you have some other ideas or some remarks about that, don't hesitate to tell us about it.
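As an aside, the replica-set connection mentioned above would look roughly like this with pymongo (host names, replica set name and timeout are made up for the example):

```python
from pymongo import MongoClient

# With a replica set URI, the driver follows primary elections by itself,
# so losing one database server does not take the application down.
client = MongoClient(
    'mongodb://db1:27017,db2:27017,db3:27017/?replicaSet=rs0',
    serverSelectionTimeoutMS=5000,  # fail fast instead of hanging forever
)
collection = client.mydb.events
```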
While you get the microphone, I'd like you to raise your hand if your backend database server already crashed your web service applications. I guess you all have experience in these fields. Like I said, we prepared basic stuff like this. We'll get deeper and deeper in between the talk. Hi. I think we all shared the experience. My question would be why don't you use or didn't use any of the standard tools or solutions for those kind of problems? For example, for the list here, Mezos seems to be a good solution. I can answer that. For complexity sake, who doesn't know Mezos? Okay. We lost already a lot of the audience. Just to get back about what it is, and correct me if I'm saying it wrong, it's a cluster service-oriented solution with resource management. So that it can spawn resource somewhere and spawn it somewhere else if the given server was running accidentally dies. But setting up Mezos and managing it is an overhead that you may or may not want to have. Kubernetes is also the same kind of thing by Google. Google platform runs on Kubernetes and it's also maybe a good solution. It depends on the architecture. Here, we took a basic example with no automation whatsoever. Because also we believe that sometimes simplicity is and built-in features of the technology we use are based on a response to making a bigger infrastructure and adding again complexity. Maybe you can save complexity by using right technologies or technologies who under failure in the right way. Also, we won't talk about Mezos or Kubernetes in this talk, but this is really the first example. In the next example, we'll go on bigger architectures. Yeah, so it is my experience that I heard a lot of similar responses from different teams. The thing is sooner or later, they end with a lot of moving parts. Sometimes it's perhaps cheaper to just use something and invest like a week or two instead of having to answer the phone at 3 AM. Yeah, like I said, it really depends on your team and the size of your team or your company. But I really agree with you. I just wanted to say, please don't call plain NFS a distributed file system. You're talking like Gloucester or it will burn you. Yeah, you're right. When we wrote a distributed file system, we had more in mind HDFS, which we use intensively. Okay, in my experience, it's not very hard to avoid hardware failures. We have replication, we have master-slave, we could backup our data. But it's very hard to recover logical failures when we logically corrupt our database or corrupt our MongoDB database and how we could avoid this. Yeah, we'll cover maybe deeper examples that relate to the problem you're talking about. I agree with you. That's only pure hardware failure. Any other hardware failure experience? Hi, so I forgot to mention that mostly with this kind of home-grown solutions, I noticed that they end up with a much more complicated architecture. For example, if you would want to somehow make out of this technology stack some failsafe architecture, then in my experience, teams have ended with multi-master, highly complex MariaDB clusters and whatever. And the solution is simply use salary, use blah framework, just do it. Yeah, we'll get some of those afterwards. You're right. Let's continue. I don't know if I'll be contributing much, but just an anecdote about hardware failures. On this one project that I was only briefly, we had this big data center in Verizon or Amazon or something, but it was in one place in the world and tsunami hit. We'll talk about this later. 
No, no, no, keep on. Then we thought, yeah, we have to have another one on the other coast of US. Sure. Of course, we'll get to that as well. You just want to see me walking. Yeah. That's because you said you were tired earlier. No, actually, I was... No, actually, I was a little bit late because I was stuck in the EPS meeting. Sorry. These are all server things, but if you have servers, just servers, you're not reachable, so the network is missing. And network is a big problem as well. Thank you. Also, hardware can fail there. Yeah. Very barely. Yeah. That's another possibility. Unreachable backends. Indeed. That's maybe what occurs most than a server burning, a burning server, actually. The first thing that comes to my mind with unreachable backends is a CIS admin guy who tripped over the cables. True story. I'm sorry. Not me, man. Anyway. The first thing is you have to make him remember. That's human behavior. So maybe find a forefeet for it, an Hello Kitty keyboard for one week, whatever you want, but you have to make him remember. On the hardware side, you can handle also switching and switch failure. The easy answer to this on Linux, for instance, but it also works on Windows, is use network bonding. Now, when you buy a server, they have at least one network card with two ports, use those two ports to and plug them to two different switches. It's really easy to do. When you have a really, really network people, you can do LACP, which is a higher but more resilient and more robust way to do the same thing, aggregating two ports and adding up their bandwidth while adding full tolerance to your networking. That's the principle. Do you have any sharing knowledge about switch or unreachable things? Yeah. Yeah, hi. Is anybody using hardware anymore? Is not everyone running in the cloud or using virtual machines? And you're running it yourself? Yep. Okay, just asking. So, yeah, we do it ourselves. So, yes, we buy everything, we host everything ourselves. And so we have to take care of these kind of problems. And we use GEN2 in production. Yeah, we use GEN2 Linux in production, which maybe a lot of you haven't heard about. We are some kind of crazy people. When I say we're used to shit handling, maybe it's part true. Any other thing to share about network resiliency? Okay. Now let's get a bit deeper in the stack. Having a fail-proof stack can also help when it's not about only the hardware part. On NGINX, there are two things I like to use mostly, is that in NGINX, you can handle back-end HTTP errors. Your upstream gets back to you with a 500 error. What do you do? Do you pass back this 500 error to your client? Or do you try to handle it nicely? I'll show an example of this. If you don't know about this, it's called name location in NGINX. We use this a lot. So when something bad happens, you can see on the bottom error page, whatever it is. We will change the error code to 200 to mask it for the user, while still serving some kind of pixel, because this is a pixel service. And we can even handle if there was a redirect get parameter in the URL, we can still redirect the user to the correct page, even if our back-end didn't or made something terrible. So that's a kind of little trick, location and error page handling. It can really save you from facing, hey, 500 error calls from your clients. We use it quite a lot. You can also serve from cache. So NGINX has cache capabilities. You can say, okay, if I get an error code from my back-end, I will just serve a stale cache response. 
It's pretty handy as well. On your Flask application, usually you can also use stale caching, which can be handy if your database is down as well. You can have some answers in cache and serve from stale cache. It's better to answer something than an error code. And then you can have multiple techniques to not lose data. This is more focused on not losing data. Spooling and TaskDefero in the basic way is the way that you get some data from your HTTP call. And this data is very important to you. You don't want to be asking your client to send this data twice. Even more when, in our case, it's navigation data. So it's a browser and user browsing a website data. We can have this data back. Spooling it means that whenever we have it, we're not forced to immediately insert it in database. We can take this data, write it somewhere on disk, and have another process be fitted with this data and insert it in a safe way. So if your backend is down, it just can try and try over and over inserting this data, while it was a long time ago since you responded to the client. That's the feral. There are also message queuing technologies such as, maybe you heard about it already here, 0MQ, RabbitMQ, which is more resilient, and stuff like that that can help you get data and make him into a task. That's also the salary philosophy, which is using RabbitMQ as a message broker. The important thing here to me and to us is don't send back error codes to your clients, even if you have, unless you really have to, depends on what you're doing, but you can handle them even on higher levels of your infrastructure. And don't lose data. Don't ask your clients to send again this data. You have ways and means to handle these kind of failures as well and to not ask for it. Do any of you use any of those techniques? Two, three, four, what techniques do you use? Hi, I used to work for a WordPress hosting company and a lot of what we did was basically rely on the reverse HTTP cache to... A lot of the content being served is actually just static content in a way. Think of a lot of people running basically websites. Those glorified blogs are basically just static content after a while. And then the back ends could fail all the time and customers would never notice if you serve from cache. Everyone's happy, the front page is up, the main articles are up. A lot of things are available, especially when your website is basically a content publishing platform, because that content doesn't actually change that much, it's not very dynamic. It works very well. You don't have to wake up every five minutes in the middle of the night drinking out as you can sleep through it and everything's fine. No one will notice except the people trying to publish an article. If it's something really urgent, then they will complain. Any other users who want to share their experience or what they are using it for? Yes, to complete this thing, even on websites like e-commerce, you can use similar techniques, but you need a database to insert the orders or stuff like this because 95% of the content is static, so you can have something like varnish, serve the static content, then use some tiny JavaScript to get the little tiny parts specific to the user, like the username, the name, the basket, etc. I've seen it used to lighten a lot the charge on the back ends. 
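Coming back to the spooling idea described above, a minimal sketch of the pattern might look like this (paths and names are illustrative; a real setup would add locking, batching and monitoring):

```python
import json
import os
import time
import uuid

SPOOL_DIR = '/var/spool/myapp'


def spool(payload):
    # Called from the request handler: appending to local disk is cheap and
    # works whether or not the database is reachable, so we can answer the
    # client immediately and never ask for the data twice.
    path = os.path.join(SPOOL_DIR, '%s.json' % uuid.uuid4())
    with open(path, 'w') as f:
        json.dump(payload, f)


def replay(insert, delay=1):
    # Run in a separate worker: replays spooled payloads into the database
    # and simply retries for as long as the backend is down.
    for name in sorted(os.listdir(SPOOL_DIR)):
        path = os.path.join(SPOOL_DIR, name)
        with open(path) as f:
            payload = json.load(f)
        while True:
            try:
                insert(payload)
                break
            except Exception:
                time.sleep(delay)  # backend down: wait and retry
        os.remove(path)
```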
It's very effective and even if you have one or two minutes of downtime for your back ends, your user can still navigate the websites, see all the products, and maybe by the time they add two cards, the back end will be back up and you won't lose any money. Yeah, yeah, yeah. Anyone else? Yeah, I guess the conclusion here is it's better to run even a degraded version of your website or whatever service it is you run than having it fully down. It depends on the use cases. Yeah, it can be argued. You want to argue it? Come on, we are here for it. I want to hear the counterpoint. For example, if you charge money from clients, it's better to say, I cannot then take money after several hours. I guess even that can be argued. So the next thing you can do is of course clustering your application. So if one of your back ends is down or one of your database is down, well, it's still working. So the bad thing is even with a lot of balancers, there's still a single point of failure. So you can always get more redundancy. Even if you have two load balancers and two absences, then the whole data center can go down. So you have to get another data center. So it's kind of an affin loop, but redundancy is good. Okay, so now we can get to your point where your data center burns. Yeah, this photo looks pretty bad. I don't know if it was photoshopped or if it's an actual photo, but I was like, oh my god. I don't want to be the C-Subs coming back after the fire alarm in the data center room. On the upside, actually, it's pretty simple. Have multiple data centers if you run them yourself. If you use the cloud, like it's been suggested, in Amazon you have this notion of availability zone that you should use. Make sure you do remote backups, whatever you do, and test them. In France, we had a recent story where a big company lost its customers' data and they found that they had backups because they were using backups and remote backups, and when they tried to get them back up, it could be said, it failed. There again, I don't want to be the C-Subs over there, and you don't want to, I guess. On the IP routine and connectivity stuff, you have the GP, anycast stuff for having a single IP address accessible all over the world. Something I appreciate also is DNS health checking. For this, we use Route 53 on AWS. Who knows about Route 53? Okay, not so much. It's DNS service from AWS where basically you can have geo-distribution-based DNS responses and add to those DNS records the health checking. So if your data center or whatever happens is down, one of your IP to your web services is down, it will not be answered from DNS queries anymore. It's pretty handy and cheap as well. On the application design, you have to think about geo-distributed applications. Who runs at least one geo-distributed service here? Okay, so I'm not talking about too much people, but still, it's a very interesting thing to do. As a developer, it's a real challenge, as an ops, it's a real challenge. Even when you want this service or this kind of, when I say service, it can be a database service available all around the world. It's also a nice thing to try and achieve. Anyone had this kind of problem already? Where they were relying on everything in one place? Yeah? What happened to you? On the whole data center? Yeah, so obviously I'm not an administrator of the network of some kind, but I was, I seen this all. So my main service was located in one data center and it failed power. 
And it ended up just in four hours of outage, complete, nothing more. Crucial infrastructure was located there. So we just dialed up our clients and said we're sorry. Afterwards, we apparently distributed. Yeah. What time did it take to distribute the whole thing? I would have to ask my administrators, but I know that certain steps were carried out. Yeah. Just add a little bit because just how easy terrible stuff can happen to a data center, especially if it's not like a big company, the small data center or service provider having a small hosting area, because I used to work in the kind of the same environment. And basically so many things killed the world. We had a story, I mean, I won't name the company, but basically it happened overnight. And the night shift who was monitoring the object just everyone fell asleep suddenly. And they missed all the alarms. And basically when the morning shift came, like all the temperature in the server room where we had a lot of our customers hosting their services was like 70 degrees. We opened all windows and started just like, you know, to try to get somewhere there. But basically a lot of things can go horribly wrong. So choose your data centers carefully and try to really get more of them if it's possible. Yeah. Contracts with your providers. One more? Yeah, yeah, yeah. But I'm just really into the contracts to your providers are not enough usually. And I even provide us say like 99.9999% but not 100%. Yeah, yeah. It's a luckily this was a data center that was only used for the development. But we had an air conditioning that was running really hard and it leaked water into the power outlet that was behind the UPS. So no more uninterruptible power supply and what proved that it was interruptible. It was down for two days. Yeah, it was major, major problem. Yeah, you have to call your clients in the end. So I guess this is this must be very hard to explain. I don't want to be in the sales department at this time. The problem with geo separated distributed locations is not when it goes down. When things come up again. That's right. I've had a few times where services came back up and we had both of them active because they couldn't see each other. But the rest of the world could either see one or the other. Yeah. And then people start using it and when they when they see each other again, then one of them has to decide to be slave again. And weird things happen. Yeah, that's called the split brain situation where your brain doesn't know anymore because you had usually two peers. That's why in clustering in general and in everything you should do is that always be at uneven numbers. And you already know about the voting strategy. Okay. If I am in a disconnectic situation, who is down? I am or is my peer down? If you have only two peers, you have no way to know. You have at least to have three peers to be able to know. If you can't reach any of the two other peers, you're down. That's solid, pretty solid. It's not always solid, but it's pretty solid. At least always thinking uneven numbers, always, whatever you do. Yeah. Okay. So, the co-terror is great, but sometimes real problems are a bit more complicated. And it's not always like dev app stuff. It can be like really coming from your code. That's what we are going to see. So, one day I was walking like normally doing my stuff. And one of our market guys came and told me, hey, run as the client says you can authenticate on the server, on the website. Something's wrong. 
I was like, okay, I'm going to check the logs. This happened like maybe 10 times per day. So, okay, let's see maybe something's wrong. So, I set the machine. I look at the log and everything's okay. So, well, the client is wrong. What? What did I? So, yeah, the client must be wrong. So, he goes away and I'm happy. Something like one hour later, I'm still walking and the guy come back and tell me that it's still not working for the client. So, I'm exhausted. All right, I'll check the code. Something's wrong. Then I look at my application and I see that. Does anything see something wrong? So, after 30 seconds, I notice that the same email function can fail. So, if the email function fail, well, it returns. Okay, it works. So, yeah, my conclusion to that story is that you have to know your code. Interest situation is great, but code can fail too. Even if you don't like the guy who wrote the code, even if you don't understand the code, if you're a maintainer of something, you have to understand what you're doing and you have to re-factorize when needed. So, the code should never pass suddenly. That's from the Zenf Python. Well, yeah, don't always blame the guy. Sometimes it's easy, like, okay, that's not my fault. It might be another server thing. So, that's why the devil thing is great. So, you can really understand what's happening on your server, even if you're just a developer at the origin. And the other way I run this is true too. So, do any of you have similar situation? What kind of really weird things happen? Okay, now it's going to be brave for developers to raise their hands. I know. I had a city situation where a similar thing where they're saying, oh, this isn't working for the client. She's trying to do all these things. She had a really odd workflow. So, I was thinking, this is all working, all the tests are passing. I go into the website and I'm looking and thinking, this is all working fine. And I ran all the tests and it's all working fine. And what I didn't really realize, it took me like a week to realize where she kept on coming back. The point where I'm like, I use no script. So, I'm happy using the HTML back end and everything so I can find. What I didn't realize was if you enable the JavaScript, JavaScript uses a different API and that's the thing causing the problem. So, make sure to eat your own dog food and use your own API. That's like that got, yeah, it looked like it was working, but I didn't write the code, so it's fine. Yeah, but in the end, you were responsible. Yeah, that's, yeah. So, in Python, we get used to the libraries we're using, raising exceptions. A really common one that doesn't is Memcache. Pretty much every Memcache library will return zero instead of raising exception. So, you need to wrap it or do something like that, but there's four or five places I can think of in different projects that we've been working on where we trace back something to like, why isn't anything working? It's because we think Memcache is working when it's not. Yeah, I tend to like the Memcache Python library because of this, but sometimes it can be in the nightmare, yeah. So, you have always to check about the, it's like the goal. It's like a goal. You have to check the year, you return of the, of the, of the operations you do. Any brave, any other brave developer want to share about this? Oh, yes. We have one here. That's it. That's it through. Yeah, so my example is not related to Python really, but to PHP. Yeah, and now. Who thinks it happens more in PHP? 
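For the record, the kind of bug described in the authentication story above usually boils down to something like this (a made-up reconstruction, not the actual code; create_account and send_confirmation_email are hypothetical helpers):

```python
def register_user(user):
    create_account(user)
    try:
        send_confirmation_email(user)  # can fail...
    except Exception:
        pass                           # ...but the failure is swallowed,
    return {'status': 'ok'}            # and the caller is told all is well
```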
That's brave developers. For my defense, I am not the root decode, but there is a very nasty thing when you try to auto load some file class and you have a syntax error. Then if you do not handle this properly, then PHP dies, returns, and the web server returns blank page with 200. Okay. And, okay, no other way to debug this issue. That's a nice one. We ended up at the WordPress hosting company when I work, we ended up writing some code in the reverse proxy that would detect these sort of situations, the white pages and alert us to it just because it's such a stupid default. Why would you return it to 100 when something's wrong? And yeah, it's horrible to monitor for that. I have to mention Python break, break itself this rule. For example, he's out of height, but exception. And sometimes it's can make very strange things. Switch to Python street. One thing that's not related to Python or any programming language really. I was having a server with a pretty large disk in it. And there were very, very, very many files on that. And then suddenly a developer called in and said, hey, I think the disk is full. So I go look, do a DF. And he says, well, I can't write any files anymore. Okay, touch fall. This full. Yeah, it ran out of our notes. That's something. Yeah, that's a nasty one. That's a nasty one. We often overlook here. Absolutely right. And there are five systems who don't rely on I-nodes. So when you know that your application might spawn a lot of files, think about them. Indeed. I have another story that I forgot to put in the presentation. So I'm going to tell it right now. Basically in my old company, like it was a small startup before I worked on Numboli. So we were trying to get things fast. So basically our web server was running inside a T-max. And sometime when we looked at logs, we were like just crawling on T-max. And one day we were like, oh my God, the web server is not running anymore. And actually it was just T-max. When you scroll, it sent a pose, it sent pose to your application. So the application was down just because T-max was trying, we were trying to take the log with T-max. Don't run T-max in production. And about DevOps philosophy, I don't know what kind of objective we can have. Who works as a DevOps or in DevOps-minded company? I see, I don't know if you're waving to say hello or... Okay. So it's amazing that you work in a DevOps company. Just wait, I come with a microphone. Because we can understand you in the back. Yeah. Why is not the delivery of that? Yeah, we should have thought about that probably a little bit early. The DevOps question is hard because when your managers and everybody talks a lot about DevOps, but they hire a guy who is a DevOps. As a DevOps position, then it gets tricky. So you get back to the silo. So we are developers and the DevOps. So yeah, back to developers and Sysadmins. I was actually a developer who had to run back to our admins to check up why the fuck is Docker not working again. Oh, the elastic search containers clustered with each other. Oh, interesting. So in that sense, I was a DevOps because I needed to worry about the code and about infrastructure mess ups. So it's a tricky word. Yeah, it has a different depth depending on where you stand. Which lead me to a question, who runs Docker in production? And can you share some experience with it? I'm interested. And I'm more interested in when it's failing, obviously. Just one thing, we were actually doing just this new project. 
So it was more of a proof of concept, but we had already started to get it out to customers. And we were working together with this consultancy who told us how to do Docker and Cloud Foundry — if someone knows Cloud Foundry. So for our whole infrastructure, we would provide services for Cloud Foundry based on Docker, so we would spawn Redises, Elasticsearches, stuff like that. But the Docker cluster was actually one machine, with all the containers for all services. So don't do that. Oh, yeah. Okay, thank you.

So I can share a funny story where the Docker daemon crashed on the CI server. You can imagine having like 15 super highly paid developers who are just sitting there twiddling their thumbs. And it was also fun to debug because, you know, who would have thought of it? Yeah, but to get to the previous point: how much effort would it take to implement something like Supervisor, or whatever process, that would monitor the daemon? Probably not much. Yeah, you're right, you're right. Sometimes we are our own barriers. And, yeah, I really agree with you.

Just about the DevOps thing: yeah, it's kind of a buzzword, especially for recruiters. But what we see is that normally it's really not a single person being "a DevOps", but really a team where you have people who develop and people who do ops. That's the thing here. Just working together and understanding what the other is doing is very important. And it's also a great way for a developer to learn, helping them understand what they are not used to doing.

Another real-world problem. Go ahead. Yeah. So we have our statistics on Grafana, it's a very nice dashboard. One day I was looking at Grafana and I saw the statistics. It wasn't really important, because it was really just the maximum processing time of one of our services — the average processing time was still very low. So we didn't really investigate that thing. It stayed there for, I don't know, maybe two or three weeks or even more, I don't remember. But we never really understood what was happening, why the maximum processing time was so high. And then one day, boom — like, what? My first idea when I saw the graph going so low was, oh my God, the service is down. Actually, no, it was still running. But what was happening? So I searched for like one or two hours to understand what had happened. I ended up talking with one of the ops guys on the team, who told me, hmm, that's strange, because at that moment I deployed an Ansible playbook on one of those servers. So we looked at the playbook: what was the difference? And the only difference was in the /etc/hosts file. Basically, the DNS server was being queried all the time, at each database insertion, so sometimes it was just overloaded. So just putting the IPs of the database in the /etc/hosts file of each machine did the trick. So, yeah, that was pretty weird.

Go ahead. Paul? Sorry — Paul, here. Sometimes there's some other stuff like that I have to sort out. And it's not just resolving the database server — you'd be surprised to see how much code is doing reverse DNS. So it's not just forward lookups; reverse lookups happen quite a lot too. Yeah, that adds to this problem. To be honest, we felt pretty stupid with this one as well. And this one is pretty interesting, because two days ago you made a presentation about using Consul. Yeah. Sounds very contradictory. Yeah, yeah.
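A quick aside on the /etc/hosts story: the in-process equivalent of that fix is to stop resolving the database host on every insert and cache the lookup instead. A rough sketch — not the code from the story, and in real life you would want to respect TTLs, or just run a local caching resolver as the next question suggests:

```python
import socket
from functools import lru_cache


@lru_cache(maxsize=256)
def resolve(hostname, port=5432):
    """Resolve once per (hostname, port) and reuse the answer.

    Without caching (or an /etc/hosts entry, or a local DNS cache), code that
    resolves the database host on every insert hammers the DNS server.
    """
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    family, socktype, proto, _canonname, sockaddr = infos[0]
    return sockaddr[0]


# print(resolve("db.example.internal"))  # hypothetical hostname
```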
And the thing, the question is: have you tried to put a local DNS cache — I don't know, a BIND 9, something like that — with a TTL around 30 seconds or a minute, something like that? Would it do the trick? Yes, absolutely. What's embarrassing with this is that we also lacked consistency in what we do. You know, on another type of infrastructure we have a local DNS cache, but there we didn't have it. And the work Guillaume mentioned, the people working on the Ansible playbook, is also about normalizing all of this. So yeah.

So maybe we think that we have something in production and it's been running for so long that nothing can happen to it, and we tend sometimes to forget about its resiliency or performance, or about applying the latest of our knowledge, just for the sake that it's running: "I don't care", or "I don't need to bother so much unless something weird happens". In this case it was good news — we were satisfied with this shitty processing time — but on another type of application it might not be so.

I think one good trick is to always profile your applications at least once. I recently used vmprof from the PyPy guys and it actually slowed down my service by only around 5%, so it's actually viable to just switch one instance and check what your code is actually doing. Actually, in that situation, I profiled the code and I didn't get the same results as shown in Grafana. That's why I was like, what the fuck? It's not working as it should. And that's why it wasn't really important — like I said, we just let it go. Well, it was a good surprise when it was fixed. We came up only with embarrassing examples, so you feel more comfortable sharing yours.

I have a comment regarding performance monitoring tools: you really have to configure them properly. We had a situation where the response time was between 30 and 60 seconds, and it was caused by uploading files. For example, 5 GB files were uploaded by a few people and it increased the average response time. It was the same kind of problem. Sometimes our metrics server goes down and then we think, oh my god, my application is down. But it's running. Yeah.

Who is using — who is not using a metrics system? Who doesn't do metrics on their applications? Who doesn't have this kind of graph? Nobody, you all have it. Yeah, I see you waving again, people. Question. Yeah, yeah. You can't just stand in front of the camera and hold your hands up.

Question — I was precisely going to ask this kind of question. Have you managed, then, to put into the graph, in Grafana, some kind of percentile graphing, so you can know the 90th percentile, 95th percentile and 99th percentile of the response time? So you all know that kind of problem. We have been trying — we have a hard problem with this. We are deploying Prometheus plus Grafana and using that with Elasticsearch, and we are having a lot of problems: we can't really calculate the 99th percentile, the 95th percentile. Have you managed to do this?

Basically, what we use in general is a comparison between the current day and the day seven days ago. It gives a good idea of: is it normal or is it weird? And using Carbon and Grafana for the visualization, you also have the annotation feature, which is good, where you can have a bar on your graph — and you can plug it into your deployment or continuous delivery stuff — so you can have a bar on your graph saying, okay, from this point on, this is version 2.1. And then you can also do metrics comparisons related to code deployments. It's pretty good.
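On the percentile question: if the raw response times are reachable from Python (parsed from access logs, pulled from Elasticsearch, and so on), computing the percentiles offline is a one-liner with NumPy. A sketch with made-up numbers:

```python
import numpy as np

# Response times in milliseconds, e.g. parsed from access logs or Elasticsearch.
timings_ms = [12, 15, 11, 430, 13, 14, 16, 12, 18, 5000, 14, 13]

p50, p90, p95, p99 = np.percentile(timings_ms, [50, 90, 95, 99])
print(f"p50={p50:.0f}ms p90={p90:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")

# The mean hides the outliers that the percentiles expose:
print(f"mean={np.mean(timings_ms):.0f}ms")
```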
Annotations are also a good thing to have for disaster recovery — you know when you broke something. And you can do the same with server provisioning and deployment: at this time I added a new server, maybe it has some weird side effects.

For the percentiles: if, as you mentioned, you've already got Elasticsearch, and you don't have aggregates but have the actual requests logged there, you can just use Kibana, because it has a really nice visualization. If I recall, it gives you the percentiles as well. You should get that for free from Kibana. The answer — for the remote audience — is that the problem is combining it with Grafana.

Did anyone else come with a question? I guess we have four minutes left, so open discussion now. Question. Okay. This is tricky. I'm looking to know if anybody has some experience with trying to deploy a new version of your backend and only deploy it to, let's say, 5% of your users, try it out, see how it handles, and then go for 100%. Especially in nginx — if you do it in nginx, that's really good. So, progressive deployment. Anyone?

Thank you. Yes, at the same company I was talking about a bit earlier, with the e-commerce, we were always rolling out the traffic gradually, but it was with HAProxy in front of... I think it was HAProxy, then Varnish, then nginx, in that order. And the new servers were taking 10% of the traffic on the new version. Then we had the software error monitoring and all the metrics on these servers, and we were checking that the response time was not doubling, et cetera. And after a few hours, a few more servers were joining, and at the end of the day, all the traffic was rolled over. That's one sort of answer.

Depending on your stack, we do a lower-level version of this. We run our Python using uWSGI, and in uWSGI you have this feature called touch chain reload, where your workers are reloaded one by one, and uWSGI will make sure that one worker has reloaded correctly before reloading the others. So it's a good fail-safe, low-level deployment trick.

And on a side note, if you are really, really committed to trying canary releases — which is what this is usually called, after the canary bird you put down the mine — Kubernetes resolves this problem in a very reliable way. It may be too complicated for your case, but it has exactly this kind of procedure: you say, I have a rolling deploy strategy, I want to keep a number of pods — a pod being your application deployed — and it will gradually increase the number. Yeah. This feature in Kubernetes is very nice. But Kubernetes still doesn't have health checks, right? Kubernetes still doesn't have health checks, right? Yeah. Health checks. Yeah. Oh. Okay. So yes, it has. Okay, thank you. So that was one of the really bad things that I would have put on the table. Okay, great. Readiness checks, that's right. Thank you, Paul.

Yes — I want to especially thank you, because this was an interactive format and a little experiment, just to not have only one-directional talks. I want to thank you for taking a leap of faith in the first year we tried this. So please give these guys an extra hand, please. Thank you. Thank you very much. Yeah. Yeah. Lightning talks up at five. Thank you. Thank you.
Alexys Jacob/Guillaume Gelin - Planning for the worst Sharing our worst production experiences and the tricks, good practices and code we developed to address them. ----- This talk is about sharing our experience about how we handled production problems on all levels of our applications. We'll begin with common problems, errors and failures and dig on to more obscure ones while sharing concrete tips, good practices and code to address them ! This talk will make you feel the warmth of not being alone facing a problem :)
10.5446/21097 (DOI)
Okay, so let's welcome Andrei Coman, and he'll talk to us about testing web applications with Selenium. Okay. That's working? Yep. Cool.

So, effectively testing — "Effectively test your webapp with Python and Selenium" — that's a mouthful, right? So what's this talk actually about? I've been working on a project for a pretty long time now, and I got to see, from its beginnings, how the project itself evolved — the code base — and how the tests around it evolved, because we reached some conclusions at some points in time and we tried to improve what we had. And that's what I want to share with you guys: what we learned from that, hoping that you don't make the same mistakes that we did. That's cool.

Okay. So my name is Andrei Coman. I'm from Timișoara — that's a city in Romania, in the west of Romania. I work at 3Pillar, for PBS — that's the Public Broadcasting Service in the US. You can find me and my code on GitHub, or follow me on Twitter. That's cool. How about you guys? How many of you have been working with Selenium? Well, quite a lot. How many of you are, like, QA, software engineers in test, something like that? Okay, a lot less. That's not a bad thing.

So, well, let's get cracking then, right? I like to think of the way the tests evolved as three phases, and we're going to take a look at each phase and see what we did right in each phase, what we did not so well, and how we tried to iterate on that in the next phase. Okay? Cool.

So, this is how a test looked in the first phase. Bear in mind that this was in the beginnings of the project, when the QA team didn't have that much support from development, because we were crunching to get features out. So this is what the tests looked like. This one created a resource — think of a resource like a blog post or something — and gave it a title, right? Pretty innocent. And then we had a test that used that particular resource. Nothing out of the ordinary so far.

So let's take a look at how the page object model for Selenium looked. We were good kids and we separated the Selenium interaction from the test itself, so we have these page object model classes that model the page. And at the time we were using XPath. So to get the title, we were doing things like getting the div, going by a specific ID, then, you know, going a level up — and it starts to get a little bit confusing, right? Cool.

So what this test was hiding is that at the time we were using nosetests as our test runner, and nosetests, when it starts collecting your tests, by default runs them in alphabetical order. So if C goes before V, you can use your tests as a kind of means to create the fixture and the state for the test on the server, and then test that in a different test. You can see how that kind of doesn't scale, and contradicts some good practices of keeping tests separated and independent. And yep, this is the problem with the way we used Selenium. We used the page object model — that was cool — but we had this really long XPath that makes my head hurt when I try to read it. And whenever a developer came in and unknowingly changed the way the HTML was structured for this specific page, well, that XPath — that test — got broken inadvertently. So all these tests were running against an environment: we were telling it, hey, run this on production, run this on QA, run this on staging.
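Going back to the phase-one page objects for a moment, here is a rough reconstruction of what they looked like — the XPath and class names are made up for illustration, but the shape is the point:

```python
from selenium.webdriver.common.by import By


class BlogPostPage:
    """Phase-one style page object: anchored to the HTML structure."""

    def __init__(self, driver):
        self.driver = driver

    @property
    def title(self):
        # Brittle: any change to the surrounding markup breaks this locator.
        return self.driver.find_element(
            By.XPATH,
            "//div[@id='content']/div[2]/div/h1/span",
        ).text
```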
But yeah, you can imagine that if the tests were failing midway, it's not that cool that you left leftovers on production. Okay.

So yeah, as I said, we wanted something better. We wanted to stop using tests as fixture generators. We wanted to move to something more robust in terms of identifying a Selenium element and interacting with it — getting its value, or interacting with that element, like clicking on it if it's a button, or something. So I think at the time we upgraded Django — this was around 2014 — and then we had LiveServerTestCase. So we said, yeah, let's give that a shot, why not? And then all the problems with setting up fixtures kind of magically went away, because we were using the Django ORM to create the state we were going to test, and using the tests to just do a targeted check — to see, yeah, this is the title of the page, or I can click on a specific element and the video plays, and stuff like that. And we weren't stuck in that end-to-end test where you need to create something, maybe using the admin, and then test it in its published state. So this was a better world for us: we can run tests independently, we can parallelize, which we couldn't do without many headaches before. So this is good, right?

We also took note of how we were handling the Selenium integration, and we started moving to something more specific. We started hooking into IDs or CSS selectors, and that gave us more readability in the code — like, you know, it's going to be an ID called "title". And it was also less brittle, because once you move sections of HTML around in your templates, it's not as likely to break the tests.

Cool. So what's the problem with this kind of approach? It's not testing the real environment. Now, when you get that email in the middle of the day saying, hey — from a team that you're a client of; for example, we have a team that manages our image ingestion and resizing, right? — and if they make a deploy, they send us an email: hey, run some tests to make sure your production environment still works. Well, you can't really use these tests for that kind of thing, because these tests spin up a really bare-bones environment: you create just enough data to do your test, and afterwards you throw it away. They're not really designed to test the real environment. Whereas in the previous approach, we were actually testing QA, production and so on.

Cool. It's a good thing that in the page object model we started using IDs, but then again, we could do a bit better there too. Okay.

So what we wanted to do better, after these two phases, was separating long-running tests from short tests. You still like to have end-to-end tests that maybe go through multiple parts of your application and test a workflow, but something like, if you have a blog, just checking that a specific page works — that's good enough for a smoke test. Google does this: in "How Google Tests Software" they have a section where they chunk tests into small, medium, large and extra large. The small tests are the unit tests the developer runs on his machine; the large tests are more like these integration tests; and the extra large would be an end-to-end test. So it's important to put a distinction between these and put time boxes on them, like "all my large tests must run in under five minutes". Running them independently — that was kind of a conflict between the first phase and the second phase.
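A minimal sketch of the phase-two style described above: Django's StaticLiveServerTestCase spins up a throwaway server, the ORM creates exactly the state the test needs, and Selenium points at live_server_url. The model and URL below are hypothetical:

```python
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver

from blog.models import BlogPost  # hypothetical app/model


class BlogPostTitleTest(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.driver = webdriver.Chrome()  # requires chromedriver on the PATH

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super().tearDownClass()

    def test_title_is_shown(self):
        # Fixture via the ORM instead of clicking through the admin.
        post = BlogPost.objects.create(title="Hello EuroPython", slug="hello")
        self.driver.get(f"{self.live_server_url}/blog/{post.slug}/")
        self.assertIn("Hello EuroPython", self.driver.title)
```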
We couldn't run them independently, but we could hit a real environment. In the second phase, we ran them independently, but we weren't hitting any real environment. So we started looking around. I mentioned we were using nosetests, but pytest was the new kid on the block, and it made sense to look at what it had to offer. We put a limit on our Selenium suite: it has to run in under five or ten minutes. Previously our whole Selenium suite was running in, I think, 45 minutes.

In the first iteration — the first phase — it was really hard to debug a test. If it failed in the view part, you had to go through all the creation, and it was a very cumbersome and hard effort to debug.

What we also wanted to put an emphasis on was decoupling the test from the HTML structure of the page. And for this, we went a step further from hooking into existing CSS selectors or IDs in the page, to setting up a convention between the development team and the QA team, saying: hey, this is a prefix for test identifiers. So if you have a class like "selenium-something", that's reserved for testing — for allowing testers to hook onto it, identify that element and do stuff with it. It's important not to tie any CSS or JavaScript functionality to that, because then you're back to the same problem. It's important to keep those test hooks focused only on that.

So how does a test look nowadays? The page object is pretty much the same, except that we have this "selenium-" prefix for everything, so you don't go in and rely on IDs or CSS classes that developers put in and might use to make that specific element red or pop out or something. And also, we kept the same page object model. And that's pretty much it.

And test-wise, because we switched to pytest, we started using a couple of interesting plugins from the Mozilla Foundation. They open-sourced a few plugins that I'm going to showcase a bit later in the presentation, but they really helped us ramp up on a new test suite.

One of the first plugins that I tried out was pytest-variables. This is something that you can put into your project to keep your fixtures or your credentials or something like that in a JSON file and then pass that into the test. It kind of separates your fixture data and your credentials from the code itself. And it has a pretty simple interface. Pretty cool of the Mozilla guys to open-source this.

Otherwise, there's pytest-html. It's a really cool plugin that is not necessarily tied to the Selenium integration, but it provides you with a test report saying, hey, this test failed here, and these were the values it failed with. And if you're using it in conjunction with pytest-selenium, the plugin from the Mozilla guys, it also hooks in and includes the screenshot from when the test failed. That's pretty cool.

pytest-selenium — this is the pytest plugin that the Mozilla Foundation open-sourced. It's pretty cool. It supports a lot of web drivers — Chrome, Firefox. A fun fact: we initially ran a lot of tests on Firefox, and after switching to Chrome we saw a significant performance improvement. And it was pretty cool that this plugin allowed us to just change a parameter somewhere and then all our tests were running on Chrome instead. And also, they have support for connecting with cloud-based testing services like Sauce Labs or BrowserStack. Sorry? Yeah. Yeah.
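The selenium- prefix convention might look roughly like this in a page object — the markup in the comment and the class name are illustrative, not the project's real code:

```python
from selenium.webdriver.common.by import By

# Template side (illustrative):  <h1 class="selenium-title">{{ post.title }}</h1>
# The selenium-* classes carry no CSS or JavaScript -- they exist only as test hooks.


class BlogPostPage:
    def __init__(self, driver):
        self.driver = driver

    @property
    def title(self):
        # Survives markup reshuffles as long as the hook class stays in place.
        return self.driver.find_element(By.CSS_SELECTOR, ".selenium-title").text
```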
But I think for Chrome you need to install the ChromeDriver; other than that, you just need to install the Selenium package in your environment.

So what I was thinking you guys could take away from this talk — and what we're planning to do on the project going forward — is leveraging APIs to create data fixtures. If you have a REST API to create stuff and remove stuff from your app, try to leverage that — you know, like creating a blog post and then deleting it for test purposes. Then, adding additional metadata to your page: that's the convention between the development and QA teams that everything prefixed with "selenium-" — it could be whatever, "banana-" or whatever you want — is a test hook. And last but not least, defining these test classes — I have large tests, I have small tests — and putting time boxes around them. And when a specific class exceeds its time box, you really should look at failing that whole class and asking why, all of a sudden, the large test cases — maybe that's a smoke test — are running slower than before.

That was about it. Thank you for your time. Do you have any questions?

Do you know of an easy way to get video recordings of these test runs? Not really, I haven't toyed around with that yet. That's an interesting point. I'll try to check with the Mozilla plugins — maybe they offer that. If not, maybe it's something worth contributing to open source. I know in the past, at a previous company, we were using vnc2flv to record test runs and see why they failed. To answer the previous question, I know Mozilla has something, because they use videos in pytest-html. So maybe just check the documentation for that — I hope it's documented; if not, open an issue. Then maybe it will be. Any other questions?

Have you looked into Splinter at all, and the pytest-splinter plugin? I haven't. It's a convenience wrapper library in Python for Selenium which helps you write better asserts without going into the markup. So instead of putting selectors and markup everywhere, it helps you to write better tests that are less brittle. So maybe just a hint — it makes it easier to write nicer tests. Cool, I'm curious to talk to you about that afterwards. Thanks.

How did you version your page objects if you were making changes in the application? How did you deal with having different versions, and regression testing with Selenium? Good one. So what we're trying now is to get an element, and if we can't get that hook element, we skip the test — mark it as skipped and say that this test was skipped until that code reaches the environment you're trying to test.

Any other questions? Okay. If that's all, let's say thank you to Andrei. Thank you.
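One way to act on the "leverage your APIs for fixtures" takeaway is a pytest fixture that creates the resource through the REST API and deletes it again in teardown. The endpoint, payload and the pytest-selenium `selenium` fixture usage below are assumptions for illustration:

```python
import pytest
import requests

BASE_URL = "https://qa.example.com/api"  # hypothetical environment under test


@pytest.fixture
def blog_post():
    """Create a blog post through the public API and delete it afterwards."""
    resp = requests.post(f"{BASE_URL}/posts/", json={"title": "Smoke test post"})
    resp.raise_for_status()
    post = resp.json()
    yield post
    # Teardown runs even if the test fails, so no leftovers on the environment.
    requests.delete(f"{BASE_URL}/posts/{post['id']}/")


def test_post_page_shows_title(blog_post, selenium):  # 'selenium' comes from pytest-selenium
    selenium.get(f"https://qa.example.com/posts/{blog_post['id']}/")
    assert blog_post["title"] in selenium.page_source
```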
Andrei Coman - Effectively test your webapp with Python and Selenium How often do you run your Selenium test suite? How fast do you get a result from it? Would you like to answer with: "Whenever I feel like it" and "Well, about the time it takes me to finish a coffee" ? This talk will try to get you closer to these answers. We will have a look at the lessons learned and the challenges my team faced maintaining a Selenium test suite against a long-lived Django web application. We will go over the pros and cons of: - test design approaches - technologies we used (nose, py.test, LiveServerTestCase) - reporting tools
10.5446/21098 (DOI)
Let's welcome Andrés Cidel, and let's find out something new about how to create secure production environments using Docker. Hello. Hello. Can you hear me? Yes? Okay. Thanks for coming.

First of all, as my mate said, I'm going to talk about secure Docker environments, and I'm going to give you some best practices you can use when you use Docker. First of all, who am I? I'm Andrés Cidel, from Vincorbis, which is a Mexican software company, and I'm a full-stack developer. I've been using Docker for two years — almost two years now — but I've also been doing some dev operations: automating tasks, managing infrastructure. So nowadays, I don't know what I am.

So let's talk a little bit about the content of this talk. We're going to learn how containers work — what is behind the scenes when you use Docker containers. We're going to list the main concerns that you have to keep in mind when you use Docker, and how to create and maintain secure images, because images are the base of security in Docker, and how to limit risk, plus good practices — a lot of tips that we're going to share with you.

So, how Docker works. The first thing that we have to keep in mind is that containers are not virtual machines. Virtual machines use a hypervisor to manage the execution of a guest operating system. Containers are quite different. Containers are a bunch of processes. Containers can run services, you can install packages in those containers, containers have network interfaces — but they are not virtual machines. They feel like virtual machines, but they are not.

Containers are possible because of two kernel features, which are cgroups and namespaces. So what are cgroups? This feature limits, accounts for and isolates the resources of the host. If you have CPU, memory, disk, network I/O, it says: okay, I'm going to give two gigabytes to this container, I'm going to enable these network features for this container, and it manages this as hierarchical groups — all the children of a process are going to have the same limits. And what are namespaces? Namespaces are the feature that isolates containers. The processes will have their own view of the system: when you run, for example, ps in a container, you're just going to see the processes that are running in that container. So you can isolate the file system, memory, users, and networking. That's the reason it's like chroot on steroids — containers are chroot on steroids. And so: cgroups limit how much you can use, and namespaces limit what you can see.

Okay. And next, what are kernel capabilities? In a traditional Unix system we have two kinds of processes: privileged processes, whose effective user ID is zero — these are the root processes — and unprivileged processes, whose effective user ID is not zero. These processes are subject to full permission checking based on the process's credentials, permissions, et cetera. And Linux kernel capabilities allow us to refine this access control system. For example, the capability shown here allows making arbitrary changes to files: it allows us to change the permissions of files, and we can make these changes on every single file on the system.

If you browse the source code of Docker, you're going to see a list of capabilities. This is the list of capabilities that Docker grants by default.
And if you want to see a complete list of the capabilities that are supported by the kernel, that's the URL, and you can study it.

So, what are the main risks when you use Docker? What are the concerns? First of all, the Docker daemon: it requires root privileges. So if you control the Docker daemon, you effectively have root access. And if you enable the RESTful API, it is not authenticated by default. So if an attacker discovers your API — remember, if you control the Docker daemon, you have root access, root privileges, on the host.

So how can I secure the RESTful API? First of all, you can enable TLS by using the --tlsverify flag when running the daemon, and you can create CA, server and client keys. That's authentication — but what about authorization? Docker's out-of-the-box authorization is all or nothing: you can do everything, or you can do nothing. But Docker provides a generic API, so you can write an authorization plugin yourself and get around this problem.

And escaping. Container escape is another concern. This is caused by allowing privileged operations, not removing all unneeded capabilities, weak network defaults, and obviously buggy application code. It means that containers sometimes have a lot of capabilities, and if you add other capabilities that you may not need, this could be a problem. Remember that a user in a container with root capabilities could be root on the host.

So how can you prevent this? Well, I'm going to explain each item of this list. First of all, drop capabilities. As we saw, with capabilities we can perform root operations — operations that require root privileges. In this example, I'm dropping all the capabilities. Running a container like this, you can just run a clock, for example — basically you can do nothing. Or drop a single capability: you can drop, for example, CHOWN, and this container won't be able to change file ownership. And you can combine flags — for example, drop all capabilities and add back just the capabilities that you're going to use.

And I think you're going to start asking: how can I know which capabilities I'm going to need? The answer is that you have to keep in mind which capabilities your processes could use. And if you think that you have a process in your code which could use an unusual capability and you are not sure, you can run your tests — if you have a set of tests, run them and see whether you need to drop or add a capability. Remember that containers have to have no more than they need.

Okay. And this one is easier: enable AppArmor. AppArmor is a Linux security module which is in charge of securing the operating system and the programs. AppArmor uses security profiles to create a granular configuration over capabilities for your containers. If you are using Ubuntu right now, it's probably already installed and running. You can check it with this command, aa-status, which is going to list the profiles that are loaded. And well, once you create your profiles, you can load them with this simple command, and if you want a container to use this profile, you just have to indicate the name of your profile. And that's all — sometimes it's quite simple. And there is this tool named bane, which is used to create profiles in an easy way.

And define a user.
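If you drive the daemon from Python, the Docker SDK (docker-py) exposes the same TLS knobs as the CLI flags mentioned above. A sketch, assuming you have already generated the CA, server and client certificates at the paths shown:

```python
import docker
from docker.tls import TLSConfig

tls_config = TLSConfig(
    client_cert=("/certs/cert.pem", "/certs/key.pem"),  # client key pair
    ca_cert="/certs/ca.pem",                            # CA used to verify the daemon
    verify=True,
)

# Only talk to a daemon that presents a certificate signed by our CA.
client = docker.DockerClient(base_url="tcp://docker.example.com:2376", tls=tls_config)
print(client.version())
```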
In most cases it's better if you create a user inside your Dockerfiles with the useradd command and add a USER directive — don't run your containers with root access.

And immutable containers. This is a big topic — it could be another one-hour talk — but I'm just going to list the benefits of this approach. Basically the benefits are: limiting attack scenarios, helping prevent compromise of your containers, simplifying development, and allowing for easy upgrade paths. And it's very easy: just run with the --read-only flag. You can freeze the file system: if you run a container with this flag and, for example, an attacker gets into the container, the attacker won't be able to write any file or edit any file — nothing. So it will be better. And you can combine it with using volumes: you can freeze the file system of your container, but add a volume and write on that volume. So you can combine these options.

Another big concern is image provenance. When you use systems that communicate over networks, trust is the central concern. When you use the Docker engine, you pull images and you push images. So how can you verify that you're getting the exact image that the developer created, or how can you know that the image has not been tampered with? Docker has solved this problem with Docker Content Trust, which basically signs the images, using certificates and digital signatures. And it's very easy to activate: just export DOCKER_CONTENT_TRUST=1, and that's all — the operations that you do as a publisher, for example build, run, push or pull images, are going to work with this feature. If you're a publisher and you are using Docker Content Trust, the first time, Docker Content Trust is going to create the keys. Everything is behind the scenes, so you don't need to worry about anything — you don't have to learn a special combination of commands, et cetera.

And why does Docker use Docker Content Trust and not GPG? Because Docker Content Trust creates a digital signature with a timestamp, so images can expire: for example, this image is no longer valid, so you have to download this one. With this approach, you will have up-to-date images in your containers.

And of course, you have to create secure images, which is the next topic: how can you create and maintain secure images? First of all, verify the software. This is very important — you have to verify the authenticity of the software that you are downloading. When you're using a package manager, it takes care of this for you, so you don't have to worry too much. But if you are downloading raw files or binaries, you should use, for example, HTTPS instead of HTTP, and you have to check for signed files and validate checksums with GPG, for example, when it comes to third-party repositories. Obviously, you should do this in your bash scripts and your build files too, not just Docker.

Writing better Dockerfiles. This is important because, if you want to have consistency in your images, it's better to pull a specific tag — for example, FROM alpine:3.4 instead of alpine. And never run as root — well, this is like the... tell me about it, okay. This is important. This is super important. So always add the USER directive — and if you use the USER directive, maybe you need the useradd command — and drop privileges as soon as possible. And if you have to use sudo: don't use sudo, it's better to use gosu.

So, you can use minimal base images.
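Those same hardening options can be set from Python with the Docker SDK. A hedged sketch combining the advice so far — drop everything and add back only what you need, run as an unprivileged user, freeze the root filesystem and mount an explicit writable volume (image name, paths and capability are placeholders):

```python
import docker

client = docker.from_env()

container = client.containers.run(
    "myapp:1.0",                     # hypothetical image
    cap_drop=["ALL"],                # start from zero capabilities...
    cap_add=["NET_BIND_SERVICE"],    # ...and add back only what the app needs
    user="appuser",                  # never run as root inside the container
    read_only=True,                  # freeze the root filesystem
    volumes={"/srv/myapp/data": {"bind": "/data", "mode": "rw"}},  # explicit writable path
    mem_limit="256m",                # basic DoS containment
    security_opt=["no-new-privileges"],
    detach=True,
)
print(container.id)
```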
Ubuntu and CentOS, for example, are around 60 megabytes, and if you use Alpine — it's a minimal base image — it's five megabytes. So you can reduce the attack surface, complexity, and size of the images. This is an example of using Alpine: with these lines, you can install the Python runtime. Very easy. And apk is the package manager of Alpine.

So, other best practices. Update whenever possible, especially when it comes to security features — the Docker daemon and client, the Docker engine. Avoid using Docker with the --privileged flag: this flag removes almost all the limits that containers have, so it provides no security. Avoid providing access to the docker user or the docker group — as I mentioned, if you have control of the Docker daemon, you can get root access on the host. Avoid providing access to the Docker unix socket or the REST API to potentially untrusted callers or containers — especially when you use Jenkins, for example; if your Jenkins manages the Docker daemon in a certain way, you have to keep this tip in mind. Then, use Docker Bench for Security — this, as the repo says, checks for dozens of common best practices around running Docker containers in production. It's just a script, and it requires elevated privileges to run. Remove setuid and setgid binaries — I'm sure you're not going to need them in most cases, so it's better not to have them. When exporting ports or exposing containers to the network: by default, Docker binds to all interfaces, so you have to be sure that you are exposing the container to the right interface. And follow best practices when writing Dockerfiles — on the Internet you're going to find a bunch of information about this topic. And limit inter-container communication: by default, containers can communicate with other containers, meaning that even if you're not using the --link flag, they can send raw packets. And limit the memory — this can help you prevent DoS attacks.

So we have... these are the guides that I based this presentation on, and Docker is working a lot to provide a lot of documentation about security. So thank you for listening — and questions.

So, questions? Hello. So, I had a few surprises using Docker with iptables, because it injects certain rules for networking. Do you have any tips on how to deal with that elegantly, so I don't write some rules and then notice that Docker is actually bypassing them? Is your container... is the purpose of your container to manage the network, or what are you going to...? So, I had a container — basically I was running Kibana. I just needed to expose the Kibana port and block pretty much everything else. So I wrote some iptables rules on my host, and then I realized that basically Docker had inserted... I wanted the Kibana port to be only accessible from localhost, and then I noticed that my iptables rules saying "only accept from localhost" were being bypassed, because Docker inserted its networking rules and they were actually short-circuiting my rules. So it seems to be a common problem — do you know of an elegant solution? It depends on the stack, but we can discuss it after the talk if you want.

Any more Python-specific security issues with Docker? For Python — I've been using Docker with Python for almost one year, and I think this advice applies to almost all languages, but especially for Python: do try to create immutable containers. So if your code has a vulnerability, you can limit it with the read-only file system. So that's all. Any other questions? Okay.
If not, let's say thank you once more to Andrés. Thank you very much. Thank you.
Andrés Cidel - Create secure production environment using Docker Docker is a relatively new technology platform that helps teams develop, deploy and scale applications with greater ease and speed. However, there are doubts about using Docker in production environments. One important reason is that containers don't provide the same security layer as hypervisors do. The purpose of this talk is pointing out that using Docker in production is perfectly valid, not just for develop and CI environments. We'll learn: - How Docker works. - Main risks. - How create and maintain secure images. - How defend containers. - How delimit security risks in containers. - Best practices for running containers.
10.5446/21099 (DOI)
Next, we will test the untestable with Andrew Burrows. Let's give him a hand. Cool. Hello. Welcome. My name is Andy Burrows; that's my Twitter. A little bit about me and what I do: I work for AHL, I've been there for nearly 10 years. AHL is a systematic quant hedge fund based in London. We're in the business of using computer algorithms to invest on behalf of our clients, and we do that mainly in Python. If you've got a code base — written in Python or any other language — managing billions of dollars and trading around the clock and around the globe, then you're going to want to have some good tests. And for us, we find that mocks and mocking play a part in providing good tests. Hence the talk.

A bit more about Man AHL. A little plug: we host the PyData London talks. We run a coding competition encouraging students to get into Python — you may have seen Charlie plugging that in a lightning talk. And we do a whole load of open source stuff: we use lots of open source, and we make our own stuff open source — check out our GitHub. And my boss would love it if you were to follow us on Twitter. He's even put it on my back; I think it's in case I get lost. If you see me roaming around Bilbao, you can tweet my boss and say you saw me.

Cool. A bit about the talk. The talk is going to be really example-based. There's going to be some theory and definitions and some of my own opinions. I'm not going to go into all the deep workings of Mock and the full richness of the API. Helen gave a really good talk yesterday, actually, which went deeper — this is a beginner's guide, hers is more intermediate. So if you want more meat, you should watch that on YouTube; it was really good. But yes, I'm going to introduce mocks and give you enough to get started if you're not already using them. All my examples are in pytest and Python 3, but that shouldn't be a stumbling block, really — you would hardly notice if you're not already using those technologies. And all my examples are available on my GitHub, so you can get them yourself and run them.

Cool. So why are you here? Hopefully you're not like the guy with the beard, and you are actually writing some tests. But if you're not writing tests, I'm hoping to give you one less excuse for not writing unit tests — the excuse that mocking is somehow mysterious, hard, or not for you. I hope to dispel that myth. If you are writing tests, I hope to give you the tools to make testing complex systems both easy and fun.

Cool. So I said the talk would all be very example-driven. Here's our first example. I was trying to find some common ground, and I figure, after we've all been locked in this conference venue for five days together, the one thing we have in common is a deep expertise in coding conferences. So all my examples are based on a mythical system that models coding conferences.

Sorry — here's the first example. It's ConferenceSpeaker. This class does just what I did: comes out, welcomes his or her audience, and introduces itself. And with this, we get our first definition: system under test. You may see this in the documentation when you read around the topic. It means exactly what you think it means — it is the code that we're testing. So this class, and specifically this greet function, is our system under test.

Cool. So how do we test it? It's not very long, and it looks like it should be easy to test. We can just create an instance of a conference delegate — we've probably got a class like that lying around our code base already.
Pass it in, see that it does the right kind of thing. Easy peasy. Unfortunately, just like in real life, all our conference delegates are Twitter-enabled, tweeting continuously. Just for reference: when I put up a load of code and you need to look at a bit of it, I'm going to highlight the most important bit in green. So feel free to read the whole lot, but if you want to save time, just skip to the green bit.

So if we were to use this class in our tests, then one of two things would happen. Either our tests would fail — because wherever they're running doesn't have access to the internet, maybe it's behind a firewall, or it doesn't have the connection details it needs, the passwords and keys to access the API it's calling — or the tests would pass, and we would spam all our loyal followers with test tweets. Either way, we don't want that to happen.

But this is Python, right? So it's easy. We can just make something that looks like a conference delegate — if it quacks like a conference delegate, it's probably a duck. So we can just make a TestDelegate class, make an instance of that, pass it into our system under test, and make sure the right kind of thing happens. And that totally works — you could go home and do that, and we could stop the talk right now. But the talk's meant to be about Mock, so maybe there's a better way. And it seems like it could be a lot of work if we had to do this every time we wanted to essentially mock out — the clue's in the name — a class or an object in our code base.

So we can use the mock library. If you're on Python 3.3 or later, you can import it from unittest.mock. Before that, you can pip install mock to get the rolling backport and import it from mock. We're able to create an instance of the Mock object, pass that into our greet function, and then we're able to assert that the right kind of thing happens. We'll look at that in a bit more depth in a bit.

Cool. So let's have a quick detour and actually look at what we've got with these Mock objects. The most important thing you need to know about Mock objects is that everything on a Mock object is another Mock object. Every attribute is a Mock object. Every method is a Mock object. The return value of calling it is a Mock object. The return value of the methods of Mock objects, the attributes of the return value of the methods of Mock objects — it's mocks all the way down until you hit the turtles.

But what are we actually talking about with these mocks? I mean, when I first started doing this, I was quite confused. I worked in a team of people who used a whole load of words that all seemed a bit interchangeable, but had some subtly different meanings. So before we go any further, I just want to define some terms, in case you're coming to Python from another language, or you routinely use more than one language, or — like I was — you work in a team of people who just use completely random terms all the time and you're trying to understand what's going on.

The first term is a test double. Think of this like a stunt double in a film. It's a really general term; it just means any pretend object used in testing. And certainly in Python we use mocks, and mocks are definitely test doubles. So there's a first translation. Fakes: mocks aren't fakes. When you fake something, you're using a real implementation, but it's taking some sort of shortcut — maybe you're using an in-memory database instead of the real thing. So that's not what we're talking about, but you might hear it. Dummy values.
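A small sketch of what that first test looks like; the class and method names (ConferenceSpeaker, greet, speak_to) are my reconstruction of the slides, not the talk's actual code:

```python
from unittest.mock import Mock  # `from mock import Mock` on Python 2


class ConferenceSpeaker:
    def greet(self, delegate):
        delegate.speak_to("Welcome everybody!")


def test_greet_welcomes_the_delegate():
    delegate = Mock()                    # stands in for a Twitter-enabled delegate
    ConferenceSpeaker().greet(delegate)  # act on the system under test
    delegate.speak_to.assert_called_once_with("Welcome everybody!")
```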
In Python, dummy values are sentinels, and we'll cover those in a bit. We use these to pad out argument lists and to trace the flow of data through our code, and you'll see that used. Mocks — now that's what we are talking about. We've just seen one in action. It has no real implementation, or only a pretence of an implementation, but it records everything that happens to it, and you get to assert on that later if you want to. And then, closely associated — and certainly in Python we use the same object — we use mocks for stubs. In the wider world's talk about this, a stub has more implementation behind it: it makes more of an effort to pretend to the system under test that it's the real deal. We can implement a stub in Python using a mock with a side effect — we'll see that. And as for spies: spies are mocks as far as we're concerned, let's not split hairs.

Cool. So with all that theory behind us, let's go back to our example. In Python, we use arrange-act-assert mechanics, which is a slightly fancy term to mean that first we arrange our test environment, then we cause our system under test to act on it, and then we assert that the mocks saw the behaviour we expected. And it's really important to notice that what we're doing here with mocks is asserting on the behaviour — we're asserting on what got called, not on what the state of the final system was.

Here, I use one of the many asserts that are built into mock: assert_called_once_with. There's a whole range of them; I'm not going to go into all of them here. You should check out the docs — the docs are really good — or Helen's talk yesterday went through a good number of them and provided a good overview. I'm just going to show you my two favourites, at slightly opposite ends of the spectrum. assert_called_once_with does exactly what it says on the tin: we assert this mock was called once, with these arguments. If it's called less than once or more than once, or with different arguments, it fails. At the other end of the spectrum, the kind of superpower of these assertions is mock_calls. What mock_calls does is record every call to your mock and to all child mocks, and it records the order of them. In the example at the bottom, I'm just in an interactive shell: I make a mock, I make various calls to it and its children, and then I get to see what the mock_calls are, and I could assert on that. And you notice that it's found everything I did, and it knows the order it happened in. So that can be really powerful — there's pretty much nothing you can't test with that.

A flip side — another downside, I suppose — of it being mocks all the way down is that if you pass a mock object into your system under test and maybe there's a typo in there, so instead of calling speak_to it's calling spoke_to, or it's just calling a method that you've not implemented yet, that doesn't exist, the test may still pass, because mock will create those methods and attributes on the fly for you. If you want to limit your mocks to only having the same interface as your real code, you can do that by specifying a spec in the constructor of the mock, and this is, essentially, the interface you want for your mock. In the example at the bottom, I just interactively create a mock that has the same spec as a conference delegate, then try to get it to snore loudly in the session, and it fails, because you definitely can't do that. Cool. Let's have a look at a harder example.
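The two assertion features just mentioned, sketched interactively with a stand-in ConferenceDelegate class:

```python
from unittest.mock import Mock, call


class ConferenceDelegate:
    def speak_to(self, message): ...
    def applaud(self): ...


# mock_calls records every call on the mock and its children, in order.
m = Mock()
m.speak_to("hello")
m.badge.flip()
assert m.mock_calls == [call.speak_to("hello"), call.badge.flip()]

# spec limits the mock to the real interface: typos fail instead of passing silently.
delegate = Mock(spec=ConferenceDelegate)
delegate.speak_to("hi")          # fine, part of the spec'd interface
try:
    delegate.snore_loudly()      # not on ConferenceDelegate
except AttributeError as exc:
    print("caught:", exc)
```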
Before, we saw our conference delegate class, and we highlighted that we couldn't use it in our other tests because it was tweeting, so we just mocked it out completely. What if we want to actually test this class? We've got the same problem — it's going to tweet, all the same problems. So how can we do it? Well, we're going to use a mock, of course, but last time we were able to just pass our mock objects into the system under test. This time we can't: the thing we want to mock, which is almost certainly simple_twitter, isn't passed in, it's imported. So how are we going to get our mocks into this code?

We do it using patch. Patch is a great tool. What it does is replace the specified object with a mock, and then, at the end of the patch, it puts it back to normal, so it completely covers its tracks. So what we're able to do is patch simple_twitter.tweet, call the system under test, and then assert as before — and that mock gets created and injected into the right part of the code.

The one gotcha is that you have to get that string — that specification of what you want to patch — correct, and it has to match how the code is used in the system under test. So if, for example, instead of importing simple_twitter we did "from simple_twitter import tweet", then not only would the call site change, but so would the patching. In my experience of working with a number of people and introducing them to mocks, this is the thing that people always struggle to get their head around, so it's worth taking a moment to think about why it happens. Essentially, when we do "from simple_twitter import tweet", we're creating a new variable in this module called tweet that is a reference to the code in the simple_twitter module. But then, if we were to patch simple_twitter.tweet, we would be replacing the code in simple_twitter, while our reference — our tweet variable in this module — would still point to the old code. So we'd be patching the wrong thing. So we have to get it right. You don't need to get it wrong many times before you spot what you're doing and learn, but it's worth looking out for and worth thinking about.

Another thing to notice in this example is that I'm using a sentinel. I mentioned before that sentinels are kind of the Python word for dummies, and what we do here is pass a sentinel into the initializer and check that it comes out in the call to tweet. It's a bit like dropping balls in the top of some sort of machine and seeing which hole at the bottom they come out of, and it can be really handy. Another use for them is just to pad argument lists in calls when the values aren't important to the code you're testing. I really love sentinels, by the way. Brilliant. You should use them — they're great.

Okay. Let's dig a bit further into our fictional application and look at the simple_tweeter wrapper that we've built around our Twitter API. What it's doing is adding a simple retry functionality on top of the underlying API. The underlying API returns False if it fails to send a message, whereas you can see this code picks up on that and tries five times to send the message regardless. We can write some tests for this. I've said before that the return value of every call to a mock is another mock, but you can override that and actually set the return value. So here I set the return value to False, which means every call to tweet will fail, and then I can assert that it tries, but fails, and ultimately gives up.
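The patch-target gotcha is easiest to see in a self-contained example; here the stdlib random module stands in for simple_twitter (a sketch of the principle, not the talk's code), and a sentinel traces the value through:

```python
import random
from random import choice
from unittest.mock import patch, sentinel


def pick_via_module(options):
    return random.choice(options)   # looked up on the module at call time


def pick_via_name(options):
    return choice(options)          # bound to *this* module's `choice` at import time


with patch("random.choice", return_value=sentinel.picked):
    assert pick_via_module(["a", "b"]) is sentinel.picked        # patched
    assert pick_via_name(["a", "b"]) is not sentinel.picked      # NOT patched!

# To intercept the second style you must patch the name where it is used:
with patch(f"{__name__}.choice", return_value=sentinel.picked):
    assert pick_via_name(["a", "b"]) is sentinel.picked
```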
Also notice that in this example I've been using patch as a decorator this time, instead of as a context manager as before. These two things are completely interchangeable; they work exactly the same. One reason you might use patch as a decorator, especially if you're patching more than one thing, is that if you're using it as a context manager, as you add more and more patches your code slowly disappears off the side of the screen, whereas, of course, stacking them up as decorators, you don't have that problem. But they're interchangeable — you can do both, they work the same way.

Here's a lot of kittens. Oh, don't you love kittens? Cool.

All right. What if we want to test this retry functionality? We don't want it to always fail; we want it to succeed sometimes — say, on the third attempt. We can't do that with return_value, but we can do it with side_effect. So I set the side_effect to a list: the first call is going to get a failure, the second call is going to be a failure, and the third call is going to return True — success. Side effects are like return values, but they're basically just more magical. There are three types of magic supported. There's the sequence we've just seen, where subsequent calls to the mock return the values from the list in turn. You can set the side_effect to an exception, so every call to the mock raises an exception — that can be really handy if you want to test failure scenarios without having to actually, you know, manufacture failure somehow. And the final and most powerful form is setting the side_effect to a function, a lambda, some other callable, and the return value of that function becomes the return value of the mock.

I like the first two a lot, with no caveats. The third one I do like, and it is powerful, but I just want to raise a little red flag: if you're doing this a lot, you might want to look at your code and make sure there's not some better way to structure it to make it easier to test. If you're doing this in all your tests all the time, maybe there's some way you can structure the code differently so it's easier to test. Having said that, it is useful and powerful, so let's have an example of it in action.

Let's give our conference delegates the ability to chat to each other over coffee. Here we've got some sort of implementation where our delegates can hold a conversation and do whatever they like. And we can test this by creating, essentially, a stub. I mentioned stubs before: they're kind of mocks, but with a more faked-up implementation. We create a function, stranger_speak_to, which holds one side of the conversation; we set that as the side_effect for a stranger mock — although now it's probably a stub, which doesn't really matter in Python, we just call them all mocks — and we pass that into our test. And we can test that code if we want to.

But this can look a bit... I mean, it's not the prettiest test, I would argue. You have this quite nice sort of setup, run it, assert — but then in the middle of it you've just got this big chunk of code, and what's that doing there? So, a little plug for my own library, mockextras. It's built entirely on top of mock — it doesn't implement its own thing — and it just makes it prettier and nicer to create side effects on your mocks.
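The three flavours of side_effect, sketched with a stand-in tweet mock; the names are illustrative:

```python
from unittest.mock import Mock

tweet = Mock()

# 1. A sequence: successive calls return successive values.
tweet.side_effect = [False, False, True]
assert [tweet("msg") for _ in range(3)] == [False, False, True]

# 2. An exception (class or instance): every call raises.
tweet.side_effect = ConnectionError("twitter is down")
try:
    tweet("msg")
except ConnectionError:
    print("failure path exercised without any real failure")

# 3. A callable: its return value becomes the mock's return value.
tweet.side_effect = lambda message: message.startswith("#europython")
assert tweet("#europython is fun") is True
assert tweet("hello") is False
```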
So it gives it this kind of fluent interface so you can say when this particular mock is called with these particular values, then return this. And these are all quite simple examples. It supports some quite complex things. If it looks nice to you, check it out. It's on our GitHub. It's on RooTheDocs. I mean, to me, it just makes the test read a bit more like a story and makes the test slightly better documentation for the code of testing. But your mileage may vary. Cool. So we've now learned about mocks and patching sentinels, and we're ready to go and save the universe. But before we can do a warning, beware the dark side of over mocking. What do I mean? So I think most people agree it's uncontroversial. We will mock third-party APIs. We'll mock out course mail servers, web services. We'll mock out things that make our code undeterministic, like randomness and time. And we won't mock out stuff that is built into the language. We're not going to mock out numbers and lists to pull strings and stuff that in your world is as good as built in. So we're not going to, if you're doing a lot of NumPy, app handers, don't mock it out. Leave it in there. And there's probably, in whatever you do, there might be other types, which are like the oxygen you breathe, and you shouldn't be taking them out. But it still leaves a lot of other code. And what do we do of it? If you read the internet, you'll find people ranting on all sorts of things, as you know. But there's two schools of thought on this particular problem. There's a classical TDD approach, which says that we should only mock or stub or fake objects when we really have to, and we should try to use the real objects whenever we can. And there's a mockest approach, which would value having unit tests, testing a single unit as being the most important thing. So they would mock everything. Where do I stand kind of pragmatically towards the mockest end without being crazy about it, but with a real eye on not wanting to make my tests over mocked? Over mock tests are brittle to changes in our code, expensive to maintain, but tempting because they're easy to write and boost coverage statistics. A quick example, we're going to add a feature to our conference delegate, which is the ability to rate talks. Rating talks is good. It's good for speakers. It's good for conference organizers. You should rate this talk, but be kind. It's my first ever one. You should maybe consider using the ISO standard for talk rating, which, as we all know, is the number of kitten pictures in the talk multiplied by the sum of how useful it was and the clarity of the presentation. So how do we test this code? We could go crazy, and we could write, we could put mocks in instead of our numbers. This is a contrived, horrible example. You would never do this. But to show how bad over mocking could be, let's look at the worst ever case. So we pass mocks in, and then our asserts end up looking like some kind of weird, twisted, inside-out representation of the original code. It's a horror to behold. It'd be much better to test our code in this case with meaningful examples and edge cases. And you should do that when mocks don't fit. OK, to summarize, do write tests. Do use mocks. They're easy and fun. Patch is brilliant. Love sentinels like I do, because they're awesome. Use function-side effects as an occasional treat. They are powerful, but don't overdo it. Never overmock. Love each other. And thank you very much. I believe that we have some time for some questions. 
So do you have any? Oh, yes. The question was about the large disclaimer on my slides, and the answer is: yes, I do have one. It's a side effect of working in the finance industry. Basically, to summarize it, it says: if you want to use this talk as investment advice, don't. But do read it yourself; don't rely on my summary. I don't want to undermine the small print. OK, any other questions? Maybe it's an easy one, but I was just wondering, what's the real difference between MagicMock and Mock? I've seen some in the earlier examples. I kind of skipped over that. So here, I use MagicMocks. MagicMocks are just the same as Mocks, but they also mock most of the so-called magic or dunder methods, the ones behind operator overloading in Python. Because here I wanted to test that things were multiplied by each other and added to each other, and those are the dunder mul and dunder add methods, I needed to use a MagicMock. As a quick aside, patch always puts MagicMocks in. But really, you don't ever need to worry about the difference; they just kind of work. OK, any other questions? OK, if you don't have any questions, this talk will be followed by the lightning talks in 15 minutes, so let's thank once more Andrew. Thank you.
Andrew Burrows - Testing the untestable: a beginner’s guide to mock objects Mock objects can be a powerful tool to write easy, reliable tests for the most difficult to test code. In this session you will learn your way around Python 3’s unittest.mock package starting at the simplest examples and working through progressively more problematic code. You’ll learn about the Mock class, sentinels and patching and how and when to use each of them. You will see the benefits that mocks can bring and learn to avoid the pitfalls. Along the way I’ll fill you in on some of the bewildering terminology surrounding mocks such as “SUT”, “Stub”, “Double”, “Dummy” , “mockist” and more and I’ll give a brief plug for my own mockextras package that can enhance your mock experience.
10.5446/21101 (DOI)
When it comes to search engines, it's really difficult to find useful information, especially benchmarks comparing accuracy and quality of the search, and that's why it's really difficult to select a search engine for your project. I can't believe it, but 18 years ago there was no Google; the web search engines around back then were AltaVista, HotBot, Inktomi and so on. And it's even less believable that 26 years ago there was no web search at all. Now the world has changed rapidly: the volume of information available and the bandwidth we have give us the opportunity to get at this information, but unfortunately the rate at which a human being can consume information does not change much, and this inevitably transforms searching from something that only geeks rely on into something every developer has to care about. The simplest thing is direct search: you have a pattern and you search for it. But what if we talk about full-text search? A full-text search engine provides the capability of identifying natural-language documents that satisfy a query, matching all the requirements of the query against the stored documents, and for that you build a search index. The price you pay is additional storage and the time to create the index, and you have to refresh the index because data can change. Let's look at a simple example: we have a simple query and a simple text, and we try to run a full-text search for the query in the text with Postgres, using two functions: to_tsvector, which transforms the data into a text search vector, and to_tsquery, which builds the text search query. Postgres gives you two index types for this. A GiST index might produce false matches, because it uses a very limited hash: each word is reduced to a fixed-length signature, so different phrases can end up with the same ID and you can get a false match that has to be rechecked. The practical difference between the two index types is simple. When your data is static, meaning it does not change very often, you can use the first one, the generalized inverted index (GIN); if you have dynamic data which changes every day, every minute, every second, and you still want to search it, you should use the generalized search tree (GiST), and you can do that.
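As a concrete illustration of those two functions and the GIN index, here is a small sketch driven from Python with psycopg2; the table name, column name and connection string are invented for the example, only the SQL functions themselves come from the talk.

```python
# Minimal sketch of the Postgres full-text search primitives described above.
# The "docs" table, "body" column and the connection string are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=example")
cur = conn.cursor()

# to_tsvector() normalises the document, to_tsquery() builds the query;
# the @@ operator checks whether the vector matches the query.
cur.execute("""
    SELECT to_tsvector('english', 'a fat cat sat on a mat')
           @@ to_tsquery('english', 'cat & mat')
""")
print(cur.fetchone())  # (True,)

# For fairly static data, a GIN index over the computed vector keeps queries fast;
# prefer a GiST index if the table is updated very frequently.
cur.execute("""
    CREATE INDEX docs_body_fts
    ON docs USING GIN (to_tsvector('english', body))
""")
conn.commit()
```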
That normalization of the text happens inside the system. As a result you see the special tsvector format in PostgreSQL, where only the useful information is kept: stop words are excluded, because in most cases there is no need to search on those words. Next, a little bit about Python. One older option is a Django ORM extension; it was written a couple of years ago and it works fine with old versions of Django. Another option is using SQLAlchemy. Here is an example of how to apply it to your project: if you already have some model called Page, you can just add a search index (it's a special field) and override your search manager, where you add the configuration and the search field. After that, on each save, update or delete, the search field and the index are automatically updated by Postgres. As a result you can do very ordinary ORM queries using the search keyword: you can search for "documentation" and "about", for example. The only limitation that Postgres has, and as a result Django too, is a very simple query construction mechanism: you can use only two Boolean operators, AND and OR, and you can see in the second example that I search for "about" OR "document" OR "django" and so on. As for Django 1.10 itself, it now adds a double-underscore search lookup on fields by default, so you can also use it without installing anything extra, like I showed on the previous slide. Or you can annotate a queryset with a SearchVector, filter by "cheese" and see the results; there is a short sketch of this a little further down. It's awesome, because Django converts it directly into the text search query and text search vector SQL, and Postgres executes it very fast. This commit was made, I'm not sure, a couple of months ago, so it's super fresh information. There is no documentation about it yet, only the source code in that commit; maybe they will update the docs, but I'm not sure that's already done. Okay, let's finish with Postgres full-text search. The pros: quick implementation and no extra dependency. The disadvantages: you need to manage the indexes manually, because it's not done automatically; you depend on Postgres, so if you use MySQL or any other database it will not work; there is no analytics data, by which I mean that I can't get analytics about searches out of Postgres, I can only search and that's all, so if I want some deeper natural-language statistics about my text I can't get them; and the query builder is very simple. Okay, let's continue with Elasticsearch. Elasticsearch is a distributed, scalable, real-time search and analytics engine. That's important, because it enables us to search, analyze and explore our data. It is based on the Apache Lucene search index, which is currently the most advanced and highest-performance one around. Who uses Elasticsearch? GitHub uses it to query 130 billion lines of code, and you do that every day. Stack Overflow combines full-text search with geolocation, which is sometimes very useful. The Guardian parses logs with it, like lots of companies, Wikipedia uses it to provide full-text search with highlighted results, and there are Datadog and others. The idea behind Elasticsearch is very simple. The mapping is not exact, but as a parallel it helps you understand how it works: a relational database on one side, Elasticsearch on the other. Databases correspond to indices, tables to types, rows to documents and columns to fields.
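Going back to the Django 1.10 support mentioned a moment ago, here is a minimal sketch of what that API looks like. The Entry model and its fields are hypothetical; SearchVector and the double-underscore search lookup come from django.contrib.postgres and only work against a Postgres database.

```python
# Requires 'django.contrib.postgres' in INSTALLED_APPS and a Postgres backend.
from django.contrib.postgres.search import SearchVector
from blog.models import Entry  # hypothetical app and model

# Single-field search via the __search lookup added in Django 1.10
Entry.objects.filter(body__search="cheese")

# Search across several fields by annotating with a SearchVector;
# Django translates this into to_tsvector/to_tsquery SQL for you.
Entry.objects.annotate(
    search=SearchVector("title", "body"),
).filter(search="cheese")
```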
The most important thing is maybe locking. Elasticsearch uses optimistic concurrency control. It means that when you try to change a document, Elasticsearch just updates it and bumps the version of that document, and when you search for the document it uses the latest version. As for Python clients, there are lots of them: the default Python client, a newer version by Honza Král with asyncio support, and also the DSL, with which you can build your queries. If you work with Elasticsearch you know how difficult it sometimes is to work with those big JSONs and manipulate them; it's annoying, so Honza created the DSL, and it looks pretty awesome. Some examples: you can get data; you can create an index with a number of shards and a number of replicas, so you can scale it; you can add a JSON document to the index, which is just how you create data for your index; you can manage stop words, for instance by supplying a list of stop words; and you can highlight results. My favorite feature is that you can select the highlighting tag, because sometimes it's useful to use your own predefined tag instead of the default one. And relevance: you can explain a query and see the weights. I removed lots of the details, but it's a big, big explanation of why the query returned these results and what each weight value is, and I like it. It's difficult to do that in Postgres, but here it's really easy to understand and calculate why these results come first, for example if you override your relevance or rank function. Okay. Next, very quickly, Sphinx. I only put the differences on the slide. Sphinx is written in C++ and it uses, for example, MySQL as a data source. Compared with Elastic it's not written in Java, and Sphinx assumes that you already have a MySQL database and everything else is based on MySQL, but that's not mandatory: you can use Postgres, you can use any provider. About the Sphinx search server, the analogy is a little different: a DB table corresponds to a Sphinx index, DB rows to Sphinx documents, and DB columns to Sphinx fields and attributes, so it's not similar to Postgres; maybe it's more similar to Elasticsearch. The query language is not SQL, it's the Sphinx query language, but it's very similar to ordinary SQL: you select from test1, which is the index name, where it matches your "python" query, and it will look something like that. I put only the differences; everything else is very similar to Elastic. And last but not least there is Whoosh, which is pure Python and was created by Matt Chaput. His idea was: okay, my clients have no ability to install Java, and that's why he created a full-text search engine in pure Python. It's not super fast, but compared with other pure-Python search engines it is fast, and it has pluggable scoring algorithms; you can add and configure lots of things. By the way, you can find more information in his talk; I'm not going to repeat it. Some small examples: Whoosh depends a little bit on Postgres in the sense that it reuses, for example, the Postgres stop words (it builds a frozenset from the Postgres stop-word list), but you can select any set of stop words; that's just an example from the source code. You can also highlight search results, assuming we have hits in the title. And the most interesting part is the BM25, "best match 25", algorithm. By the way, that is the ranking function which search engines use to rank matching documents according to their relevance to a given search query; it's the most commonly used algorithm and it was developed back in the 1970s. Now, I have created a comparison table for you, because when I started work on my first project with full-text search it was difficult to understand all that information and how to structure it.
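Before the comparison table, here is a rough sketch of the elasticsearch-py and elasticsearch-dsl calls described above. The index name, mapping and documents are invented, and some keyword arguments differ between client and server versions, so treat this as a shape rather than a recipe.

```python
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

es = Elasticsearch()  # defaults to localhost:9200

# Create an index with explicit shard/replica counts
es.indices.create(index="talks", body={
    "settings": {"number_of_shards": 1, "number_of_replicas": 0},
})

# Add a JSON document
es.index(index="talks", doc_type="talk", id=1,
         body={"title": "Full-text search in Python", "speaker": "Andrii"})

# Query it, highlighting matches with a custom tag
response = es.search(index="talks", body={
    "query": {"match": {"title": "python"}},
    "highlight": {"fields": {"title": {}},
                  "pre_tags": ["<mark>"], "post_tags": ["</mark>"]},
})
# response["hits"]["hits"] holds the matching documents with their highlights

# The same query through the DSL reads much more like a story
s = Search(using=es, index="talks").query("match", title="python")
for hit in s.execute():
    print(hit.title)
```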
In this table you can see that Python 3 is supported for most of the search engines and there are lots of clients, so you can use the table as a reference. It's interesting that Postgres and Elastic both have async clients, while Sphinx and Whoosh don't. And I added Django just as an example: if you use Django you sometimes need the ORM and so on, and that's where you might find Haystack very useful. Talking about Haystack: it provides modular search for Django, creating one API layer on top of a couple of different search engines and giving you Django-ORM-like functionality for search. But I can't believe it's really that useful, this idea that you add full-text search and then decide, okay, tomorrow I will use Elasticsearch, today Solr, and the day after tomorrow Whoosh, and so on. It's strange, because that only works for a very, very simple set of features; everything that differs between the engines gets flattened out in Haystack. That's why I call Haystack a Swiss Army knife, and I made a small pros and cons list for you about it. Yes, it's easy to set up, it looks like the Django ORM, it's search-engine independent, and it supports four engines. But if you go deeper, the SearchQuerySet API is very poor; I mean really poor, you can't build very smart queries. It's difficult to manage stop words, because you need to go to the search engine backend and do it by hand yourself; Haystack doesn't care about that. You lose performance, because results have to be converted into search query sets and you work with those, maybe in memory. And it's model based, while most full-text search engines try to promote a NoSQL concept where you have an object or a document, not a model tied to one table, so that's a little bit difficult. And the ugliest thing about Haystack, I think, is the hard-coded settings per search engine: if you open the source code of Haystack you can find hard-coded Elasticsearch settings, hard-coded settings for Solr and so on, and it's annoying, because if you want to change something you need to change Haystack or patch it or something like that. Let's continue with my table. The next thing is difficult and interesting: which index each search engine uses. Elasticsearch uses Apache Lucene, and you can find more information about it, with its default inverted index. As I said before, Postgres uses the generalized inverted index and generalized search trees. Sphinx has three options: disk indexes, real-time indexes and distributed indexes; by the way, a distributed index is just a container for lots of disk and real-time indexes, and that's how you can scale your Sphinx. And Whoosh uses a very simple index folder; as I said before, the guy who created Whoosh said you have only Python and a folder, without any database or Java, so he used the simple approach. The last column is interesting, because sometimes you have a database and you need to search in memory, without creating an index, and that is possible only with Postgres. I like this feature, because you can use it on all your data with no need to create anything; if you want you can create an index, but you can still search without one. For all the other search engines you need to get the data from the data source, put it into the index, build the index, and only then can you search. But Postgres can do it in real time for you.
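Since everything in Whoosh lives in that plain index folder, a complete round trip fits in a few lines. The schema, directory name and document below are made up for illustration; the calls are the standard whoosh.index and whoosh.qparser API, and the default scorer is Whoosh's BM25F.

```python
import os
from whoosh.index import create_in
from whoosh.fields import Schema, TEXT
from whoosh.qparser import QueryParser

schema = Schema(title=TEXT(stored=True), body=TEXT)
os.makedirs("indexdir", exist_ok=True)
ix = create_in("indexdir", schema)          # the whole index is just this folder

writer = ix.writer()
writer.add_document(title=u"Full-text search in Python",
                    body=u"Comparing Postgres, Elasticsearch, Sphinx and Whoosh")
writer.commit()

with ix.searcher() as searcher:
    query = QueryParser("body", ix.schema).parse("python OR whoosh")
    for hit in searcher.search(query):
        print(hit["title"], hit.score)      # scored with BM25F by default
```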
Next, the interesting part is ranking, relevance and so on: which probability algorithm each engine uses for search. Elasticsearch uses the very common TF-IDF, term frequency times inverse document frequency, which roughly means how often your term or query occurs relative to the whole document collection. For Postgres we already talked about ts_rank and ts_rank_cd; it's interesting because you can pass weights into ts_rank_cd as input parameters, so you can influence how the formula calculates the rank, but only through a few parameters. Sphinx is cool because there are lots of variants: by default it uses two factors, where the major part is a proximity score between the document text and the query, computed with something like the longest common subsequence, combined with the well-known BM25. And Whoosh uses, from my point of view, the smartest relevance, because it's an improved BM25; the interesting thing is that you can also replace the relevance function in Whoosh with any function you like. Sphinx has a big table with lots of formulas, so you can configure it as well, whereas for Postgres or Elasticsearch you can't do that. As for configuring stop words, you can do that in all the engines. You can highlight search results in all the engines; it's a common feature that you need. Sometimes it's useful to use synonyms, and you will find that all these engines support synonyms except Whoosh, but there you can do it manually, replacing words or creating a dictionary which associates one word with a set of words, its synonyms. About scaling, I would like to say that the most scalable is Elasticsearch, because it works that way from scratch. For Postgres you have to think about partitioning, table inheritance and so on. About Sphinx, I already said that it uses distributed searching, and you can include lots of indexes in a distributed index, so you scale it manually. Whoosh does not support any scaling. And in the end I would like to present some load tests that I made on real production data. I have one million music artists, and I put them into each search engine and tried to search, because most of the load tests I found for search engines use white noise: they generate random combinations of letters and search for those, which does not make any sense. The performance results are interesting. With the data in one table, Postgres, after I create the index, returns in 4 milliseconds; that's on the latest version I found, 9.6 beta or something like it. Elastic returns in 9 milliseconds, which is also pretty awesome. Sphinx returns in 6 milliseconds, but I'm not sure that I configured it correctly, so maybe those results are not super useful, and Whoosh has lower performance. The only question is what happens if you have more data than fits in Postgres. My next task is to do smarter queries: I have a database with 300 million records which I'm not sure I can put in one table in Postgres, and yeah, maybe the results will be different. In the end I would like to recommend some books which I found very useful: one about Elasticsearch, one if you're interested in Sphinx, and a very good book about database systems. I also created a list of references for you, because it's really difficult to share the details of each index here; you can find some very useful links and read about them, because when you get stuck and your customer decides that relevance, I mean indexing, should work this way or that way, you can read about each index and find out in which case your index will be more efficient. The same goes for ranking; ranking is a really difficult part, and that's why I also put it in the links, so you can read about each scoring method and how it's calculated. Performance will depend on two big factors: first the ranking algorithm, because you have to calculate ranks, and second the indexing, how you build your index. Thank you; the slides you can find at this link, and thank you for your attention. We are hurrying, so, questions?
Please, any questions? I have a question about operators in Django full-text search: you mentioned that there are only AND and OR operators; can we combine them, I mean something like "john AND doe OR foo AND bar"? By the way, that's not a Django feature, it's a feature of Postgres, as on this slide. Any other questions? Yes, please. What's a good way to compare the performance of different search engines, not in terms of speed of response but in terms of the quality of ranking? Yeah, I understand, thank you for the question; it's what I'm doing on an everyday basis. I work on an application that doesn't just do full-text search through data: we try to match users by their interests, and that means ranking is very important for me, and I have lots of tests for it. I build very big queries, with AND and OR, with synonyms, without synonyms and so on, I prepare the expected results manually, and then I run my tests and look at the results. So yeah, unfortunately it's only manual work, and it depends on your real task. Any other questions? Hi, apart from Haystack, do you have any recommendation for Django and Elasticsearch? Django and Elasticsearch: from my experience, just use the Python client. I mean, you can create a manage.py task that will refresh your index. If you plan to store your data in Postgres or MySQL and on some action you want to refresh the index and then search from Elasticsearch, I found a great solution: just use the simple Python client and elasticsearch-dsl, which Honza Král mostly maintains, and only add a manage.py command to refresh the index, create asynchronous tasks to refresh the index, and so on. If you plan to use Haystack, I don't remember the name, but I found an interesting library which overrides some settings from Haystack, so you can add your synonyms, change configurations and so on, and I recommend it if you plan to use Haystack. But the problem with Haystack is that it doesn't support the latest version of Elasticsearch, so you will be stuck on, I don't know, 1.7.5 or something like that. Yes, please. Is there a reason why you haven't talked a lot about Solr? Could you please repeat? Is there a reason why you haven't talked a lot about the Solr search engine, S-O-L-R? Ah, Solr. I have no experience with Solr, but I hope it will be something useful for me to look at. Okay, thank you very much, and I hope that you liked it. Okay, thank you very much.
Andrii Soldatenko - What is the best full text search engine for Python? Compare full text search engines for Python. ----- Nowadays we can see lot’s of benchmarks and performance tests of different web frameworks and Python tools. Regarding to search engines, it’s difficult to find useful information especially benchmarks or comparing between different search engines. It’s difficult to manage what search engine you should select for instance, ElasticSearch, Postgres Full Text Search or may be Sphinx or Whoosh. You face a difficult choice, that’s why I am pleased to share with you my acquired experience and benchmarks and focus on how to compare full text search engines for Python.
10.5446/21103 (DOI)
We are here with Anjana Vakil, who is going to speak to us about Python bytecode. So thanks to the speaker. I feel like we need, like, you know, stand-up comics here to open up the crowd or something. Hi. I'm Anjana Vakil. And yeah, I hope you guys are excited about bytecode because I am. Can everybody hear me okay? Great. So, who am I? Well, my name is Anjana and I'm a Pythoholic. I've been addicted to Python for probably about three years. Right now I use Python as an outreach intern at Mozilla to do some testing work for them. But what I want to talk to you about today is some explorations into the core of Python that I started doing while I was a participant at the Recurse Center, which is a really cool programming community in New York City where you're allowed to just follow whatever excites you about programming. So today I'd like to tell you a little bit about an adventure that I had that involved getting started with Python bytecode. I'm by no means an expert in it, but I just wanted to bring you along on my first encounters with it and show you why I think it's really cool. So while I was at the Recurse Center, I came across this puzzle. I think of it as a Python puzzle. It turns out that Python code runs faster if you stick it inside of a function and then call that function. Maybe you guys are already familiar with this. I was not. But for example, if we have a rather lengthy for loop that does nothing useful, it just evaluates a variable i for each i in a rather long range of i's. If we call that just in the global Python module, it takes quite a bit longer than if we stick it inside this run loop function and then call that function once. And to me, this was puzzling because looking at this source code, I don't see any real meaningful difference. In fact, all I see in the inside-function version on the right is that, if anything, Python should have more work to do because it's got to create a function and then call it. So I couldn't really understand from looking at the source code why this would be so much faster, the right-hand side. Turns out that looking at the bytecode can give us a little bit more insight than looking at the source code for certain types of Python puzzles like this one. And that all has to do with what happens when we run Python code. So this was something I hadn't really ever thought too much about before. What happens when I actually execute a Python program? And today I'm just talking about CPython. A lot of this is implementation detail specific to the CPython interpreter. But hopefully that's what a lot of you guys are using. And differences between CPython and other interpreters are also really fascinating, but not the topic today. So when we're using CPython to run some Python code, we start out with our beautiful, Pythonic, easy to read, nicely indented source code that looks fantastic. And that gets compiled by the part of CPython that's called the compiler. It gets turned into a parse tree, an abstract syntax tree, a control flow graph. What those are doesn't really matter for our purposes right now. They're all just different abstractions of what we want our code to do; the important part is that it ultimately gets compiled down to bytecode, which obviously we'll be talking a bit more about in a moment. And that bytecode, whatever it is for now, gets passed to the interpreter and is what the interpreter actually runs. The interpreter being a virtual machine that is performing operations on a stack of objects.
So the interpreter executes that byte code and then you get out whatever awesome stuff your Python program is designed to do. Great. So this byte code, what is it? Well as we saw, it goes kind of in between, it comes at an in between place between your source code and the effects of your program. So in one sense it's an intermediate representation of your program. And in fact it's the representation that the interpreter itself sees. The interpreter unfortunately doesn't get to look at your beautiful readable Python source code, it only gets to see this byte code. So if we think about the interpreter as a virtual machine, we could think about the byte code as the machine code for that virtual machine. So when we think of more languages that are traditionally considered compiled, we think of taking source code and translating that into machine instructions for an actual physical machine. And in fact, it's pretty much the same idea, it's just that the machine is virtual and is the Python interpreter instead of the actual physical machine. And so since the virtual machine, the Python interpreter that we're dealing with is basically a stack machine, the byte code that we give it is a series of instructions for what to do, which objects to add on to that stack, which operations to perform on objects that are already on it, how to pop things off and return them back to us. So it's a series of instructions, the byte code is. And another interesting thing, if you've ever wondered about those.pyc files that pop up all over the place when you're importing Python modules, these are actually caches of the byte code. This is the byte code that the compiler has already spit out. And the nice thing about this caching mechanism is that since we saw that from source code to execution, we have those two steps, the compilation and then the interpretation, if we haven't updated the source code since the last time we ran the program, we can skip the first part, we can reuse the byte code that we already compiled before. So that's what those.pyc files are. And if you've ever tried to read one of those, to open one of them, they're gobbledygook, they're not meant for us measly humans to understand. So how can we humans read this byte code that's intended to be read by Python? Well, there's a really handy module called dis, which has a fun name. It stands for disassembly, so disassembling the byte code. The documentation is right up there, put the link in there. And this allows us to analyze certain types of Python objects to read the byte code for that object in a way that we humans can understand instead of looking at the bytes themselves, which isn't that helpful to us. So for example, if I have a really simple function that says, that is called hello and returns, can somebody help me pronounce with the basket here? Kai-cho? Kai-cho? Try. Anyway, if we dis this function, hello, we get our first peak at disassembled byte code. Cool. So the white lines at the bottom here are our really, really simple byte code. We just have two instructions here. And without really knowing what all these numbers are, what the columns are, what we're looking at, we can already get a sense for what's happening. We're loading some kind of constant, a string, onto the stack, and then we're returning it. Sweet. So let's break it down. What exactly are we looking at here? What does it mean when we see the output of dis? So we have a series of rows where each row in the output is an instruction to the interpreter. 
And on the left-hand side, a lot of the time, we'll see a line number. Two here is the line in our source code. So this is just for us to help us know how the source code lines up with the byte code. Not every line in the instructions will have a line number. As you can see here, the return value line doesn't have one. Because sometimes more than one instruction can fit on one source code line. So sometimes we only see the line number when it's the instruction that starts the line. And next to that, we can see an offset in bytes. How far into this string of bytes is this particular operation? That's not super interesting, in my perspective, for us humans. But what is interesting is the next thing, which is this string, load constant, which the source stands for, and that's the name of the operation. And in a minute, we'll look at some more of those and see what we can find out about all the different possible operations you could encounter when you're reading this disassembled byte code. If the operation in question takes arguments, which not all of them do, but if it does, then you'll see some information about the arguments on the right-hand side. So those last two columns on the right, we see the argument index, which interpreting that and what exactly means index in what object, that depends on the operation. There are a few different places that Python keeps track of the different values, like constants or variable names that you would need to carry out a particular operation. And that's all something you can look up in the documentation. But what's more interesting for our purposes now is the value of that argument, which you can see to the right in parentheses. And this is Python kind of giving you silly human a little hint about what it is that this operation is operating on. So some operations, we've already seen load constant, which takes an argument C and it pushes C onto the top of the stack, TOS. Then there are things like binary add, which takes whatever is already on the top of the stack, the top two items, adds them together and puts that result on the top of the stack. And then there's things like call function, which its argument is a bit strange. Its argument tells it how many positional or keyword arguments that function is expecting so that it knows how many objects to take off of the top of the stack and in which order to pass to that function. So there's a ton of these. I would not be able to cover them all, even if I had an hour or more, whole day. But they're all conveniently documented in the documentation for the disk module. So that's linked at the top of the page here. And for each of these operations, the names that we see, these operation names are just for humans. Python doesn't care. It has a number for each of them, of course. That's called the opcode or the operation code. And if you're curious about what the correspondence between a name and a code is for a given operation, you can use these attributes disk op map and disk op name. Op map is a dictionary where you can just look up a particular operation name and find out its code. And if you happen to already know the code, you can pass it to op name and it is an indexed list of all the sequence of all the operations so you can find out which code corresponds to which name. Just some convenience there. And so now we have a basic idea of how the disk function works, how we can disassemble some byte code. What can we use it on? Let's try to disk some things. Let's find out what we can disk. 
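To make the column layout and the name-to-opcode mapping concrete, here is the kind of thing you can type into a REPL. The function is made up, and the exact byte offsets and numeric opcodes in the comments vary between CPython versions, so treat them as approximate.

```python
import dis

def hello():
    return "hello"

dis.dis(hello)
#   2           0 LOAD_CONST               1 ('hello')
#               2 RETURN_VALUE
#
# columns: source line | byte offset | operation name | arg index | (arg value)

# Mapping between human-readable operation names and numeric opcodes:
code = dis.opmap["LOAD_CONST"]   # some integer, e.g. 100 on older CPythons
print(code, dis.opname[code])    # e.g. "100 LOAD_CONST"
```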
I love this name. Okay. So we already saw we can disfunction. Here's a nice little Python example one. We're adding spam and eggs. And if we disadd, we see we have a slightly, ever so slightly more complex thing to do here, which is we're loading two things on spam and eggs and then we're doing a binary add on that. Cool. Starting to get comfortable with this. What else can we disk? How about a class? It's a really simple class here. It's a parrot. It's got one attribute called kind. It's a Norwegian blue. This is money Python humor for anyone that's not familiar. And it has a method is dead, which always returns true. And when we pass that parrot class to disk, we see that it disassembles each of the methods on that class, so including the constructor method. And so here we've got, let's see, a new operation name here in the disassembly of Dunder and Knit. Here we have store attribute. Cool. So we're starting to get familiar with some of these new operation names. In my experience, a lot of the times they're self-explanatory. But if you're ever curious, okay, I don't know what that code, what that operation name does, just go to the disk documentation. It's all laid out. Another thing we can disassemble if we're using Python 3.2 or newer is a string that contains valid Python code. So we don't have to actually put that code in a module. We can just use it, disassemble the string directly. It gets compiled to a code object and then that code object gets disassembled. So here we are just assigning spam and eggs on one line, which is a cool thing. Python lets us do. And we see a new thing, like unpack sequence. Also a pretty self-explanatory operation name. Okay. What about an entire module? Let's say I have a really simple module called Knit.py. It has one line. It says print the string, me. I can actually disassemble that straight from the command line by passing the m flag and the disk module and then the entire contents of that Knit.py. Cool. So now we see, aha, we're calling this function print and we see the argument to call function is like some number of positional and keyword arguments. That's what I was talking about before. But what we can gather from this is that we're loading on this constant and then we're calling the function print on it. Cool. I think it's cool. Anyway. All right. What about another way to dis a module? Well, as we saw, we can use code strings. We can disk code strings. So what if we read in the module using the open.read function? So now we have the whole contents of the module as a string and we can disk that. Cool. It's basically the same thing as last time. There's a little one less kind of return there, but essentially we're getting the same functionality. Good to know. And another way we can disk a module is by importing it and then dising the imported object. In this case, night.py got a little more complicated, we added this method is flesh wound, which always returns true. And as you'll notice, when I import nights, the whole module is getting executed at print snee. But in the disassembled byte code, we don't see any mention of the printing part. All we see is flesh wound. So when you do it this way, when you try to dis a module this way by importing it, it's only going to disassemble the functions in that module. Anything else that's there just kind of as a script is going to get, is not going to get put in the output of disk. So that's just something to know about the different ways of using disks. Okay. Is there anything else we can disk? 
How about nothing? What if we pass no arguments to it? In this case, we're not dising nothing. We're dising the last trace back, the last error, which is a cool thing because let's say I tried to print this variable spam, which I had forgotten to assign. So I get this name error, of course. If I do disk.disk with no arguments, I can see the byte code that tells me exactly where that error came from. So you see the arrow to the left of the operation names there. That indicates that, okay, when I loaded print, that was fine. I found print, okay. But when I loaded spam, I had a problem. So these are some different things that we can disk, which if you're like me, is just fun to just spend lots of time just dishing everything you can get your hands on just to see what they do. And apparently can also help you in solving some puzzle challenges that one of the sponsors has out there. But other than that, why do we care about doing this? Why do we want to do this if we're not at a conference where we get free USB power packs if we solve puzzles? Well, as we saw, when we use the disk.disk with no arguments, that's a really useful debugging tool because sometimes the error messages that we get from Python, although they're usually wonderful, sometimes they don't tell us everything we need to know. So for example, let's say I had a line in a really complicated mathematical code there that is dividing two, has two division operations on the same line. So ham divided by x plus ham divided by spam. That gives me a zero division error and it tells me what line in my code the zero division error came from, but it doesn't tell me whether it was eggs or it was spam that gave me the error. So if I dis the trace back, I can actually see that, okay, we were going through, we loaded ham, we loaded eggs, we did a true divide and there was no problem. Ah, okay, so eggs was fine, then we loaded ham again and we loaded spam and then when we did that divide, that little arrow says that's where the problem was. So I know that the problem in my complex mathematical computations is spam and that's what I have to go back and fix. So this can be a really cool debugging tool for certain situations and it can also be a helpful tool to solve puzzles, not just the kind that the sponsor has, but also the kind that I mentioned at the beginning, where we have this for loop which takes a lot longer outside of a function than in and yet in the source code it looks pretty much identical. So let's try and get a little bit more insight here by dising this outside function module and the run loop function from the inside function module and see how they compare. Okay. So we have outside function dot pi. Now we know a few different ways of dising a module. I'm going to choose the reading, the open dot read method and get a string called outside and then dis that. So this is now what Python sees when we run that outside function dot pi. Okay. I don't understand all of this. I don't necessarily need to. I can get a general sense of what's going on. We're loading this range function. We've got a really somewhat big number that we're loading in. Then we have this new thing, get itter and for itter. For itter that's our for loop there. So that's what that looks like to Python. Cool. And then inside of that we're storing I, I guess for each time we go through the for loop and then we're loading I because we had a really, really useful for loop in that code that we just saw. And okay. All right. Seems to make somewhat sense. 
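As an aside, the traceback trick from a moment ago also works outside an interactive session: dis.dis() with no arguments needs the interpreter's "last traceback", but dis.distb() can be pointed at any traceback object. A small sketch, with the variable names borrowed from the ham-and-spam example above:

```python
import dis
import sys

ham, eggs, spam = 10, 2, 0
try:
    result = ham / eggs + ham / spam   # which of the two divisions blew up?
except ZeroDivisionError:
    dis.distb(sys.exc_info()[2])
    # The '-->' arrow in the output marks the divide instruction that raised,
    # i.e. the ham / spam on the right-hand side.
```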
Let's see how it compares with inside. So from the inside function dot pi file, what we care about is this run loop function. So I'm going to import that in. I'm going to call it inside just for convenience and symmetry. And then I'm going to dis inside. At first glance this looks pretty much the same as what we just saw. So let's see if switching back and forth really fast will tell us anything. Inside, inside, outside, inside. Okay. So what do we notice? Differences. Well, first of all on the left hand side we notice that some of the line numbers are different. That's because we had one extra line in the inside function. We had that function definition. That's probably not important. What else we got? Aha. With the range function in one case it's loading, oops, it's loading as a name. In one case it's loading as a global. All right. Maybe there's some difference there but we're only doing that once. So oops. That's probably not that big of a deal. What we probably care more about is what happens inside the iteration. So after that for iter. And here we see, okay, when we're doing inside we're using something called store fast and load fast. And we're doing outside its store name and load name. See 16 and 19 there. So I don't know what those mean. Store fast sounds like it would be faster. And load fast sounds like it would be faster. But I don't know why or what these do. So how can I find out? So I can investigate by going into the disk documentation where it has a list of all of the different operation codes and tells you what they do. I've just copied those over here. Okay. Store name. Let's see them names. Name is code names. I don't know what that is. All right. Load name. It's using code names again. Okay. So it looks like store name has something to do. It has to look up something with an index and then it goes find the attribute. And so maybe that's something that could be possibly slowing us down. Whereas store fast and load fast, they're using something else called code names. And we don't see anything about looking up indices and whatever. So that might have something to do with it. This is starting to get me on the right path. And if you're really interested in digging in, if the disk documentation hasn't answered all of your questions, you can go right to the beating heart of Python and dig deeper into C of L.C. which is where the Python interpreter processes all of these different codes. And there's a really cool talk by Allison Capter called a 1500 line switch statement powers your Python. This is true. There's a huge switch statement where it's telling C Python what to do with all the different operation codes that you might encounter. And so if we look at the actual code for those operations, load fast and load name, we see that load fast is like a little bitty thing. It's like ten lines. And it involves a look up into an array called fast locals which sounds fast because it is fast. Load name on the other hand, first of all, it's more code. It's longer. It's more complicated. It's about 50 lines. And it involves a dictionary look up which is quite a bit slower. So it turns out that one of the main speed differences here which is a little bit tangential to the byte code discussion is that when you have code inside of a function, because when you define the function, you know how many variables you need in that function, Python can just assign a fixed length array. 
So when it needs to look up something in that function, it can just index into that array and pull it out really quickly. Whereas when you have it in the global scope, it doesn't know. You might assign new variables all the time. So it keeps things in a dictionary. And so looking up from that dictionary is a bit slower. Anyway, then there's another thing called opcode prediction which makes it even faster if you combine certain operations together because C Python can predict what's coming next. And it has an idea. It can save some ticks by doing common operations that always go together by predicting it in advance. So the combination for it or store fast happens to be one of these predicted combinations. It moves a lot faster than combining for it or store name. So if you're curious, I saw so strongly suggest you check out this really cool stack overflow conversation. Why does Python code run faster in a function? And Allison captures talks which talk a bit more about how we can start exploring this giant switch statement that tells Python how to interpret all of these different operation codes.
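For readers following along at home, here is one way to reproduce the talk's comparison in a single script: compile the same loop once as module-level code and once as a function body, and look at the disassembly of each. The opcode names shown are from CPython around 3.5; newer versions rename some of them, but the NAME-versus-FAST distinction is the same.

```python
import dis

loop = "for i in range(100):\n    i\n"

print("--- at module level: LOAD_NAME / STORE_NAME ---")
dis.dis(compile(loop, "<outside>", "exec"))

def run_loop():
    for i in range(100):
        i

print("--- inside a function: LOAD_FAST / STORE_FAST ---")
dis.dis(run_loop)
```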
Anjana Vakil - Exploring Python Bytecode Do you ever wonder what your simple, beautiful Python code looks like to the interpreter? Are you starting to get curious about those `.pyc` files that always pop up in your project, and you always ignore? Would you like to start investigating your Python code's performance, and learn why some programs you write run faster than others, even if the code looks more or less the same? Have you simply fallen so completely in love with Python that you're ready to peer deep inside its soul? If you, like me, answered "yes" to any of these questions, join me in an illuminating adventure into the world of Python bytecode! Bytecode is the "intermediate language" that expresses your Python source code as machine instructions the interpreter (specifically CPython, the "standard" interpreter) can understand. Together we'll investigate what that means, and what role bytecode plays in the execution of a Python program. We'll discover how we simple humans can read this machine language using the `dis` module, and inspect the bytecode for some simple programs. We'll learn the meaning of a few instructions that often appear in our bytecode, and we'll find out how to learn the rest. Finally, we'll use bytecode to understand why a piece of Python code runs faster if we put it inside of a function. When you go home, you'll be able to use bytecode to get a deeper understanding of your Python code and its performance. The adventure simply starts here; where it ends is up to you!
10.5446/21104 (DOI)
Hi, I'm Anjana Vacchial. Hello. Hope everybody's enjoying the week so far. I don't know about you guys, but I've seen a lot of slides this week, so today I figured I'd do something a little bit more experimental. No slides. I'm just going to show you some code, some really, really silly little code examples. Please do not take them seriously. Don't write code like what I'm going to show you today. But my hope is that they'll illustrate some of the fun special dunder or double underscore methods here, which are also called special methods, magic methods, but I like the term dunder the best, so I'm going to call them dunders. And as I mentioned, I put up all these examples in a little repo, vacchialay. And what I'm hoping is that at the end of the talk, to have a couple extra minutes to discuss with you guys, I'd like to do this more interactively if people have other dunder tips and tricks that I am not able to cover, because there's too many wonderful things to fit into 20 minutes. I would love to have people discuss them afterwards, and possibly even after the talk, if you can contribute to this repo, add stuff to the wiki, open issues, start discussions, and if you have fun little code samples that you want to add, there's a directory in here, sharing is caring, where you can put in whatever you want, and file PR, and I'll put it in there. So that's what I'd like to do today, because dunder methods are super fun, if you ask me. So what are the dunders? Well, they're these special methods and attributes surrounded by double underscores, which is why we call them dunders. And some of them are our best friends, right? So everybody probably uses dunder in it all the time, it's our basic constructor method. So here I have got a custom class, I'm going to call it a stringy int, and it's going to be a weird kind of number. I'm constructing it with this dunder in it, I'm giving it an attribute called value, and then another dunder everybody probably is already super comfortable with is dunder str, maybe there's a better way to pronounce that, I don't know, where we can have whatever kind of string representation we want. Probably it's just like the value of the object, but this one's going to be more exciting. It's going to have, oh my god. So this is why you should never live code in presentations. So if we run this little module, we get to use another dunder that's super fun, which is this beloved if dunder name equals dunder main block, which as you all probably know is run only when you run the Python module itself and not when you import it into something else. So that's cool. Let's run a little interpreter, and I've set up some stringy ints here, so we've got one is a stringy int object, and two and three. Awesome. So if I print them, since I've got my dunder str, I have fun string representations. And so another dunder that we all probably know and love is what is giving me this weird Python objecty looking string representation here, and that's dunderrepper, which if I uncomment this here, is supposed to be another type of string representation that's more the code object itself. It's supposed to be more unique to the object. In this case, I'm just going to have it be more boring and just print the string of the value. And so we can see that when we quit this and do it again now, now when it evaluates one, it prints out just the value. So I don't have to look at all this gobbledygook addresses and whatever. Okay. So far, so boring. We all probably know these, no one loved these dunders. 
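Here is a reconstruction of that toy class; the exact strings and behaviour are made up for illustration, as in the talk, but the dunders are the real ones.

```python
class StringyInt:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        # the human-readable representation used by print() and str()
        return "the int of value %s" % self.value

    def __repr__(self):
        # what the REPL shows when it evaluates the object
        return str(self.value)

one = StringyInt(1)
print(one)   # the int of value 1
one          # in a REPL this would now echo just: 1
```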
What about other fun stuff? So when I have regular integers, I can add them or multiply them. That's because the built-in int object has these fun operator dunders, like dunder add and dunder mole, which are used by the plus and the star asterisk operators to perform these mathematical operations. And I can overload them in my own classes by just implementing these methods. Super cool. So let's see here. What happens now when I have my one and my two? And usually if I add two integers together, I get, you know, reasonable things, like three. But if I add my stringy ints together, ah, it's giving me an integer that's just mushing them together like strings. This is a completely useless class. You probably don't ever need to implement anything like this. But the point is, you can because Python magic. So what if I want to add a stringy int to a regular int? Mm, doesn't work. The operand type is not supported. That's because it's looking for a dunder add function on the integer to the left of the plus operator that works with a stringy int object. And it doesn't find one. So if it tries to find the dunder add method on the left-hand side object but doesn't find something that works for the right-hand side object, Python will also try looking for these special R methods, which are like the sort of opposite implemented for the right-hand side object, like for example, dunder add, which is possibly the best-named method in the world. So this one will be called on the right-hand side object as sort of a fallback if it can't use the regular dunder add on the left-hand side object. So now we can see that if we try 1 plus 1, aha, it works now. We didn't change anything about the built-in int, but we use this dunder add, and it is super add. I'm sorry, I can't help myself, you guys. One other operator that is a little bit different is this equal equals, right? We all know and love it. One is one, and the stringy int one is itself, but it would be cool if we knew that those two things were similar somehow. So I can do that by implementing dunder eek, and there's equivalence for all the, you know, less than, greater than, etc., etc. In this case, I'm going to try to make it work for anything that I can intify. So again, don't write code like this. It's just an example. So now if I try, okay, one is still itself, and my stringy one is still itself, and now, hopefully, oops, wrong direction, aha, now it knows how to compare these two different types. So, the dunder eek function also is a bit special, because it's also used for, for example, making objects hashable so that it can be used as keys in a dictionary. So if I have a dictionary d, right now, if I want to make an integer a key, that's no problem. We do that all the time. But if I want to try using one of my stringy ints, it says, oh, no, it's unhashable. Well, we can fix that, of course, with a dunder, called dunder hash. Oops. So if I implement this dunder hash function, what I want to do is return a unique hash value for whatever this object is. In this case, I'm going to do a really silly one, which is returning the integer value itself. Ideally, you'd have something better than this. But the important thing is that you don't want to implement this kind of thing on an object that shouldn't be hashable, like a mutable object. You don't want that to be a key in a dictionary. But these are not mutable, so I can use it. So let's see now. If I have my dictionary d, okay, I can still use my regular integer keys. Let's try now. Oh, I overwrote it. 
Sorry. Aha. Okay. So now I've got, it's not complaining about the hash ability. And because of my dunderrepper function, it's difficult to see this. But if I do for key in d keys, it really is harder to type when you're up here, right? Let's print the type of each key. Okay, I see I've got one int and one stringy int. So it's working. So we can now use this new custom type as a hashable type for dits. All right. One other thing I wanted to talk to you about with these stringy ints is a fun little dunder that's an attribute, actually, not a method, which is dunder slots. So dunder slots is a bit different. When I have an object like a custom object, usually I have a dictionary dunder dict that stores all of the attributes for that object. So that's what allows me to do one dot value and get something out. And if I assign something new to the object, if I look at the dict, I see, aha, I got added to this dictionary. It's just a regular dictionary. You can mess with it however you want. But the thing is that dictionaries take up space. And so whenever you create a new object, Python gives you this dunder dict for all of the object's attributes. And it might be the case that for an object like stringy ints, you know that it's never going to have more attributes than value. It's only going to have that one. Or maybe you just have a small set of attributes. And if I'm going to be creating, like, millions and millions of stringy ints, creating all those dunder dict dictionaries could take up more space than I want to use. It takes up also a little bit of time. So what I can do is use this, declare this dunder slots and name out all of the attributes that I want on my object. In this case, it's just value. And what that does is prevent the dictionary from the dunder dict dictionary from being created. So if this is right here, if I try now accessing the dict directly, ah, it doesn't have one. Does it still have its value? It does indeed. Can I add a new attribute? Oops. Nope, I can't add extra attributes. So it's basically constraining the size and shape of this object in a way that if you're creating gajillions of objects, that efficiency might actually come in handy. So I thought dunder slots was pretty cool when I heard about it earlier this year. All right. So let's see. Time flies when you're having fun with dunders. There are a lot of other fun dunders that we can use to make container objects, for example. We already saw how we can make objects that are kind of simpler, like numbers, numeric types. But what if we want to make things that have contents? So for example, let's say I want to make a list, but I find lists really boring because when you add things to them, you append things, you know they're going to show up at the end. And when you index things, you know that you're going to find the right object for the right index. So I'm going to add a crazy list where there's just an element of randomness. So in this case, I've made a silly little object called a crazy list, which I've got my dunder init, I've got my dunder wrapper, I'm adding a little append method just because all good lists need one. But instead of, you know, appending things to the end of this self.values list, I'm just going to insert them at a random place. Because why not? And so again, I'm using my dunder name thing in here to run some code right when I run the module. Let's see here. All right. So I've got an L object and it's got some elements in it. I want to find out how long it is. Oh, no, it has no length. 
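Stepping back for a moment to the dunder slots idea described above, here is a minimal sketch (the class name is illustrative): naming the allowed attributes up front suppresses the per-instance dunder dict and keeps every instance small and fixed.

```python
class SlimStringyInt:
    __slots__ = ("value",)   # the only attribute instances may have

    def __init__(self, value):
        self.value = value


one = SlimStringyInt(1)
print(one.value)      # 1
# one.__dict__        -> AttributeError: instances have no per-instance dict
# one.extra = "nope"  -> AttributeError: 'SlimStringyInt' object has no attribute 'extra'
```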
That's not good. What should I do? Probably use a dunder. In this case, the one I want is dunder len. This is what's called by the built-in len function. A lot of the built-ins that we're used to using in our super beautiful Python code depend on these dunders, and one really simple one is dunder len. In this case, instead of telling you how long the list actually is, I'm just going to give you a random number that's somewhere in the vicinity of its length. Super useful, right? I hope you noticed the word "abusing" in the title of this talk. So now, if I call len on my object, aha, it gives me a completely wrong number, but at least it gives me a number. Sweet. Okay. So there's another, let's see, I'm going to try and skip ahead here. What if I want to do "for item in my list, print the item"? Oh no, it's not iterable. That's not good. How do we fix it? How do we fix it with a dunder? Yay! In this case, dunder iter. I skipped dunder bool here; that's used for things like "if l". You get the idea. But dunder iter is probably more important: if you're trying to make some kind of sequence object, something you should be able to use in for loops and that sort of thing, you're going to want to implement this. How it works exactly is a little more complicated than some of the other dunders, so in this case I'll just point you to the documentation, because we're running out of time. Here I'm just going to yield a random element from the list, a completely wrong number of times. A really useful dunder iter function, but hopefully you get the idea. So now let's try our "for i in l, print i". Ah, okay. I have a little thing in here that shows when dunder iter is getting called, and it got called for that for loop. And it's doing something really useless: not only is it printing the wrong number of items, it's also just printing question marks sometimes, because mystery. But the point is that we can use this in for loops. So if I wanted to define a really super useful dunder str method that uses a for loop to print out all the things in the list, I could. Okay. But what about indexing? Usually in a list like l, I want to be able to get a certain element using this bracket notation, but it doesn't support it. If only it did. If only we had implemented dunder getitem, which lets us pass an index or a key to these brackets for our object. Depending on whether you want your object to be indexable using integer indices, or you want it to be a keyed container like a dictionary, you can define this method to look for a certain type, like only integers, or to handle anything, which is what I'm doing here: I don't even care what the key or the index is. I'm just going to accept it and give you a random item from the list no matter what you ask me for. So now I have my l, and if I want a certain index, yeah, sure, whatever. Yeah, it's totally working. Great. And what if I want to try to access it as if it were a dictionary? Sure, no problem, we'll just give you random things from the list. The point is that if you were to make an actually useful dunder getitem method, then boom, you've got a dictionary or an indexable sequence, whatever you want.
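A hedged reconstruction of the CrazyList toy described above (details may differ from the speaker's repo): append inserts at a random position, dunder len is only roughly right, dunder iter yields random picks, and dunder getitem ignores the key entirely.

```python
import random


class CrazyList:
    def __init__(self, values=None):
        self.values = list(values or [])

    def append(self, item):
        # insert somewhere random instead of at the end
        self.values.insert(random.randint(0, len(self.values)), item)

    def __len__(self):
        # called by the built-in len(); deliberately only in the vicinity of the truth
        return max(0, len(self.values) + random.randint(-1, 1))

    def __iter__(self):
        # called by for-loops; yields a roughly-right number of random picks
        print("__iter__ called")
        for _ in range(len(self)):
            yield random.choice(self.values) if self.values else "?"

    def __getitem__(self, key):
        # called by l[key]; accepts any index or key and returns something random
        return random.choice(self.values)


l = CrazyList([1, 2, 3])
print(len(l))            # wrong-ish, but a number
for item in l:           # works, uselessly
    print(item)
print(l[0], l["anything"])
```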
Okay, and there's also a dunder set item for the equivalent, which you can imagine for setting a certain item at a certain index or setting the value of a key. Okay, one other dunder that I want to talk about with containers is dunder contains. So if you have an object where you're going to be wanting to test for membership, like if x is in l, if you don't have dunder contains implemented, what you'll see is if I do, let's say I want to see if 2 is in l. If you don't have dunder contains implemented, it's actually going to use the dunder iter and go through everything in the list and see if it finds something in there that's the thing you're looking for. That can be a bit slow depending on various features, like for example how far close to the beginning of the list the item you're looking for is. So if you implement dunder contains, this can be, doesn't need to be, but can be a faster way of testing for that membership. So depending on what your use case is, if you need something that you can really quickly decide whether something is a member of, like a set, let's say, dunder contains can be a good idea. So now we see that if I ask, okay, is 3 in l, see how my dunder iter didn't get called? I had that little print statement there. It's using dunder contains first. Okay, so a crazy list that's completely useless, but nevertheless showcases the magical container dunders. One last thing I want to show you, and this is probably my favorite, are some fun function dunders. So I have a little function here called add. It's super boring. It just adds two things, spam and eggs, whatever. But because Python is magical, since I have this doc string in here, I can, if I ask Python to help me out with add, it tells me, okay, cool, the contents of that doc string and information about the function itself. Sweet. And when I add two things, it, you know, it does what the function says to do. But what if that's not cool enough for me and I want to hack my little function on the fly? So this is probably something you should never, ever do. But if you were a cat, you wouldn't care about adding two numbers together. You would want your human to give you more tuna. So what I've got here is a little function which makes use of some fun function object dunders, dunder doc, which contains this doc string. I'm going to change it to something more cat specific. Dunder code, which is actually the code object, the content, the functionality of your function. You can actually mess with that. You can replace it to be, for example, the contents of another function called more tuna, which instructs the human to give more tuna. So now, if I have my regular add function, okay, add still good, add it still works. Okay, if I now catify it and I try to add two things together, it actually changed the functionality that's attached to this add name. And similarly, the doc string is different and useless. This is something that you probably don't really ever want to do. But the important thing is that you should know that you can. And so if you ever see this kind of messing with going on, be really careful. All right, last thing, hopefully I have time for one more dunder. The with keyword works with two special dunders called dunder enter and dunder exit. And this is what allows us to define a context manager. So basically, what happens is this dunder enter method is called whenever we enter a with block, and it can set something up for us. 
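Before the cat-themed demo that follows, here is a minimal, genuinely useful version of the dunder enter / dunder exit pair being introduced: a context manager that times whatever block it wraps. It assumes nothing beyond the standard library.

```python
import time


class Timer:
    def __enter__(self):
        # called at the top of the with-block: set things up
        self.start = time.time()
        return self                     # bound to the name after "as", if any

    def __exit__(self, exc_type, exc_value, traceback):
        # called on the way out, even if an exception was raised
        print("block took {:.3f} seconds".format(time.time() - self.start))
        return False                    # don't swallow exceptions


with Timer():
    total = sum(range(1000000))
```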
For example, if you use "with open" on a file, it will do some things like give you the file object, and when you exit the with block it calls this dunder exit method, which can do something useful like closing the file object. In this case, we're going to have it do something less useful: it's going to catify a function on the dunder enter, and it's going to uncatify the function, by restoring the original dunder code and dunder doc in place of the boring human code and the boring human doc, on the dunder exit. So what we should see here is that my regular add function works. What if I do "with cats in charge of add", and then I try to add two and three, and let's do something else too: let's call the help and let's add, I don't know, four and five and six. What I'm doing on the dunder enter is catifying this add function, so we should see that any add calls I make in the middle of this block are the catified version. But when I exit it, I'm setting it back, so any add calls I make after this block should be normal again. And just because maybe you want something that's actually a tiny bit useful out of this talk, I'm also putting in a timer here. I'm logging the time when the cat reign begins, in the dunder enter method, and logging the time when the cat reign ends, in the dunder exit method, and then printing out how long the cats ruled for. This is an example of something you might actually want to do: write your own timer, for example, to time an arbitrary block of code. Okay, so let's try it out. Ah, okay, so it called help and it meowed. And we saw that it called add twice, and I'm actually using the values of the arguments to determine the number of r's in the purr here. And then it told me how long the cats were in charge. So, just a little example of what you can do with these context managers; depending on what you're trying to do with your code, it can be a really useful pair of dunders. All right, so that's all I wanted to tell you about the dunders that I find cool. I would love to hear now if people have other ideas for dunders you find cool. And just before we open it up to everybody, I wanted to point out that the documentation for the Python data model has information about all the dunders you could possibly want to know about. So if you're curious about any of these, check that out. All right, what do you think? Dunders to share? — Hello, thank you for your talk. If you override dunder all in a module, then you override what can be imported, so you can just say, oh, all is just these three functions. — Okay, dunder all is not one I've got in here, but that sounds really awesome. So if you override dunder all, you can control what gets imported from the module. What can be imported from the module? That's cool. So is that something I should be able to access right here? — Try it. — No? Okay. Is it something that would be in the vars? — I've only ever written it. — Okay, no. Where would it be? Anybody? — If you put it in your module. — Just anywhere? — It should be a list. — Okay, so if I want to say that we can only import catify, do I do it like that, or would I use the object itself? Okay. Does the order matter? Does catify need to come first? Okay, sweet. Let's see. So if I, no wait, sorry, this is what would happen if I import from that. Okay, so let's try importing from catification. "import add" shouldn't work, right? Oh. Okay, all right, all right, all right.
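A small sketch of the dunder all behaviour being tried out here, written from its standard documented semantics rather than from the live session: an explicit "from module import name" still works regardless of dunder all, but a star import only exposes the listed names. The module and function names are illustrative.

```python
# catification.py (illustrative module name)
__all__ = ["catify"]              # a star import only exposes this name


def catify(func):
    """Pretend to make a function cat-themed."""
    return func


def add(spam, eggs):
    return spam + eggs

# In another module:
#   from catification import add   # still works: explicit imports ignore __all__
#   from catification import *     # defines catify, but not add
```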
So if I do from catification import star, then ad should be a-ha. But catify is a thing. Cool. Thanks. Anybody saw that? Anybody else want to share stuff? Yeah, and it'd be cool if whoever just mentioned that could like put a note in the wiki or add a little example. That'd be awesome. Sweet. Anybody else want to share something? Well, if you're looking for dundas that you can do crazy things with, the ones that- Aren't we all? Yeah, the ones you look for is dunder new. Dunder new. Yes, which if you want to do crazy things, normally you would return your object instance, but you can return whatever you want. So for instance, if you return the integer 42, instantiating your class will get you the integer 42. Sorry, if I run the- If you're in your dunder new in a class, if you put return 42 at the end, instantiating that class will get you the integer 42 and not an instance of that class. Okay, so if I, in my stringy int, if instead of an integer I want, whatever integer I wanted, I just want always the answer to life, the universe and everything, I could do dunder new, self, whatever, does it call with the value? Not self, you get class. Sorry, no, yeah, no. CLS. Yes. And you get the value also. Okay. So it'd be like this? Yes. And I could just return 42? Exactly. And now, if I run this, I should have, I had like one, should be 42. Exactly. Awesome. Very cool. Thank you. And if you do type on one little side, it's an int. So tricky. Thank you. Yeah, if you could put that in the repo, that'd be awesome. Thank you. Yeah, so something I actually find really useful is if you still live in Python 2 and you don't have the LRU cache, yeah, I live in Python 2, sorry about it. You can use the underscore underscore, sorry, the dunder missing. You use the dunder missing, you get the cache out of a dictionary in five lines of code. Sorry, could you say that again? Yeah. If you inherit from dictionary and you implement dunder missing, whenever an item is not found, you can specify a function. Okay. And that's a cool way to implement a cache. Awesome. So like, I imagine that would also be useful for things if you want like default values, like a default dict or something with that word. Yeah. Very cool. Dunder missing. Yeah, if you could add that too, that'd be awesome. You can implement method call and dunder call to call object as function. Right. Okay. So if I have like an integer, shouldn't usually be callable? Huh? You need to. It has to be after. Method new. Right. Let's replace it. Okay. And it takes what? Self, right? And any arguments? Can we like just take however many we want? Will that work? And I don't know. I'm just going to print like, yay dunders. And return, I don't know, 42. Okay. So now if I have one, okay, it's my number. It's still hopefully a stringent. Cool. But I should be able to call it. Yay dunders. Sweet. Dunder call. And you can implement method and dunder get, dunder set, dunder do, and dunder delete. Right. Right. So yeah, we saw like some of the getting and setting, but there's, there's a bunch of other dunders like delete and whatever, which you can do special cleanup code or whatnot that you need. Very cool stuff. Thank you. I don't know how we are for time. Are we? So actually the lunch break started. So yeah. All right. Well, thanks everybody for sharing. And yeah, if anybody wants to contribute to my little dunders repo, I'm hoping that it can be like a conversation starter. So go for it. Thank you. Thank you.
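Quick sketches of the other audience-suggested dunders, written from their standard documented behaviour rather than from the live session: dunder new may return anything at all, dunder missing on a dict subclass handles absent keys, and dunder call makes instances callable. The class names are made up for illustration.

```python
class AlwaysFortyTwo:
    def __new__(cls, value):
        # returning something that is not an instance of cls skips __init__
        return 42                # instantiating the class yields the int 42


class DefaultingDict(dict):
    def __missing__(self, key):
        # called by d[key] when the key is absent; a tiny cache/default-dict trick
        value = self[key] = "default for {}".format(key)
        return value


class Greeter:
    def __call__(self, *args):
        # lets instances be called like functions
        print("yay dunders")
        return 42


print(AlwaysFortyTwo(1), type(AlwaysFortyTwo(1)))   # 42 <class 'int'>
d = DefaultingDict()
print(d["anything"])                                # 'default for anything'
print(Greeter()())                                  # prints "yay dunders", then 42
```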
Anjana Vakil - Using and abusing Python’s double-underscore methods and attributes The curious Python methods and attributes surrounded by double underscores ('`__`') go by many names, including “special”, “dunder”, and “magic”. You probably use some of them, like `__init__`, every day. But that’s just the tip of the iceberg! In this talk, we’ll explore the weird and wonderful world of the double-underscore, and find out how dunders can be useful, silly, dangerous, and just fun! We’ll play pranks on Python’s builtin operators for arithmetic and comparison. We’ll make arbitrary objects behave like dictionaries and containers. We’ll reduce an object’s memory usage, and speed up tests for membership. We’ll even try some naughty function hacks that we should never use in real life! You'll get the most out of this talk if you're already comfortable writing object-oriented Python code. If you already use special dunder magic in your own code, that's excellent! You’ll have a chance to share your tips & tricks with the rest of the audience at the end of the talk. _Talk repo_:
10.5446/21105 (DOI)
So our next speaker is Ankit Bahuguna, and he'll be talking about query embeddings: web scale search powered by deep learning and Python. — Thanks a lot. I will be talking about query embeddings, which is a system we have developed at Cliqz. It uses deep learning, and the system is entirely built in Python. A bit about myself: I'm a software engineer in research at Cliqz, with a background in computer science, natural language processing and deep learning. We are building a web search engine which is part of a web browser, and the browser works on mobile too. The areas that interest me are NLP, information retrieval and deep learning, and I have also been a Mozilla Representative since 2012. About Cliqz: we are based in Munich and majority owned by Hubert Burda Media. We are an international team of 90 experts from 28 different countries, and we combine the power of data, search and browsers to redefine the browsing experience. Our website is cliqz.com, and you can check out our browsers there. So here I'm talking about search, so I'll start with it. When you open your web browser, what you usually do is go for a link or a search term. What the Cliqz experience gives you is a web browser with an address bar that is intelligent enough to take you directly to the site based on what your query is. Say you are searching for something like "python wiki", you will get the Python wiki link; if you want to search for weather in Bilbao, you will get the weather snippet, and interestingly I found out that on Monday, that's today, it's 41 degrees, so take care. And of course, if you want to search for news, you will get real-time news. So it's a combination of a lot of data built into a browser with the technology of search behind it, all three things combined. A bit of history about how traditional search works. Search is a very long-studied problem, and by search I mean information retrieval for web search. The traditional approach was to create a vector model of your documents and your query and then do a match at query time, and the aim of the whole process was to come up with the best URLs or the best documents for the user query. Over time, search engines evolved, the web became rich, a lot of media came in, and people expected more from the web. To come to our search story: search at Cliqz is based on matching the user query with a query in our index, and our index is based on query logs. So if you type "facebook" or "fb", it has to go to facebook.com. Given search and such an index, you can construct a much more meaningful search result experience for the user, because it's enriched by how many times people actually issue a query and land on the same page. What we aim to do is construct alternative queries given a user query. If we find it directly in the index, that's great; but if it's something different, something we have not seen before, we try to construct alternatives at runtime and search for those results in our index. Broadly, our index looks something like this: you have a query, and it has URL IDs, which means a URL ID is linked to some hash value, and that URL is the actual URL that people go to given the query. And there are frequency counts and so on, which allow us to make a prediction that a given page is the rightful page that the user actually intended.
To give an overview of the search problem itself in a bit more depth, the search problem can actually be seen as a two-step process. First one is recall, the second one is ranking. So given your index of like billions of pages, what you try to aim at is like get the most best set of candidate pages that you can say, okay, given a user query, that should correspond to them. So say I want to get the 10,000 pages, 10,000 URLs from my billions of pages which best fit the query. And then the problem comes up is the ranking problem. The ranking problem means given these 10,000 pages, what we want is give me the top 10, 100, three results. As you might know, that given any search engine result page, the second page is a dead page. So everybody concerns about the first page. So it's very important to have the top five or top three results as the best result for your query. And that's all we care about at Clix. At Clix, what we want is like, given a user query, we try to come up with three best results, some are two billion pages in the index. So where does deep learning come up? So what we aim at Clix is like we're trying a traditional method of search using fuzzy matching the words in the query to a document. But then we're also utilizing something which is a bit deeper and a bit different, which is using something called semantic vectors or distributed representation of words. What we actually try to do is we represent our queries as vectors. So a vector is like a fixed dimensional floating point list of numbers. And what we try to do is given a query and given a vector, that vector should semantically understand the meaning of the query. This particular thing is called distributed representation where the words which appear in the same context share semantic meaning. And the meaning of the query is defined by this vector. These query vectors are learned in an unsupervised yet supervised manner where we focus on the context of the words in the sentences or the queries and learn the same. And the area that we actually study this thing is called neural probabilistic language model. Similarity between these queries is measured as a cosine distance between two vectors. So if two vectors are close together in the vector space, so they are more similar. And hence what we do is we try to get the closest queries based on which are the closest vectors in space are to the user query vector. And this gives us a recall set or the first set that we can actually fetch from our index which most accurately correspond to our user query. So a simple example of to illustrate this is like say a user types a simple query like sims game PC download which is a game. What our system actually gives us is sort of a list of these queries along with their cosine distance to the query vector that user typed. So given the query sims game PC download, we get sort of a sorted list where the first one is like the most closest to sims game PC download. Bear in mind like it's a bit different to understand because you're not doing a word to word match but a vector to vector match. So the vector for the query sims game PC download is much closer to the download game PC sims. Now this is coming from our search back end which is a bug of words because we want to optimize the space as well. So eventually the vector comes out to be the same. And the values on the right are the cosine distances. So as we move down the cosine distance increases and we'll see like we'll start getting some a bit far off results. 
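A minimal sketch of the idea just described: score candidate index queries by cosine distance to the user's query vector and keep the closest ones. The vectors and query strings here are made up for illustration.

```python
import numpy as np


def cosine_distance(a, b):
    # 0 means identical direction; larger means less similar
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


rng = np.random.default_rng(0)
user_vector = rng.random(100)                       # the user's 100-d query vector
index_queries = {q: rng.random(100) for q in        # index-side query vectors
                 ["download game pc sims", "sims 3 download", "weather bilbao"]}

closest = sorted(index_queries,
                 key=lambda q: cosine_distance(user_vector, index_queries[q]))
print(closest[:2])   # the two nearest index queries for this user query
```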
So we are usually concerned with the top 50 closest queries that come through this system. Now a bit more about how this learning process works, and what we actually use in production: an unsupervised learning technique to learn these word representations. Effectively, given the continuous representation of a word, you would like the distance between two word vectors, c(w) minus c(w'), to reflect a meaningful similarity. So for example, if you take a vector like "king", subtract a vector like "man" and then add a vector like "woman", you probably get a vector which is close to the vector for "queen". The algorithm that does this is word2vec, and we learn these representations and the corresponding vectors. A bit more about word2vec: it was introduced by Mikolov in 2013, with two different models, the continuous bag-of-words representation and the continuous skip-gram model. Again the focus is on distributed representations that are learned by neural networks, and both models are trained using stochastic gradient descent and backpropagation. A more visual indication of how this works: in the CBOW, or continuous bag-of-words, model on the left we have a context of five words and we try to predict the center word. So given "the cat sat on the mat", the word "sat" has to be predicted from the other context words. The skip-gram model does the exact reverse: given the center word in the sentence or context window, you try to predict the surrounding words. Given these two models, you can define the vectors for each word you see as a lookup table, and you can learn them using stochastic gradient descent. I'll probably skip this slide because it has a lot of math in it, but still. What we try to optimize is a neural language model: it is trained on how often you see a particular word given its context, and how often you see a word outside its context. The best language model will say, given a certain sequence of words you will see this next word, and given a certain sequence of words you will not see that word; that's what the model actually learns. This is one example of how a traditional language model works. For "the cat sits on the mat", you try to predict the probability of "mat" coming after the sequence, over a certain vocabulary dictionary that you have. The only catch to worry about is that your vocabulary can be very, very large. You might have, say, seven to ten million words in your vocabulary, and you want to predict the probability of one single word across all of them. To avoid this, we use something called noise contrastive estimation: we don't test our word against the entire vocabulary. Instead we pick a set of five or ten noisy words. For the sequence "the cat sits on the mat", you are pretty sure that "mat" is the right word, but other words are not; say "the cat sits on the hair", or something like that. Those words will not be the exact sequences that you find in real life, and you can pick those noisy words at random from a uniform distribution and use them as your training examples.
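An illustrative sketch (not the production code) of how skip-gram training pairs and a random noise word could be generated along the lines just described, with a context window of one. The vocabulary and sentence are taken from the example in the talk; the sampling is simplified.

```python
import random

vocabulary = ["the", "quick", "brown", "fox", "jumped", "over", "lazy", "dog", "sheep"]
sentence = ["the", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog"]


def skipgram_pairs(tokens, window=1):
    # yield (target word, true context word) pairs
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                yield target, tokens[j]


for target, context in skipgram_pairs(sentence):
    noise = random.choice(vocabulary)        # one noisy word, e.g. "sheep"
    # training pushes the score of (target, context) towards 1
    # and the score of (target, noise) towards 0
    print(target, context, noise)
```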
So what the model effectively learns is: given a sequence, which word is the right one to come next, and which words are not. If the system makes this distinction over and over again, with millions of examples, and you train over enough iterations, you get a model that separates the positions of the right words from the positions of the bad words by a clear distance. Let's see how this works with an example. Say there is a document like "the quick brown fox jumped over the lazy dog", and we have a context window of size one. Given the first three words, "the quick brown", I have the center word "quick" and the surrounding words "the" and "brown". In a continuous bag-of-words model, what I want is: can you predict "quick" based on "the" and "brown"? That's just a very simple example, but in production we found that skip-gram does much better. So effectively we try to predict the context words from a target word: we predict "the" and "brown" from "quick". Given "quick", predict the probability of "the", and predict the probability of "brown". The objective function is defined over the entire data set. Our data set is built on a lot of Wikipedia data, a lot of query data, titles and descriptions that we have, and a lot of other textual data, to learn how queries are formed, how sentences are formed, and what the sequences of words look like, and we use that as the training data. Say at training time t we have a certain case: we have "quick" and "the", and our goal is to predict "the" from "quick". We select some noisy examples, say the number of noise words is one, and we pick "sheep"; "sheep" should not be part of this context. Next we compute a loss for this pair of observed and noisy examples, and we get this objective function, which is basically the log of the score. Given the correct piece of context, the pair "quick" and "the" should be given a score of one, and a pair like "quick" and "sheep" should get a score of zero. If we update the value of theta, which the score depends on, we can maximize this objective function as a log likelihood, and we can do gradient descent on top of it. So we perform an update on the embeddings, and we repeat this process over and over again for different examples over the entire corpus, and we come up with a lookup table of words and vectors. We can define the dimensionality of a vector; as I said on my slide, we use 100 dimensions to represent a word, and that works pretty well for us. So what do these word embeddings actually look like, or what have we actually learned? Something like this.
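For prototyping, word vectors like the ones just described can be trained with gensim's word2vec implementation (the speaker later recommends the original C tool for production, so treat this as an illustrative sketch only; the size parameter is called vector_size in gensim 4 and later). The corpus here is made up.

```python
from gensim.models import Word2Vec

sentences = [["sims", "game", "pc", "download"],
             ["download", "sims", "3", "for", "pc"],
             ["weather", "in", "bilbao"]]

# sg=1 selects the skip-gram model; sg=0 would be CBOW; negative=5 is negative sampling
model = Word2Vec(sentences, size=100, window=5, sg=1, min_count=1, negative=5)

vector = model.wv["sims"]              # a 100-dimensional vector for "sims"
print(model.wv.most_similar("pc"))     # words whose vectors are closest to "pc"
```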
So if you project these word vectors in space, what you find is that the offset between the vectors for "man" and "woman" is roughly the same as between "king" and "queen", and you find this not just for variation in gender but also for things like verb tense, walking and walked, swimming and swam, because you might have sentences where a person is walking or a person is running, "he walks" or "he runs", occurring in the same contexts, and this is what the model captures pretty nicely. Not just that: we also get other relational features, like countries and capitals, Spain and Madrid, Italy and Rome, Germany and Berlin. These are country-capital relationships. This is a projection onto two dimensions using t-SNE where, although it's a bit small, you can see some groups of words here at the bottom, and at the top things like the modal verbs may, should, would; over here comparatives like more and less, and some more adjective-like words. So in this projection you can see that the more semantically related words are actually closer together in vector space, and this is a very important property, because if you can leverage this and construct sentence or document representations, you will probably get similar documents close together in space as well, and that is what query embeddings addresses. The way we generate a query vector using these word vectors is this: for the same query, "sims game pc download", we have a vector for each of these words. We don't just use these word vectors as they are; we also get the term relevance, and term relevance for us is a somewhat custom process that we came up with, but what you end up with is a score for each term in the query. This tells us that "sims" is the most relevant word in the query, because it's the name identifier. Next, we use this term relevance together with the vectors to calculate a weighted average of the vectors. A weighted average means that, given the vectors of the different words and their weights, their term relevances, you do a numpy average and get an average representation of those words, and effectively that average representation is what we call the query vector. So given our vectors and the term relevance, we get this average representation, and it represents our query vector: at the end, "sims game pc download" is nothing but this 100-dimensional vector, and that is what we use as our query vector. A bit about term relevance: we have two different modes of term relevance. Usually you would use the frequency of the words, but that doesn't scale very well, or you might use something like TF-IDF or similar representations. What we use is something we call TF5: given the queries linked to a page, how many times a term has occurred in the top five queries for that page. Given the data that we have, that's a much better indication: from the word statistics and these frequencies we get something like an absolute term relevance, and the relative one is a normalization over all the pages that we have in our index.
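A minimal sketch of building the query vector as a term-relevance-weighted average of word vectors, as just described; the relevance scores and vectors here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
word_vectors = {w: rng.random(100) for w in ["sims", "game", "pc", "download"]}
term_relevance = {"sims": 0.6, "game": 0.1, "pc": 0.1, "download": 0.2}

query = ["sims", "game", "pc", "download"]
query_vector = np.average([word_vectors[w] for w in query],
                          axis=0,
                          weights=[term_relevance[w] for w in query])
print(query_vector.shape)   # (100,) -- the whole query is now a single 100-d vector
```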
What we found is that if you normalize your scores across all the pages of your index, the vectors are slightly better and give slightly better results. These are all data dependent; we compute them on the fly each time we refresh our index. For example, it looks something like this: for each word you have features like frequency, document frequency, query frequency and other statistics, and similarly for all the other words as well. So what we create now is a query vector index: given the traditional index which has all the documents, we have all the queries and their vectors, and we do a query vector lookup. We cannot do this for all the queries, because there are just too many queries, but what we found is that for every page in our index we can pick the top five queries which effectively represent the page, we call them top queries, and we can get this data from the page models. Roughly, we end up with around 465 million queries which represent all the pages in our index, and we learn a query vector for each one of them. If you just dump the whole system to disk, it's around 700 gigs. The problem we have now is: how do we get similar queries out of these 465 million queries? Given a user query, find me the closest 50 queries from these 465 million. How do we find the closest queries? Should we use brute force? It's far too slow. We cannot use hashing techniques that effectively, because they are not very accurate for these vectors; the vectors are semantic, and even a small loss in precision could lead to weird results. So what the solution required was the application of a cosine similarity metric that somehow scales to 465 million queries and takes 10 milliseconds or less. The answer we came up with was approximate nearest neighbor vector models, and they were pretty helpful for us. The library that we use is called Annoy; it is a C++ library with a Python wrapper that builds approximate nearest neighbor models for all the query vectors that we have, and it is used in production at Spotify and now at Cliqz as well. We could train on all 465 million queries at once, but it's too slow, because it is memory intensive. So we don't train them all together; we have a cluster where we host these models along with our search index, and we train them as 10 models with around 46 million queries each, and we build each with 10 trees. What these trees actually mean I'll explain next. The size of the model is around 27 gigs per shard, which is around 270 gigs if you scale it to 10 models, and everything is stored in RAM, because for us the most important thing is latency. Given a search, you want the results to come back quickly; later I'll show a demo of how this is used in production. At runtime, you query all these 10 shards simultaneously and then sort the results by the cosine distances that you get. Different parts of your shards might have different closest queries, so eventually what you want is the best set of queries which most closely match the user query, and the nice cutoff we found was around 50 nearest queries; 50 is a heuristic number.
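A hedged sketch of building and querying one Annoy shard of query vectors along the lines described; the sizes here are tiny, whereas in production each shard held tens of millions of vectors and lived in RAM.

```python
import random
from annoy import AnnoyIndex

dim = 100
shard = AnnoyIndex(dim, "angular")            # angular distance ~ cosine distance
for item_id in range(1000):                   # item_id stands in for a query id
    shard.add_item(item_id, [random.random() for _ in range(dim)])
shard.build(10)                               # a forest of 10 trees
shard.save("queries_shard_0.ann")

user_vector = [random.random() for _ in range(dim)]
ids, distances = shard.get_nns_by_vector(user_vector, 50, include_distances=True)
print(ids[:5], distances[:5])                 # nearest query ids for this shard
```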
It turns out to work very well for the system: it doesn't really decrease our recall, and it doesn't hurt latency too much, which matters because this step has a latency cost as well. First I want to explain how we actually use Annoy and how Annoy works. It's one of the nice libraries you can use if you're taking a vector-based approach to recall or ranking, and it tries to find the nearest points to any query point in sublinear time. You cannot compare one by one; it's not worth it. What you want is to get those closest queries in something like log n time, and the best data structure for that is a tree. So all your query vectors are points, each point represents a single query, and what you try to find is: given a certain point, a user query vector, some point in this space, find me the nearest ones. To build that tree, you split the space recursively: you take two points at random and split the space between them, you do it again, and you get something like a tree, a segmentation of the points into different parts of the tree. You keep splitting and you end up with a big binary tree. The nice property of this binary tree is that points that are close to each other in space are more likely to be close to each other in the tree itself, so if you navigate down to a node, that whole branch will be composed of nodes that are similar in the vector space, and this is a very important feature. So how do we search for a point in the tree and the splits that we have built? Say the red X is our user query vector, and we want to find the nearest vectors to this particular vector and the queries related to them. You search for the point by going down the path in the binary tree, and you end up with, say, these seven neighbors, and you use a cosine metric to see how close each one is. If the distance is between zero and 0.5, it's much, much closer; if it's more than one, it's far, because this cosine-based distance takes values between zero and two. So you can decide how close your vectors are. But the problem here is that you only see seven neighbors. What if we want more neighbors, more than seven closest queries? So we don't just navigate through one branch of the tree; we can also navigate into the second branch, and this is maintained in a sort of priority queue, so we can traverse both parts of the tree and get the closest vectors. You don't only look at the light blue part but also at the slightly darker blue part; you see both sides of the tree, because that's where the split occurs, and you can find that both of these areas in the space are close to the user vector. But sometimes, because the splits were done randomly, you can miss out on some good zones, because you just split across two random points. So what you do to
minimize this is to train a forest of trees, and it looks something like this: you don't just train on one sequence of splits, you randomize the splits across, say, ten trees. Effectively your model learns these ten configurations at once and searches them in parallel at query time, and that gives you a pretty good representation, because when you merge them you get good similarity between queries. One missing feature in Annoy, or maybe it's a feature and not a bug, is that it doesn't let you store string values; it only lets you store indexes. So for a query like "sims game pc download", you give it a unique index, say five or one, and that index is stored with the vector and learned in the model. When you query Annoy, you get back the indexes of everything that is close to your vector. What we have at Cliqz is a system called keyvi, which is a key-value index that is also responsible for our entire search index. We found it better than Redis or anything comparable in terms of reads and maintainability; we developed it in house, it's written in C++ with Python wrappers again, and it stores the index-to-query mapping. So what you effectively have is: given a user query, you get a query vector, you search within the Annoy models for the closest query vectors, you get indexes for these, then you query the keyvi index, you get all the queries, and effectively you can fetch the pages for all the queries that are closest to the user query. This is how we improve our recall, and the results are pretty amazing, in the sense that we get a much richer set of candidate pages after the first fetching step, with a higher possibility of the expected pages among them. The reason is that we are now going beyond synonyms and simple fuzzy matching, and actually using vectors that were learned semantically. It screws up sometimes, but most of the time there is a definite improvement, because the model always learns the words that appear in similar contexts, and that's a very important feature. Queries are now matched in real time using cosine similarity between query vectors, plus the classical information retrieval techniques that we use at Cliqz. Overall there is a recall improvement of around 5 to 7% over the previous release; that's the improvement we measure on internal tests. The translated improvement in the final top three results is around 1%, which gives us a clear indication of where these vectors are actually useful. And the system triggers only for those queries which we have never seen before. That's a very important point, because for seen queries like "fb" or "google" you land on a certain page and you are definitely sure about it, but for queries which have not been seen before, which are new to us and not in the index, you have to go beyond the traditional techniques, and this one technique helps a lot. Before I conclude, I wanted to show what the browser actually looks like. This is the Cliqz browser, and this is the search page, and we have this snippet which comes up; the idea of this was to remove the whole step of the search engine result page, so you can get directly to the page you want.
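The whole lookup pipeline just described, as a pseudocode-level sketch: query vector, nearest ids from every Annoy shard, id-to-query lookup in the key-value index, then candidate pages. The function and variable names are placeholders, not the actual Cliqz API.

```python
def candidate_pages(query_vector, annoy_shards, id_to_query, query_to_pages, top_n=50):
    # ask every shard for its nearest query ids, then merge by cosine distance
    hits = []
    for shard in annoy_shards:
        ids, dists = shard.get_nns_by_vector(query_vector, top_n,
                                             include_distances=True)
        hits.extend(zip(ids, dists))
    hits.sort(key=lambda pair: pair[1])

    # map ids back to query strings via the key-value index ...
    closest_queries = [id_to_query[i] for i, _ in hits[:top_n]]

    # ... and collect the pages recorded in the index for those queries
    pages = set()
    for q in closest_queries:
        pages.update(query_to_pages.get(q, []))
    return pages
```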
The libraries: Annoy from Spotify, which is available on GitHub, and keyvi, which is Cliqz open source and also on GitHub; you'll find it's pretty useful and a pretty active project as well. Word2vec can be trained using gensim if you want to do a prototype, but I would recommend using the original C code, because it's a bit more optimized; we found there are certain variations in the models that come out, depending on the implementation. There are other Cliqz open source projects that you can contribute to. If you want to find the slides, they are on Speaker Deck, at bit.ly/QEPython. Before I conclude I'll just say that we are still working on this system. We have the first version of it ready, but we are looking at other approaches from deep learning, like using long short-term memory networks. The only downside of that approach is that most of these user queries are keyword based: you don't usually find people typing "okay, what is the height of the Statue of Liberty", they'll probably say "statue of liberty height", and the kind of linguistic relationships that LSTMs may capture well are more complicated, while this system is simple enough to still give you pretty good results. So we are trying to bring this new metric into ranking; we are trying to use query-to-page similarity using document vectors, where again we are looking at an LSTM-based model or a paragraph vectors model; and we are also trying to improve coverage for pages which have never been queried before. We have a long list of such pages, and we try to find out the best way to represent those pages, either using vectors or a traditional n-grams approach or something like that. Last but not least, I'll say thank you, and I'll finish with this quote given by John Rupert Firth in 1957, where he said "you shall know a word by the company it keeps", and Mikolov developed a model using exactly this contextual approach to words, and it actually helped us give good results. So thank you. Any questions? Yeah.
So one of the reasons we had was like we wanted like a unified we so we tried a lot of these key values to ourselves we tried redis we tried like a traditional database we tried elastic search but what we found is like our needs are a bit different in the sense that we sometimes have a vector index where we need like our values should be a list of vectors sometimes it is just strings sometimes they are repeated strings where like you have the same JSON data structure again and again so we can actually optimize it more if we can write those parts of the code ourselves we started by doing that so I mean Kiwi is a much bigger project here and I'm not really the expert in it but what I can say is like it has a lot of features in like you can actually like compress your keys you can do a message pack sort of compression using said level snappy and that gives you like a much cohesive vector it's faster to index it's faster to read and it's scalable in terms that we don't actually have to put this in memory we can actually still have it in disk and do a memory map so you can still have like a lots of data that you can train what we actually wanted in our use case was we wanted reads to be optimized because we don't have writes at all we can compile the index at once and then what we want at run times like use a query and give data from the index for that Kiwi works pretty nicely for us. You were already talking about having no writes on the database. You were already talking about having no writes on the database. I was wondering how you handle having new data, new queries, new data to train your embeddings or embeddings I would say nearest neighbor index because from what I know there are still no implementations of nearest neighbors that can just update the index. Yeah, so it's true. So what we do is like we have a release cycle where we compile each NOI index every month and we also get new queries and new query vectors for this. So it's not like a one-time system but it's true. Say immediately if tomorrow I want to include a set of results which are like new queries for tomorrow, I cannot do that. But to address the same issue we have news. So the news vertical actually handles this. So for the most recent part of anything that is trending right now, you'll have in the news section. So given the concepts, you usually find say Pokemon Go was already available on Wikipedia before its release. So you actually have these concepts which are already learned from Wikipedia later and that's what we use. So you can always learn the concept for the new words like some XYZ, GenX word which comes GenY word that comes up like tomorrow. You probably not have it but it's a very hard problem anyway. Yeah. Anyone else? Okay, let's give a big hand of applause.
Ankit Bahuguna - Query Embeddings: Web Scale Search powered by Deep Learning and Python A web search engine allows a user to type few words of query and it presents list of potential relevant results within fraction of a second. Traditionally, keywords in the user query were fuzzy-matched in realtime with the keywords within different pages of the index and they didn't really focus on understanding meaning of query. Recently, Deep Learning + NLP techniques try to _represent sentences or documents as fixed dimensional vectors in high dimensional space. These special vectors inherit semantics of the document. Query embeddings is an unsupervised deep learning based system, built using Python, Word2Vec, Annoy and Keyvi which recognizes similarity between queries and their vectors for a web scale search engine within Cliqz browser. The goal is to describe how query embeddings contribute to our existing python search stack at scale and latency issues prevailing in real time search system. Also is a preview of separate vector index for queries, utilized by retrieval system at runtime via ANNs to get closest queries to user query, which is one of the many key components of our search stack. Prerequisites: Basic experience in NLP, ML, Deep Learning, Web search and Vector Algebra. Libraries: Annoy.
10.5446/21108 (DOI)
So we can start the next session. Welcome back everyone for our next talk. Anton is going to be talking about scraping the web. So let's help welcome. — Thank you Fabio for the introduction. Beyond scraping, that is the main title, and of course what is beyond scraping depends on which side you're coming from. If you look at how scraping was 20 years ago, it was very easy: the way that the web was built up for the user could be easily retrieved in an automated fashion. But nowadays that's not possible anymore. You have JavaScript to make the experience much nicer for the end user, and if the data is presented for the end user but not necessarily in a way that lends itself to automated downloading, it can be very hard to get something done. Before I start with the proper talk, I would like to see some hands. Who's used urllib from the standard library? Who's used requests? Maybe use the other hand. Who's used Beautiful Soup, preferably version 4? Who's used Selenium? Slightly less, but still good. Who's used ZeroMQ? That's interesting. And who's used pyvirtualdisplay? Okay, good, still some people. This is all the exercise you get, unless you want to leave early, of course. The talk is not very technical; you will not see any Python code. But these are the buzzwords: if you glue all this together in the proper way, with the right idea behind it, you'll be able to scrape current websites, and I would say 99% of them without too much trouble. Some background on me. People mostly don't know me beyond the fact that I fold the t-shirts at the Python conference. By education, I'm a computational linguist. Unfortunately, I couldn't do anything with Python during that time, because at the time I was writing my thesis, Guido was writing the first Python interpreter. After that, or partly during that, I was doing 3D and 2D computer graphics, and there I actually missed an opportunity in '93 to start using Python. One of the students from the University of Amsterdam who started working for me introduced me to the language, but he already had a C program with two interpreted languages hanging off it, and I didn't want to have a third one in that program. But I liked Python, and I actually liked it because of the indentation. A lot of people don't understand that when they first look at Python, but I came from using transputers and Occam 2, which used indentation and folding editors, so that was fine for me. I did some things with Python, and in 1998 I finally got an opportunity to do something commercial, in Python 1.5.2 on Windows with Tcl/Tk as the graphical user interface. Some people might know me from the C implementation of the ordered dictionary by Foord and Larosa, a very complete ordered dictionary, much more complete than the one in the standard library; I re-implemented that in C back in 2007, and that was my first experience with making Python packages. More recently, I picked up the PyYAML parser, which seemed to be kind of dead, and made it into a YAML 1.2 compatible parser. I started that because I found it kind of strange to have a human readable data format that would throw away the comments when you read it in and wrote it back out. So it's a round-tripping parser. It now does all kinds of extra things, and those are available from PyPI as packages. So scraping the web, what is the actual problem? You want to download information from all kinds of websites, but sometimes you also want to stay in some state.
You want to interact with a website and change the state, not necessarily download the data. You already know what is there, but you want to increase your score somewhere, or you want to make sure that somebody knows that you visited, although you're actually on holiday and lying on the beach and didn't want to start up your browser. So before I want to go into detail, let's briefly look at web pages, so you know what I use for terminology. For me, a web page, coarsely, is a structure of text, a tree structure. The text can have attributes, and the text can have data. So if you look at this small example of an HTML file, the tree structure is shown by the indentation. If you use a debugger within your browser, it often indents that for you to actually see what a structure is. Of course, you don't have to write HTML like that. You can write it all behind each other, which is difficult to look at what a structure is. If you look at the ATAC there, the third from the second from the bottom, it has three attributes, href, id, and class, and it has some data on the other side. Depending on what kind of library you use to go into HTML, you can also say that the other side is data that is associated with body. It sometimes helps to have multiple things together, especially if you have things like italics, you might not just want to have a superior tag, and pick up the data from that tag and it often automatically does away with all the intermediate tags and just puts together the data that you have. So a web page maps some URL to some data, and that's often unique, but it might not be unique. You might get something different for URL. We'll look at that later. Looking at it right now, the old version of changing data is like you use some form data, you submit a form and depending on how you filled out the form, you'll get a different result on the page that you go to, although it's the same URL. What also happens is if you have some state in a cookie that might influence what kind of data you get given a specific URL, and nowadays it's depending a lot on JavaScript, what you actually get in there. You have websites that have only one URL, it never changes, but all the time you get different data depending on your state in the JavaScript that is executed on that single page. Brief interlude. There's different ways of developing software, and I just want to touch on that so you understand why I did the way I did. You can use a complete framework that covers anything that you want to do and learn it, and then implement what you...the little part that you want to do within that framework, using configuration or writing some code depending on the framework. There's some frameworks for doing web backend development, there's also more framework-like tools that you can use for scraping. The other way is going from the bottom using some existing building blocks and gluing them together with your own code. If you develop like I do for some customer who is interested in getting some results, a framework is not necessarily the best way to go. 
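Although the talk itself shows no code, here is a small illustration, on a made-up page, of the tree structure, attributes and data described above, using BeautifulSoup 4.

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <p>Some <i>italic</i> text</p>
  <a href="https://example.org" id="link1" class="external">example data</a>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")
a = soup.find("a")
print(a["href"], a["id"], a["class"])   # the tag's attributes
print(a.get_text())                     # the data: 'example data'
print(soup.p.get_text())                # intermediate tags collapsed: 'Some italic text'
```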
If the framework exactly does what you need to do, and you don't have to change the framework itself, then you might better go with the framework, but if you need to go and dive into the framework and change the 10% of code that you use there, you first have to find the 10% do the changes, and the biggest problem exists in that after running the code for a year and not looking at it, you have completely forgotten about how the framework works, so you have a big problem updating your own coding, understanding your own changes. If you glue blocks together and the blocks essentially do what you want, you only have to look at your own glue, that's the code you wrote yourself in the first place, and after a year you're much more likely to understand what you did a year ago. You might even have to start from scratch, you might do it in the same way. So, I'm going to present something like gluing the building blocks together that I showed you earlier on, or that I had you raise the hands for. Simple websites, those are the ones you can actually access by using URL at 2 and request. Sometimes you want to use form data to get actually to the data that you need, and especially the request help you do that if you can get the data that you want with your URL to and haven't used request, I recommended you actually look at it. And these libraries, they do some basic stuff for you like redirection. Actually doing things like handing over cookies is more complex, and if there's some JavaScript on the side, things really get bad because you have to look at what does the JavaScript do, how can I do that by hand, can I get data that the JavaScripts do with some URL request, and then insert it in the page or directly use it. Cookies are used to keep state, and I specifically mentioned them because they often use to preserve your authentication information. Data that is valuable to get off the web might not be available for free, so it's not like you get some URL and you get to the data, you might have to log in first, and then be able to proceed to getting the data. The authentication, originally there was some building or there is still some building authentication in your web browser. It seldom used, it has a very coarse pop-up window where you put like a username and a password. More often, there's some form you have to fill out on your web page, and that form, the information from that form on the back end creates some cookie and that is used to keep state. Over the last, I'm not sure exactly how much, seven years or so, OpenID has come up which allows you as a web developer to concentrate on getting the information across that you want and not have to write too much of the login code, but it has an advantage if you have your website redirect to Yahoo or to Google in that you, if necessary, have some more physical, or you can physically trace the person who logged in because nowadays Google and Yahoo, if you set up a new account, will ask you for a telephone number where they can send some pin code that you have to type in, for instance in Germany where I'm living now, it's not possible to get a telephone account without showing a passport. So some backtrace that is being done, and so that might be convenience, but it might also be that people want to know that you're a real person and at least have a real, there's some real telephone associated with something, with the person that actually accesses the site. If a site has JavaScript, then URL, lib2 and requests are of little use. 
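For the simple, JavaScript-free case just described, a minimal sketch of keeping state with `requests` might look like the following. The URL and form field names are invented for illustration; only the pattern (a `Session` that keeps the authentication cookie between calls) is the point.

```python
import requests

session = requests.Session()

# Submit the (hypothetical) login form; the session stores any cookies the
# server sets, so the authenticated state is preserved for later requests.
resp = session.post(
    "https://example.com/login",
    data={"username": "me", "password": "secret"},
)
resp.raise_for_status()

# A later request on the same session reuses those cookies automatically.
page = session.get("https://example.com/reports?since=2016-07-01")
print(page.status_code, len(page.text))
```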
I have done, when things came up with JavaScript, I have done these parsing of what JavaScript does by hand, but you have to read the JavaScript, it's often difficult to trace what it actually does. If you have a browser and compare what you get with URL, lib2, what you see in your browser is normally different, of course like you switch off JavaScript in your browser, and that is often a good first indication, like can I easily scrape the website, or do I have to use more advanced tools to get to the data that I want. What JavaScript does, you probably all know, is it can update parts of the HTML tree, and by requesting additional data from the backend. So why do we do that, or why do the web developers do that, is primarily because it's a nicer user experience, and if you don't have to update all of the website, you get quicker updates, which adds to the nicer user experience and reduces the bandwidth that you need. JavaScript has several downsides from a scraping perspective, is that you don't get too easily to the website, and that was too fast. There's also a big problem, is that with JavaScript you essentially don't know when the page is finished. If you do a URL at two requests or to a page, it comes back and you know you have all the data. If you have a page that has JavaScript, you have to wait till it's done processing, but it might never be done processing. It might wait in a loop, or it might have some channel open for additional data to come from the backend, and you never know what it stops. So if you can see something in your browser, you probably can use Selenium to start that browser and then talk to the browser from Python and get your material out. So you just use Selenium like you would using the mouse. You drive the pages and you click on things if that's necessary and fill things out. Selenium originally was used for testing, or I use it originally for testing, and that is easy. Why is it easy? Because if you test something, you made a page, and you just have to see if the page actually is what you expect it to be. You already know the structure, you know what IDs you've used, what classes you've used, you know how to get to the particular elements in the HTML tree. But the advantage is if you use Selenium, that there's never a discrepancy because you're actually using a browser between what you see talking to the Selenium open browser and what a normal user will see, so in principle you can get to anything that a normal user gets. A nice advantage of Selenium is also because the browser is open, if the program is not access it yet because you have a sleep loop or you're waiting for some input, then you can just start the debugger and you can see what the page looks like, the built-in debugger or firebug, whatever works for you. But the big important thing is that the program has to run. As soon as your program stops, Selenium closes down and it closes down to your browser and you will not be able to see what went wrong because if something went wrong and you try to access an element that is there, not there in the HTML tree, you will crash, your program might crash depending of course on how you write it and any useful information that you could get from the browser is gone. You would have to start up a browser externally, go to the page, actually look at what is the structure, what did I expect? Oh, there's a new element there, they changed the back end and try to get these things. 
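A small sketch of driving a real browser with Selenium, in the spirit of what is described above. The URL and element IDs are hypothetical; the explicit wait shows the safer alternative to a fixed sleep, since with JavaScript you never know when the page is "finished".

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()          # opens a real browser window
driver.get("https://example.com/")

# Click the (hypothetical) login button, just like a user would.
driver.find_element(By.ID, "login-button").click()

# Wait until the element we actually care about has been added by JavaScript,
# instead of sleeping a fixed number of seconds and hoping it is there.
table = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "results-table"))
)
print(table.text)

driver.quit()   # as noted above: once the program stops, the browser is gone
```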
So if you use Selenium, you can do a superset of the URL, the Qrequest thing that you can do because of all the JavaScript that is handled correctly. And there is a main, there's two main differences, but one of the differences is that you open a browser and if you use your URL, you don't open a browser, you can use URL or requests easily from a cron job on a headless server without any problems. That is not possible with Selenium without doing some extra stuff. Selenium opens the browser and the browser needs the window, so you need a desktop. So let's look at these, you know, some more of the problems. I already mentioned this, you're never sure where the data is there. The page loads, the JavaScript is started, the JavaScript has to, has some special function to actually wait till the complete page is loaded before it starts executing and you have no clue when it stopped executing. Sometimes you just wait for five seconds because you know in normal situations that things will be there, but much more safe is to check if the particular piece of data that you are interested in actually got loaded, but if you have a table of elements, like there might be three elements already loaded and you don't know how many are going to be there. Is it done loading or not? So the second interlude, we saw that the web page has a structure and there's different ways of getting to a particular piece of data on that web page that you might want to extract. The things you probably want to extract is either data or some attribute value, a URL to a PDF file or to some other page. You can get at that depending on how the web page is built up by using the ID. The ID should be unique, although I've seen several pages, especially generated by Microsoft's CMS systems, that had reused the same ID on the same page. At that point I decided not to use the ID because I don't know if the browser and using beautiful soup will actually, the browser might take the first ID and beautiful soup to second it is like, let's not use that. Depending on how the web page is structured, you can search by class. That is, of course, if something is colored in a specific way and the coloring is done on a specific class, you can get one item, but it's not necessarily like always the case that these classes are not reused or the actual positions in the web page that you're looking at. You can programmatically walk over the trees. On the top I have HTML and then I have the body and then I go down, down, down. That's not particularly fast. And there's something called X-Bat, which if you haven't used it yourself, it's more or less like a regular expression to get to a particular piece of data based on the tag names and some attributes. X-Bat is not very complicated, but if you don't use it on a daily basis, it's kind of hard to remember how to do things. There's a better reusable option that I tend to use and that's the CSS Select. It's about, it's not as powerful, I think, as X-Bat, but it's powerful enough for all of my purposes. It looks like, for instance, this here, it says, get any URL that is an HTTPS URL on some site that comes, but it might be longer. The carrot actually makes sure that it only has to start with that. So the href of an HA element has to have these start with this string. And then the A has to be after a div element that has import as a class. And there's all kinds of rules like this, this kind of thing. I think Selenium might not support this, but this is a beautiful soup, as far as I know it does. 
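To make the CSS-select idea concrete, here is a small BeautifulSoup 4 sketch using `soup.select` with a selector shaped like the one discussed above (an `<a>` whose `href` starts with a given string, below a `div` with a particular class). The HTML snippet is invented for the example.

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <div class="import">
    <a href="https://example.com/files/report.pdf" id="r1">report</a>
    <a href="http://other.example.org/">elsewhere</a>
  </div>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# ^= means "attribute value starts with"; only the first link matches.
for a in soup.select('div.import a[href^="https://example.com"]'):
    print(a.get("href"), a.get_text())   # attribute value and tag data
```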
And there's CSS, allows you to get to particular elements. At this point, like you have the A element, and you can, as soon as you point to the A element, you can get, if you're interested in that, the full URL that is the href attribute. CSS Select has my preference over X, but because I can also use it when I make a website in using the CSS files to actually determine the look and feel of the site. But like I said, there are restrictions that you have to be aware of. Both Selenium and beautiful soup don't implement CSS selections as complete as your browser does. So what is a typical Selenium session? Before we go into how to do it differently, you open a browser and go to Senior URL. You click the login button. We assume that you have to authenticate. You wait until the redirection to the open ID provider site is reached. You provide your credentials. This is, of course, a whole subject on itself. How do you automatically provide credentials? You don't want to have everybody read your login name and password. There's a few things, but one of the simpler ones is if you make a sub directory in the SSH directory, if you're running Linux that has already the restrictions and our checked restrictions on accessibility only by the owner of the files. Then, if your credentials, you wait until you get back to the request page after the open ID, open ID session has notified your website that everything is okay. Then you fill out some search criteria to restrict the new or look for new data that has happened or has been added since the last time you checked. Then you might get a table or a list of items. You click on one of these references in that table. Then you're finally there. You might be on the final page and get the data from there, you're extracting from the HTML or you find a link. The link might be to some files, some PDF file, or some other file. The main problem with this is that debugging is very time consuming. Every time you log in and you have to wait, and it's like you're not talking about seconds. In the end, your program doesn't exactly know how to analyze the structure of the last page, where you actually retrieve the file information or the textual data. Then you have to restart your program and it has to log in again. We're talking about tens of seconds, if not a minute, before you can get to where you want. If you have a client waiting, it's like, oh, your software is not working anymore. This is kind of bad. So how can we improve on that that you don't have to restart Selenium every time? There's probably several ways, but the way I solve this is going into a client-server architecture, where the server talks with Selenium and my client can just crash, or can be restarted and continuing where I left off. The server keeps the Selenium session open and that keeps the browser open, even if the client crashes. To do that, you need some protocols. You think about how do I set it up? It doesn't have to be very sophisticated. You get data to the server, which is essentially requests. You get data from the server to the client for analysis and knowing what state the program is, what the website is in, so you can take appropriate action or rewrite your client program to take other appropriate action. Originally, when I set this up a couple of years ago, I thought about, oh, I'll write some files with increasing file name numbers, and the server will just look at the directory and I'll get stuff from that. 
But then I looked at zero MQ and it actually allows you to do these kind of things pretty easily. You have to have a many to one, among other things, allows you to have a many to one connection between many clients and one server. It also allows you to have multiple threads within your client and still have one server open. Using zero MQ, it's very, it's trivial to get the server side on a different machine. You're using port numbers and specify with machine things are running on if they're not on local host. Zero Q, not by default, but it allows sending unicode-based exchanges and it is especially easy to get data. You might not use like special characters in your protocol, but on your website that you download, you're almost certain at some point to find non-Ascii characters and you have to deal with those, so you might as well set up the whole thing using unicode. So if you look at the thing that we did before, the session of getting to some data, if you have a client-server-based solution, then the thing looks slightly different. You open the browser, but only if it's not already opened. You click on the login button, but only if you're not logged in yet. If you're not logged in yet, but you're at the open ID side, you don't have to go to the open ID side, et cetera, et cetera. You don't have to do things that are already done and you just have to pick up where you left off last time and you have to check for those things. So it might just be if the final page with the data has changed, that you don't do any of these initial things, you just check if they're done and then you directly get your data. So you turn around time and start your client program, it goes down from 10 seconds to a minute to part of a second, and then you have your data, much more easy to debug. So debugging comes very fast, I really said that. So if you define a protocol, what do you need? Well, the protocol sends some command with some parameters and gets a result back. So we look at what kind of commands do we need and what kind of parameters do these commands have. There's only very few of them, so please stay with me. You have to be able to open a window and I use a specific window ID for that so I can open multiple windows on the server side. If you don't do that, you essentially have only one window to work with and it's very difficult to do many to one or have multiple clients running because they would be competing for the same window to do something. Using that window ID, you can say go to some URL and the page will show up in the web browser that Selenium has opened in the meantime. The next protocol thing that you need is select some specific item based on an item ID. The item ID you can reuse again on a specific page with a window ID. And then you want to interact with the specific item based on its ID. You might want to click on it to have a radio button clicked or to go to some specific link. Clear input or text area, there might already be something where you want to write. That's the next thing that you want to do. So you might want to clear out the old password that is incorrect and give the new password. And then very important, return some HTML starting with a particular ID. You can of course get the complete HTML page, but it's inefficient. You often already know like, oh, I'm only interested in this table. You selected the table using Selenium and then you get the whole table back. And the other thing that is almost necessary to have is like what is the current URL that I'm looking at? 
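As a rough illustration of this client/server split, here is a sketch using pyzmq REQ/REP sockets and JSON messages. The command names loosely mirror the protocol listed above (open a window, go to a URL, ask for the current URL), but the port, the message format and the way Selenium is driven are all assumptions for the example, not the actual implementation.

```python
# Server program: keeps the Selenium browser(s) alive between client runs.
import json
import zmq
from selenium import webdriver

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

windows = {}                                  # window id -> webdriver instance

while True:
    request = json.loads(socket.recv_string())
    cmd, win = request["cmd"], request.get("window", "main")
    if cmd == "open":
        windows.setdefault(win, webdriver.Firefox())
        reply = {"ok": True}
    elif cmd == "goto":
        windows[win].get(request["url"])
        reply = {"ok": True, "url": windows[win].current_url}
    elif cmd == "current_url":
        reply = {"ok": True, "url": windows[win].current_url}
    else:
        reply = {"ok": False, "error": "unknown command"}
    socket.send_string(json.dumps(reply))
```

```python
# Client program: can crash and be restarted without losing the browser state.
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")

socket.send_string(json.dumps({"cmd": "open", "window": "main"}))
print(json.loads(socket.recv_string()))
socket.send_string(json.dumps({"cmd": "goto", "window": "main",
                               "url": "https://example.com/"}))
print(json.loads(socket.recv_string()))
```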
Because if you go to an open ID page and you say click somewhere, you need to know that it actually gets back to your original site to continue working. So you want to check, want to be able to check, have the client check, ask the server, what is the current URL that we're looking at? You can extend this protocol with whatever makes things more efficient. This is essentially where I stopped a year and a half ago after adding a few things. It might be just be more, it might be more efficient to do things on the client's side than push them to the server. So you get the HTML back. You need to do an analysis of that. I used beautiful soup for that. It's faster than going over three in Selenium and trying to get individual items. So of course not useful if you have to actually click on the items, then you still have to do it on the server side. Like I already indicated, it has CSS select support. There's one cave-out though. You get a piece of an HTML page back and beautiful soup wants to have a whole HTML page. Put this in this string between the curly braces and then you can actually hand it over. So the first problem that I solved with the client's server architecture is that your client can crash and you don't have to start from scratch. But the whole thing introduced the problem is that you have to have a desktop where you actually start the browser. And if you want to run something on a headless server or you don't want to at some point in time have a browser start while you're typing in some email, there's a solution using Py Virtual Display. It creates a display where you can use to start the browser. You will actually not see the display. For debugging purposes, you can still get at it if you start a VNC session. So what I normally do is I don't use the VNC backer or the Py Virtual Display backer while I'm developing and then when it starts running, it's fine and if my client crashes anyway, I'll use VNC to connect to the Py Virtual Display startup. And then she's like, oh, the browser stopped because of whatever reason. Sometimes you get like stupid things like your website requires you to change your password every six months and you haven't done that. And of course, like it goes to a completely different page than you expected because you never programmed it for that. There's different ways of extending this. What I already have done is this, which is the advertisements. I use the Firefox browser often in the back end by using some configuration that of course loads pages that use it as much faster. What doesn't work with Selenium but the client's server architecture is capable of is using the Tor network by starting Firefox with its own extensions that you can drive it with. It's slightly more, slightly less powerful than Selenium, but it's for most purposes is good enough. Then about availability of the software, like the previous talk, so there's not yet on PyPy. I need to remove some stuff from the client side that is proprietary for the clients that I developed some software for and you would recognize where I get scraped the software from. So I need to get it out. But once it gets up there, you'll be able to see it on PyPy with using a real mobile browser client and a real mobile browser server. And I will also update the YouTube video with that information when this is available. So that's also almost the end of my talk. I can take some questions now. I can also give some real world examples for what I use it, not for clients. Let's do the questions first. 
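A minimal sketch of the headless setup mentioned above, using pyvirtualdisplay (which starts an X server such as Xvfb for you) so the Selenium-driven browser does not need a real desktop. The display size and the choice of Firefox are arbitrary for the example.

```python
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1280, 1024))
display.start()

driver = webdriver.Firefox()        # opens inside the invisible display
driver.get("https://example.com/")
print(driver.title)

driver.quit()
display.stop()
```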
The world examples make the debate. There is a microphone. One, two. Hi. Usually these kind of problems we have when the page is kind of single page application or JavaScript driven and it's usually talking to API, right? No. I'm not using it. Like if there's an API available, you might just want to use the API to get the data. I'm looking at pages that are not designed for, don't have an API to get to the data. Okay. So the main problem is that you need to be sure that page is completely loaded. Yeah. That's what I'm saying. So you might look at some specific element on the page if it's already there or not. If you immediately check you might not have the table at all, then the table gets there but you don't know if all lines have been loaded. So there might be some indication that there's going to be like 15 results and you have a table of 10 items. You know that five results still need to be loaded. And sometimes it's just waiting for hoping that everything arrives in time. Yeah, sure. But this with Selenium looks pretty complex and a lot of stuff are used. Isn't it easier to do something like, I don't know, sleep one second and check, I don't know, content of the page or something? Yeah, yeah. But you still need to use Selenium to get like is the content there or download the whole page. But you also have to request Selenium for that. And Selenium is, if you don't use Selenium but would go back to like using requests, anything that gets loaded by JavaScript, you will not get at all. Because request doesn't handle the JavaScript. Yeah, yeah. So it's, there's different ways of addressing these but it's, yeah. One problem we had when using Selenium to access data was that these pages sometimes have date pickers and other elements that do not allow you to type in data. And these are usually very complicated to automate. Have you had these problems and do you have ideas to handle these? Well, this, actually there's multiple things that you can do. I have seen these problems. So you have, if I recall correctly, like there are Selenium calls like just write in this field. But there's also Selenium calls where you click somewhere and just sending characters. And you have to make sure that your cursor is at the right position and it will get there. I've done that with, for instance, Khan Academy. The website has that kind of problem. So you can get around that. It's not trivial but there's different ways of getting the data. Actually, and I would have to see if the protocol has some option of what of the two to use. I don't recall. Hi. Thank you for your talk. One problem I got when using, oh, sorry. One problem I got when using Selenium to do about the same thing, not really the same way but same tools, is that a lot of people don't want actually the data scrapped. So they're using services like Distil Networks, Cloudflare, that is proxies that will try to detect patterns while scrapping. And when they think you're not human, they will put some capture. Did you encounter this problem? Well, one of the reasons to do the client server architecture is that one of the most frequent things I've seen is that they notice that you log in like seven times a day. It's like, why is that? Why doesn't the cookie persist? And those kind of things, if you know that, this is one of the examples that I have. Let's see Stack Overflow. Because they will actually detect how often you refresh and restrict that. 

And if you want to advance on the Qs, on the review Qs and get like a thousand reviews and get the gold badge, you have to do special things and load balance where you're actually looking. It depends, of course, on the site. If you have a thief and making a better look, they will look at the patterns that you're using, try to detect it. But essentially, if you behave like a normal, if you have your program behave like a normal person, they can hardly kick you out. And for me, that's for some sites, for clients, that means I do the scraping and it takes two hours. But they only want to have it done once a day and they cannot disallow you to, like, they put up some, say, references to 10 PDF files on a day. Well, they can assume that you need to read the PDF and they don't want you to, like, within five seconds download all the PDF files. But if you download one every two minutes, you still can provide your client at the end of the day with the 10 PDF files that were uploaded. That is the way I handled it. I just have my program behave like it, as if it was a human. And that has to be accessible. Say it again. Yeah. Yeah, that's set up second account. You have to say a second account. Look at it. May I have time for one last question? Yes, I just wanted to add there is some ways to run Selenium headless with without using by virtual display with phantom GS and crumbling. I mean, there is ways to run Selenium headless without using part of being a by utero. Selenium has some modes where you don't like you don't get a browser window. The disadvantage is that it's not using a real browser. So that might be detected. And the other thing is if things go wrong, you have nothing to look at. You have your HTML structure. And the nice thing is if you use by virtual display, you use VNC and you see the browser that you would normally be using. It's like, oh, it's in that state. You don't it's much more recognizable if it now after six months ask you for to change your password. If you see that like you have to change your password in instead of like getting the HTML back and it's like, what is it actually trying to do that? But that is also possible. This is just there are multiple ways of addressing these things, but everything has advantages disadvantages. Okay. Thank you, Anton. Thank you very much.
Anthon van der Neut - Beyond scraping Scraping static websites can be done with `urllib2` from the standard library, or with slightly more sophisticated packages like `requests`. However, as soon as JavaScript comes into play on the website you want to download information from, for things like logging in via OpenID or constructing the page's content, you almost always have to fall back to driving a real browser. For web sites with variable content this can be a time-consuming and cumbersome process. This talk shows how to create a simple, evolving client-server architecture combining zeromq, selenium and beautifulsoup, which allows you to scrape data from sites like Sporcle, StackOverflow and KhanAcademy. Once the page analysis has been implemented, regular "downloads" can easily be deployed without cluttering your desktop, run on a headless server, and/or run anonymously. The described client-server setup allows you to restart your changed analysis program without having to redo all the previous steps of logging in and stepping through instructions to get back to the page where you got "stuck" earlier on. This often decreases the time between entering a possible fix in your HTML analysis code and testing it down to less than a second, from a few tens of seconds in case you have to restart a browser. With such a setup you have time to focus on writing robust code instead of code that breaks with every little change the site's designers make.
10.5446/21109 (DOI)
Let me introduce Antonio Spadaro, and he's going to talk about how to Python-power a mobile robot. So please, a big hand for Antonio. Okay, thank you. In this talk I will speak about how to build and control a Python-powered robot. Let me begin with something about me. I will still be a school student next year. I am a Linux user and a Python programmer; I live in Italy and I come from Sicily. This robot is only a small project, because it is quite easy to make. What does this robot do? It streams its camera — it has a Pi camera. It does not hit the walls, thanks to an ultrasonic sensor. And it recognizes and watches humans: when the Pi camera recognizes a human, the robot moves the camera to follow that person. So what is the hardware? There is a Raspberry Pi 3, a Pi camera, four DC motors, two motor drivers to control the DC motors, two servo motors, an ultrasonic sensor and a power bank for power. This is the robot, and this is a small schematic of the hardware. At the center we find the Raspberry Pi 3. At the bottom we can find the Pi camera and a servo motor. The Pi camera sends its data to the Raspberry Pi 3, which then moves the servo motor. The Raspberry Pi reads the data from the ultrasonic sensor and controls the motors so the robot does not hit the walls. A PC controls all the motors and servo motors through the Raspberry Pi. The software is a socket server on the Raspberry Pi, a camera socket server on the Raspberry Pi, and two GUIs: one to control the robot and one to receive the frames from the camera. The socket server commands the motors, the servo motors and the camera; it drives the servo motors from the camera data and the motors from the ultrasonic sensor. The communication protocol used is Wi-Fi. But why Wi-Fi, why did I not use Bluetooth? Wi-Fi is faster, it has more range, and we can connect more devices, so we can send high-quality images to and from the robot in a short time. OpenCV: this is an awesome library for recognizing humans. It is an acronym for Open Source Computer Vision, and it lets you manage and analyse images on the PC. But how does it work — how does it recognize objects, humans, animals? First, images are used for machine learning in OpenCV; it tries to fit a model and returns a predictive model. Then, given a new image, it returns the position of the object if it finds it. This is a small example of how it works. First we import OpenCV. Then we open the camera of the PC and connect to it. Then we load the Haar cascade, the predictive model, from an XML file. In a loop we capture a frame, we make a grayscale version of the image, and we search for faces in the frame. The faces are returned in an array with the position of each face and its width and height. Now we draw a rectangle around each one, then we show the image with the rectangles. Finally we check whether exit was pressed, and if so we exit. This is the result: it recognizes smiles, people, pedestrians and other trained objects, because it uses a simple algorithm that tries to find similarities with the trained model. This is the performance on a Raspberry Pi 3: finding a face in a 640 by 480 image takes about 0.52 seconds, and in a 320 by 240 image about 0.15 seconds. But we can optimize it: we can insert a simple line of code where we specify the minimum size and the maximum size of the face. This is the result of the optimization.
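A sketch of the OpenCV detection loop walked through above: open the camera, load a Haar cascade, detect faces on a grayscale frame and draw a rectangle around each hit. The cascade file path points at the stock file shipped with OpenCV (adjust it to where it lives on your system), and the `minSize`/`maxSize` arguments are the speed optimisation just mentioned.

```python
import cv2

cap = cv2.VideoCapture(0)                      # first camera on the machine
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(
        gray, scaleFactor=1.3, minNeighbors=5,
        minSize=(30, 30), maxSize=(300, 300),  # limit sizes to speed things up
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to exit
        break

cap.release()
cv2.destroyAllWindows()
```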
Instead of 0.52 seconds it becomes 0.17 seconds, and instead of 0.15 seconds it becomes 0.04 seconds. But let me go back: why did I not use an Arduino? The Arduino does not support Python, the Raspberry Pi is more powerful for computation, it has Wi-Fi, and the price is roughly the same. In the end it is also smaller, because if you take an Arduino and add a camera module or shield, it becomes bigger than a Raspberry Pi 3. So I used a Raspberry Pi 3 instead of an Arduino. To command the robot, I use an asynchronous socket server that allows connections from multiple clients, and with it we can control the hardware. To send the frames from the Raspberry Pi 3 to the client, I use another socket server. But the performance is a bit of a problem: a big resolution means a big frame delay. For example, with a 640 by 480 image we get video at about 4 fps. Grayscale is faster: if we want fluid video, we should use a lower resolution with grayscale; if instead we want quality with colors, we can use a bigger resolution, but it will be slower, because color has three channels while grayscale has only one. When we start the robot, the Raspberry Pi 3 creates a Wi-Fi hotspot; the client can connect to the hotspot, and then we can control it. On the Raspberry Pi 3 I use the Pi camera over its dedicated camera interface, which is faster than a USB port. With the servo motors we move the camera towards the face. Okay, this is the robot, a little robot. Here we can see the Raspberry Pi; this is the power bank that powers the Raspberry Pi and the motors, and this is the ultrasonic sensor. Okay, now it is powered on. To control it I use a joystick gamepad. Now, on the PC, you can see a new Wi-Fi hotspot created by the robot. Okay, now I'm connected to the robot. We can move the robot and the servo motors with the joystick: we can rotate the robot, we can go backward and forward, and rotate on the spot. We can enable the camera, and now I can see the output of the camera. This is the streaming. At the moment it is at the minimum quality, in black and white; it is faster, but black and white. We can set a nicer quality, but it will be slower, and with color it is slower as well. Okay. Thank you. Anybody has questions?
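The transcript does not show the motor-control code, but since the project description names gpiozero, here is a minimal, hypothetical sketch of driving the two DC motors with gpiozero's `Robot` class. The GPIO pin numbers are placeholders — they depend entirely on how the motor drivers are wired to the Raspberry Pi.

```python
from time import sleep
from gpiozero import Robot

# (forward_pin, backward_pin) pairs for the left and right motor drivers.
robot = Robot(left=(7, 8), right=(9, 10))

robot.forward(0.5)     # half speed forward
sleep(1)
robot.left()           # rotate on the spot
sleep(0.5)
robot.stop()
```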
Antonio Spadaro - Build and control a Python-powered robot. During this talk you will see how to make a robot able to recognize people with a Raspberry Pi as main board and Python as language. The talk will cover the hardware and modules, discuss briefly the alternatives, and finally show a live demo. ----- The robot uses two main modules: - **OpenCV** (_Open Source Computer Vision Library_), an open-source library that includes several hundreds of computer vision algorithms. Usage ranges from interactive art, to mines inspection, stitching maps on the web or through advanced robotics. - **gpiozero**, a simple interface to everyday GPIO components used with Raspberry Pi. The first is used to recognize the people and the object; the second to control the robot.
10.5446/21110 (DOI)
Before lunch break the next session is about CFFI and we don't need presentations for the speaker. Army has been a long time well known member of the pilot community working on PIE-PIE and CFFI and other stuff. Welcome, and thank you. So today I'm going to present mostly CFFI and I'm going to talk a little bit about PIE-PIE as well because we need to have one PIE-PIE talk at every Euro-Python and we haven't any this year so well. So first CFFI, what is CFFI? First CFFI is a project that we created about 2012 and it is actually a very successful project according to download statistics of PIE-PIE. You can see numbers like it's 3.4 million downloads every month nowadays and it's actually it has beaten Django, cool. And I mean the main reason why it is so much successful is that there are a few very successful projects like cryptography that have switched to it. So it means that every time you do PIE-PIE install cryptography you also actually install CFFI as dependency. PIE-PIE is probably a successful project, it's harder to say for sure and I will talk more later. So let's start with CFFI. CFFI is how do you call C code from Python, right? Because obviously you have C code, like everybody has C code. Most libraries out there are actually C stuff. And if you want to call one of them, then you need something. So CFFI is just one more thing, one more solution to call C code from Python. And the name CFFI comes, well, it's boring, just means C for function interface. It shares ideas from a lot of projects actually. The original motivation comes from LuaJit, LuaJit's own FFI module is similar. But then we took a lot of ideas from other projects like Siphons, C types, Sweg and so on. So here is a demo. Let's say you want to call this essential function from any POSIX system, get PW9. What do you do? Well, you first do man, get PW9, you see a man page. The man page contains this. Like it tells you, okay, you need to include this and that. And then you get this function, get value num that takes a char star argument, written in your strict password star and a bit later in the main page, the strict password should have roughly these fields like PWNAM, PWPASSWORD, PWUID. UID is the type, UIDT. It is all fine if you're programming in C and this is all a mess if you're programming in Python. So what do you do? You write this code in Python script. You make a CFFI builder, you do CFFI builder.cdef, triple quote, triple quote, and here is a big string. This big string you copy and paste parts of the man page. Like I'm going to say type def int UIDT, except I'm not exactly sure it's an int, right? It could be long, short, whatever, because it's C. So I'm going to say int dot dot dot, which means in CFFI, it means some kind of int, but I'm not sure exactly which kind of int. And then you do same with the strict password. You say it's a structure that has field PWUIDT. I know that's a UIDT, but then the strict passwords contain tons of more stuff and I don't know what they are. They really depend on the platform that you're running on and so on and so forth. So you just say colon colon dot dot colon for what it just means and other fields here. Right? And the dot dot dot are really meant as the source code. It's not meant as this demo glosses over details, right? And then you copy paste the line for get the PWU name. That's easy. Okay. And then the man page also had something about include. So we paste them here, some other declaration. 
And in this FFI builder dot set source, we also say give a name, PWUIDCFFI, that's the name of something that we want to create. Okay? So you put these two slides into one file, one Python file, you run it. You run it. And then you copy get PWUIDCFFI dot SO. And now this PWUIDCFFI dot SO is a standard C Python executable extension module. So once you got it, in your main program, you import lib from this module and then lib is something that has an attribute, well, a function, built-in function called get PWNN. And you call it. And when you call it, you're going to get a struct password. So you can read the field, PWUID, print it. And this works. So in this simple way, we have made an interface to call this C function from Python. And that's it. Okay. So what I'm going to talk about now is, yes, it's not completely as simple as that in all cases. So I'm going to have some more examples about more complications. Like the first one is that actually in this built-in module, you get two objects. There is lib, there is also an object called ffi, and this ffi contains general helpers that you may need to call at some point. So the general helpers, yes, sorry, this is also, this is also other things that you can do in the C def. You can declare your function. You can also have types that are completely opaque, like dot, dot, dot. This is, for example, what you get if you have a C library that has an interface, like make a window and returns your window star, but you don't need to know or care what is the type window. And then hide window, destroy window, all these C functions. Okay. So now you have the ffi object. Now in the ffi object, you get a few helpers. For example, if you really want to make a C structure, like here I want to make a structure that is of type char underscore. So char brackets. That means, like, if you know C, you know exactly what char bracket is, right? So this is generally the approach of C ffi. You need to know a little bit of C, but then if you do know a little bit of C, then it's easy because it's the same. So who you can, with ffi.new, you are creating an object of type char bracket and you're initializing it from a string. So you get in P some C data of type char bracket and it owns 12 bytes. And if you count, actually, it's the number of characters plus one because there is a terminating null character as traditionally in C. And you can, well, you can index it. You can read or write two individual items. You get also another kind of C data. For example, by the code that we did before, lib.getpwnam. Well, first, first we did it before in the example by giving directly a string. But you can also give it a char bracket, which means an array of characters, like P in this example. And, well, in any case, you get as a result Q, which is actually another C data of type char bracket, and it leaves that with address in memory and then you can index it. Sorry, you can get its attributes. So those attributes are just the field names of this. So from this, from such a Q, you can also cast it to void star or to anything else. This is a C rule of cast, you can cast it to another pointer type. You can cast it to an integer type. 
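Written out, the build script described above looks roughly like this (module and file names are the example's, not necessarily the exact ones used in the talk). Running it once produces the extension module that the main program imports.

```python
# build_pwuid.py -- run once to produce the _pwuid_cffi extension module.
from cffi import FFI

ffibuilder = FFI()

# Copied (and trimmed) from "man getpwnam"; the "..." parts tell CFFI to let
# the C compiler figure out the details we are not sure about.
ffibuilder.cdef("""
    typedef int... uid_t;
    struct passwd {
        char *pw_name;
        uid_t pw_uid;
        ...;
    };
    struct passwd *getpwnam(const char *name);
""")

ffibuilder.set_source("_pwuid_cffi", """
    #include <sys/types.h>
    #include <pwd.h>
""")

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)
```

```python
# main program: uses the compiled extension module.
from _pwuid_cffi import ffi, lib

p = lib.getpwnam(b"root")        # bytes for a char* argument on Python 3
print(p.pw_uid, ffi.string(p.pw_name))
```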
Like if you want to cast, if you have a pointer and you want to really get the number that represents, that represents this pointer address, then you cast it to an integer type and, well, I mean, I could have written ffile.cast long or int, but instead I'm using the type int ptrt, which is an official C type, that means an integer that is large enough to contain a pointer. But it's just the same, and I'm getting a number that is an integer valued pointer. So these are the kind of things that are in the ffile object. You also have ffile.string. So this is an example where in my structure I have PWU ID, okay, that's 500, but I also have PWU name, and reading it returns a char star. And then from this char star, you can convert it back to a Python string if you want. So this is what ffile.string is for and so on. An example if you're doing something a little bit more complex and you really want, you have a Python object like this x, this x in this example, I want to have this Python object casted to a void star that the C code will just carry around, and then at some point later the C code will give us back the void star, and from that void star we want to go back to the Python object. I mean, this kind, this is standard for example in all callback systems, like if you register a callback for a C library, typically you give it the function to callback, and you also give it some kind of void star argument that the C library will just store and it will give it back to your own callback. So in order to do that, you would use ffile.newHandle, cast any Python object to a void star, then you save away, fish it again, you get a void star that happens to contain the same value as a void star, and then from this value you can go back to the original x object using ffile.fromHandle. So this is just one example of more advanced things. Well, cffi as a whole supports more or less the full C language, which is actually not so huge. I mean, it supports the full C language, I mean, of course, not the full declarations of C, like what you can, what types you can declare, what, how you can call functions, various calling conventions, and so on and so forth, like on Windows Cdev, on Windows Cdecl versus STDdev, no, STD call, sorry. This is supported by Cffi. Okay, so it's more than this short introduction suggests, of course. Like if you really want, if you have some larger library, the C library that you want to interface with, a typical example is that such a library, well, you don't want to expose directly this library, but instead you want to expose some kind of Python, some kind of Pythonic wrapping of the library. So what you do is you write your Python wrapper that itself use Cffi, but you use it internally, like you write your classes on nice functions in Python, and inside internally you would use this C data object, but you would not actually expose them to the rest of the users of this wrapper that you're writing. So this is typical, so basically instead of, well, instead of writing, for example, Cpython C extension module, where you would write in C bits everything, like you write your Cpython native types and so on and so forth, and then you get only the C extension C that people import and use directly. But here with Cffi's idea is more that what people import and use directly will be the Python wrapper that itself use Cffi. Yes, well, there are actually a few other use cases that I did not really speak about. 
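The ffi helpers mentioned in this passage, collected into one small sketch. It assumes the `_pwuid_cffi` module from the build-script sketch above has been compiled.

```python
from _pwuid_cffi import ffi, lib

p = ffi.new("char[]", b"hello world")      # owns len("hello world") + 1 bytes
print(len(p), p[0])                        # 12, and the first byte

q = lib.getpwnam(b"root")                  # a <cdata 'struct passwd *'>
print(q.pw_uid, ffi.string(q.pw_name))     # ffi.string: char* -> bytes

addr = ffi.cast("intptr_t", q)             # pointer cast to an integer value
print(int(addr))

# new_handle / from_handle: smuggle any Python object through a void*.
x = {"any": "python object"}
h = ffi.new_handle(x)                      # a <cdata 'void *'>
assert ffi.from_handle(h) is x
```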
Now you can use Cffi in the mode that is called ABI as opposed to API in which it's a mode where you don't have any C compiler involved at all, and then, well, you get more like C types as in you have to declare exactly your structures and your functions and you're not allowed to make a mistake and you're not allowed to use the dot, the colon, colon, no, the dot, the dot syntax hash out. Well, there is also support for embedding instead of extending, which is the case where you have your big program that is written, not in Python at all, but it just wants to import and use Python for embedding. So for this case, there is a mode of Cffi in which you can write, so you write Python code, you declare with Cdef, the thing you declare with Cdef becomes the interface that is callable from the C code, and then the rest of the program calls this interface and calls into your Python code directly. Okay, well, C is a dot, basically. Yes. So let's talk about PyPy for about three minutes. PyPy is a Python interpreter. It's different from the standard which is Cpython. The main goal of PyPy is speed, when run PyPy, you get an interpreter, looks like Cpython, there are four instead of three greater than signs in the front. But was it just the same, basically? You replace Python, my program, with PyPy, my program, with Py. It's cool, et cetera, please use it. The main difference is that, for example, it's implemented in very different kind of garbage collections on Cbuffins, moving generational incremental garbage collector. If you don't know what this technical terms mean, it's fine. What they mean mostly is that because it's moving garbage collectors, we have trouble implementing the Cpython C API interface. So it's hard for PyPy to import a Cpython C extension. It's possible because of, well, because we did tons of facts, basically, and it's possible and it's slow, et cetera. So it kind of works, I would say it works better and better, as in we can mostly do it for NumPy, for example, nowadays, mostly, so soon announcements, et cetera. But yes. Well, PyPy is great right now if you use Python and don't rely on a lot of the extension module, for example, everything. A lot of examples of web services are like this, like in POD Django or stuff, whatever, huge libraries. But that's typically written all in Python, so it works very nicely on PyPy. Okay. Yep. Well, the Cpy is large and it's a mess to implement in PyPy. Well, I would argue, actually, that this C API of Cpython was actually part of the success of PyPy, of Python, sorry, the historical success of Python. Why Python worked or started to be really useful like 10 years ago or 15 years ago? It is also because it has this C API and people actually use it to actually build interesting things on top of it. But, well, on that, you have all these binding generators that have been built on top of it, so you don't need, well, you can write C extensions manually, but you can also use these other tools that would generate C extension for you. And CFFI is just one more such tool. Well, the different, the CFFI is a bit different, I would say, because the goal is really to not expose any part of the Cpython C API. As in, yes, you can write C code with CFFI, but the C code that you write should not use any PyObject star or any PyInt from long or any of this function from Cpython. So it means also that it is possible to port this whole CFFI module to other interpreters than C Python, and that's what we did. 
So that's one of the motivations for CFFI in the first place, is that it is possible to write a PyPy version of CFFI. And indeed, we did, and the example, the demo I showed in the start of this talk, well, it works just exactly the same on top of C Python or on top of PyPy. Well, it is actually faster on top of PyPy because PyPy's JIT compiler knows a little bit about CFFI and is able to compile, to read, produce machine code that will directly call the C function, for example. So it's extremely fast, basically, on top of PyPy. But it does not mean that it's extremely slow on top of C Python. On top of C Python, the performance is acceptable as well. So, yes, it works on C Python on PyPy. It would be easy to port to other Python implementations. It has not been done so far, as far as I can tell, like JSON or Rn Python. So yes, the main benefit is that it is independent on, it no longer depends on the C Python, C API. So yes, use CFFI, it's easy and cool, and it is supported by non-C Python implementations, is conclusion of my talk. Thank you, Armin. Are there any questions? First. I have been, well, working with CFFI like a few years ago. I also did, well, mainly on Node.June, which is a C-based IoT framework. And I encountered, like, it was really hard to create when you have complex projects to create all the headers, so that, well, pre-compile all the headers to feed them to CFFI to create the library. So I actually, it wasn't documented at the time, I don't know if it's now, but you can actually run the, use a compiler to pre-compile your header, so that it includes everything. This has improved, yes. Now it's a cleanly separated two-steps process. Like you really write a separate Python script that declares what you want, then you run it once, you get your extension module, and then you use it from your main program. So it's better than it used to be, yes. Thanks for the talk. That looks really cool. I have a question about PyPy, actually, that I'll ask you because there is no separate PyPy talk, it seems. What is the status of Python 3 work there? Like, it would be nice to get to 3.5, yeah. Is it anywhere near? I suppose if I were to give an estimate of time, like, I cannot, obviously, but imagine that I could give an estimate of time that I would say that next year should be nicely progressed towards PyPy 3.5, yes. And what kind of help do you need, money, people? Yes, well, we need people on time. Get money. Or I'll forget money. Hi, thank you. Another question about PyPy. There is some kind of tool to embed PyPy like PyInstaller or PyTweaks, something like this to embed and distribute binaries of this. I don't know. Is that answer? Any more questions? I'm the wrong person to ask as opposed to that, yes. I'm sorry. Armin, thanks for the talk, thanks for PyPy, thanks for CFFI, it's amazing, I use it quite often. And I was wondering when you have the declarations with the ellipses, like, dot, dot, dot, don't care, you figure it out, okay? Can you, like, in very simple terms explain how it goes out and finds out? Because it always works, okay? So it's very good. Like this, for example, UIDT is some kind of integer, but we don't know which at all. So the magic is to write one piece of C code that will work just by compiling it with normal C compiler. So also, well, every single one of this dot, dot, dot is different kind of magic like, for example, the type def int UIDT. This probably contains, I'll have to talk again. 
Yes, yes, yes, I mean, it must be something like, like, like, you write one big C expression that says, size of UIDT equal equal one, question mark, then I'm going to use this else, size of equal two, then I'm going to use that, et cetera, et cetera. And then you do an extra round of magic to know if it's signed or unsigned. Like, yeah, I mean, for sign versus unsigned, it's something like, you take minus one, you cast it to UIDT and you ask it, is it positive now? So we're at the end of the normal sessions time from 30 minutes, but food won't be there until quarter two. So people want to ask more questions and sit around, but just want to let you know that if you have to do something in the half past, you have to do it now. Thanks for your talk. I have a question about defines. We have a project with a lot of defines that are constructed dynamically during compilation from a lot of nested macros. And it's possible to use them by name in, I mean, in Python code because actually I don't know they're writing. So in defines like constants? Yeah, for example, we have a driver that use some IOR operations. And these commands are constructed from Linux macroses. But actually I don't know what they're doing. Some shifts, some source, some ors. Can I use them by name? Yes. I mean, use dot dot dot. Basically, you say here in C ref, hash define, name, space dot dot dot. That means it's some integer. I don't know which one. Figure it out. Any more questions? If not, thank you, Armin, and see you next year.
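To make the answer to the last constants question concrete: a `#define NAME ...` in the cdef asks the C compiler for the constant's actual value, however it was built up from nested macros. The header and constant name below are invented for the example; the pattern is the point.

```python
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("""
    #define MYDRIVER_IOCTL_RESET ...
""")
ffibuilder.set_source("_mydriver_cffi", """
    #include "mydriver.h"    /* hypothetical header defining the macro */
""")
# after compiling:  from _mydriver_cffi import lib;  lib.MYDRIVER_IOCTL_RESET
```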
Armin Rigo - CFFI: calling C from Python In this talk, we will see an intro to CFFI, an alternative to using the standard C API to extend Python. CFFI works on CPython and on PyPy. It is a possible solution to a problem that hits notably PyPy --- the CPython C API. The CPython C API was great and contributed to the present-day success of Python, together with tools built on top of it like Cython and SWIG. I will argue that it may be time to look beyond it, and present CFFI as such an example. ----- I will introduce CFFI, a way to call C libraries from Python. CFFI was designed in 2012 to get away from Python's C extension modules, which require hand-written CPython-specific C code. CFFI is arguably simpler to use: you call C from Python directly, instead of going through an intermediate layer. It is not tied to CPython's internals, and works natively on two different Python implementations: CPython and PyPy. It could be ported to more implementations. It is also a big success, according to the download statistics. Some high-visibility projects like Cryptography have switched to it. Part of the motivation for developing CFFI is that it is a minimal layer that allows direct access to C from Python, with no fixed intermediate C API. It shares ideas from Cython, ctypes, and LuaJIT's ffi, but the non-dependence on any fixed C API is a central point. It is a possible solution to a problem that hits notably PyPy --- the CPython C API. The CPython C API was great and, we can argue, it contributed a lot to the present-day success of Python, together with tools built on top of it like Cython and SWIG. However, it may be time to look beyond it. This talk will thus present CFFI as such an example. This independence is what lets CFFI work equally well on CPython and on PyPy (and be very fast on the latter thanks to the JIT compiler).
10.5446/21112 (DOI)
I'm Maria, I'm from the SoneTall, I'm a big amazing library and she is one of those unsubstantiated networks. Good afternoon. Hi everybody, so I hope you're not here only for the comfortable chairs and wait for the lightning talks. In this meantime I will talk to you something about networks and bouquet and how you can plot networks even if there's no support in there for directly plot networks. About me, I'm a junior software engineer at Blue Yonder. I do not use this at all work, it's just a side project so at the moment it's not used in our company. So yeah, that's it. So I hope most of you maybe heard the talk of Fabio yesterday. So do you hear it? Yeah. So you know, most people know what bouquet is, it's a great visualizing library. And yeah, I will show you basics, how you can handle data, how you can manipulate it, that you can go back and change something or get effects. So why did I do this? So during my master thesis I was working with networks, some kind of social networks. We wanted to explore them and the problem was we wanted to seize them. We wanted more than just tables or some columns to read them about them. So we wanted to visualize them and we wanted to see some properties. So we wanted to see it in the browser so maybe we wanted to include it into an app. And I came up with this. I generated the networks and the properties and I stored them in a database. Okay, I wanted to visualize it with D3. It's a nice, swishy knife or what it's called, but it's extremely complicated, but it's powerful. So I had to provide the database and I created a RESTful Flask app. So it's a lot of overhead and a lot of programming just to do some visualization to explore. So the question was can we do this better? Of course we did, or we can. So a friend of mine reminded me so there's this tool. Okay, you can look at it and I was thinking, okay, I will try the same now with the library. And it's much easier. So I did not have to handle any JavaScript code at the moment. I do not have to care about how do I get the data to the client running in my browser. Okay, it's doing this for me. I also can explore, start my visualization app in a notebook, in a Jupyter notebook. And this is really great. And on top of this, I can change a network. I can change my graph, I can manipulate it, and I can effect specs. So if I select something, I can get this back. So I will now show you how it's done. So I will create a network, I will show it to you and all the code you need for it is part of this slide. I did not let any code out. At the end, there's a more complex example there maybe, but here you see all what is necessary. So I need some example data. So I was thinking about using some, usually this example data like Lemyserable or something like this. But yesterday I had the idea. So we had the EuroPython and people like to use Twitter. There's some nice Twitter modules like TPP. So I used it at the information from the user EuroPython. And now the user EuroPython sometimes sees link to a lot of people or an author uses EuroPython and links to another people. So it can create, use this data and create some kind of social network. So authors are connected to each other, maybe they treat it more so the weight on an edge might be higher. So this will be useful for a network. So I have my data now. What is the next step? Yes, you need a network. So as I said, sadly at the moment, Bokit doesn't support it out of house, but we can do it our own. So we use network X and we load our EuroPython data. 
I could have done it live here, but I was a little bit afraid of the Twitter limit. So I did not do it. So I started in a Gmail file. So I created the network using network X and it has also function to write it to a Gmail file. So I now import this file back and I get my network. What I do now, network X can draw, but it usually draws with my putlib and it's static. So I can use the layout from network X to create a layout. And I can use this layout to fill in Bokit and there I can get an interactive visualization. So I put in my network, I put some values in Bokit, case just says how much distance would you have between some nodes and it's an iterative algorithm, so I can say some number of iterations. If you're a little bit more interested, what it exactly does, you can go on the first Wikipedia page, force directed graph drawing. So what it basically does, it creates spring forces between nodes and then you have a 3D model and put it on a table and then it tries a few iterations to get rid of the friction and then you have your nodes on some positions. So this is basically a spring layout or a force directed graph drawing. I will use this layout now or later. Now we have to do some work around, not work around, we have to get the data in a format we can use in Bokit. And the cornerstone in Bokit is usually, I would say, it's the column data source. It's one kind of, I think, three or four data sources, but I think it's the most important. It's the one you probably would see first. So it's a class where you can store data column based. So you see on the left there's an ID, of course, because they're usually all lists here. And I store there the X coordination, the Y coordination and the node name. So the first row says, the Björni, it's my Twitter handle, is located at the position 213. And the nice thing about this column data source is you can change it. You can add data, you can add columns and you can change it. And you will also get effects back. So if someone selected a node in your graph, this is the point where you get information about which node is selected. So you can use a lot of lists. You can tuples, you can use pandas data frames to create those lists. But at Netflix, you usually have a dictionary first. And so we have to do a little bit of transforming the data. And this is a drawback at the moment. So you have to copy the data. So I get the layout. I have the items. So the key in the layout is usually the node name. And after the node name, the value is a tuple of the coordinate of the node. So we have to extract those values and we have to put them in lists so that we can create our column data source. So we just extract them, they use it, they use them in the dictionary and put it back in the column data source. So now we have our node source. Now we can finally plot something. How is this done? Here's a little bit of code. You can ignore first the hover code, but just look at the figure plot. The figure just creates your drawing area. So you define how big it is and you say something else. You say which tools you want. So tab means you can click on nodes. Hoover is now the HooverTool from above. So that you move your mouse above a node, it will show the name because I know that I have the column name in my data source. And also I have the ID or the index. This is a property which is always there. And then you hover over it and you will see the ID and the name of a node. The next step is I want to see my circles and this is done by plot circle. 
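A minimal sketch of the steps just described: load the stored GML file, compute a spring layout, put the node positions and names into a ColumnDataSource, and plot circles with a hover tool. The column names and figure settings are illustrative, and the exact Bokeh import paths and keyword names may differ between Bokeh versions.

```python
import networkx as nx
from bokeh.io import output_notebook, show
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure

graph = nx.read_gml("europython_network.gml")

# Force-directed ("spring") layout: maps every node name to an (x, y) position.
# k controls the preferred distance between nodes, iterations the refinement steps.
layout = nx.spring_layout(graph, k=0.2, iterations=100)

nodes, coords = zip(*layout.items())
xs, ys = zip(*coords)
node_source = ColumnDataSource(dict(x=xs, y=ys, name=nodes))

hover = HoverTool(tooltips=[("index", "$index"), ("name", "@name")])
plot = figure(plot_width=800, plot_height=600,
              tools=[hover, "tap", "pan", "wheel_zoom", "reset"])

# One circle per row of the data source, positioned by the x and y columns.
r_circles = plot.circle("x", "y", source=node_source,
                        size=10, color="navy", level="overlay")

output_notebook()
show(plot)
```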
It will generate or it will create a renderer. It's the R cycle, it's a cycle renderer. And now I put my data source in here. So you say source is my node source. And now I want to have X and Y. So here's X and Y. So I say the first is the column name X and the column name Y. And they will be used for the positioning of the circles. And I have some fixed values for color blue and for the level. And the level overlay just means it's above the lines later. And it's 10 size. So we have now this. It's a really a network, it's just points. Okay. So we need some more work. It's not so much, but we have to add some edges. And to add the edges, we have to prepare the edges again. So we just take the layout and the network. And we extract the positioning of the nodes again because we want to connect nodes. And what we do here is I get the data off of the edges. So if I say network edges and data is true, I will get the edges and the weight, which is the data attribute for every edge. And now I calculate some maximum weight because I want to do some alpha coloring of the lines. And so I can calculate a value between 0.1 and 0.6. And I put all of this in lists of a dictionary. And those lists I can put back into a column data source for the edges. And now I get a line source. Yes. Now I can plot multi lines. And I do the same circles. I put in the source and say the source is the line sources. And I say for the first point of every line, so now you have tuples in those first two lists. So line is defined by x, y for starting point, x, y for the end point. So x, s is usually the starting point, y, s is the end point. And this is just a name for the columns. And here you see already that we use for alpha the name. It's alpha. So the alpha will be used from the column data source. And okay, you cannot see it directly here, but usually the lines have different coloring of alphas. We'll maybe see it later a little bit better. Okay. This was just a boring network. We want to see a little bit more. We want to see some properties like, I said, centrality or maybe clustering. So we add those information to our column data source. And it's not so complicated. So network is provide some really cool algorithms. So you can use, for example, multiple centrality algorithms. I have chosen here the betweener centrality. It just means a node where, so you have shortest parts in your network and a node where a lot of shortest parts have to cross through has a high betweener centrality. And now I have a centrality. Again, it's a dictionary. We have to transform it a little bit. We can use it and put in the values as a shifted a mapping to a range. So I want to use this value for the size of the circles. So I say, okay, the least important are has a size seven and the most important have maybe 17. It's just a range mapping. And I say, okay, the new column for my column data says is centrality. And I added to my node source. So my node source has now for every node as a centrality value. Okay. The choice list. Okay. So the next point is I wanted to have some clustering. So which nodes people are maybe a little bit connected because they have been treated about each other. So I use this Python-Movain module. It's in addition to network X and it creates clustering for you. So it's clustering is NP hard. So you will not get all of the same result. And it's maybe quite a calculation needs some time to calculate it. But for this size, it's still great. So even much bigger sizes will work. So I would get a partition. 
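Continuing the sketch from above, this is roughly how the edge data source, the multi_line glyph, the betweenness-centrality column and the Louvain partition could look. The alpha and size ranges match the values mentioned in the talk; the python-louvain package is imported under the name `community`, and the palette is arbitrary.

```python
import community  # the python-louvain package
import networkx as nx
from bokeh.models import ColumnDataSource

# Edges: one line per edge, alpha scaled by the mention count (the edge weight).
xs, ys, alphas = [], [], []
max_weight = max(data["weight"] for _, _, data in graph.edges(data=True))
for u, v, data in graph.edges(data=True):
    xs.append([layout[u][0], layout[v][0]])
    ys.append([layout[u][1], layout[v][1]])
    alphas.append(0.1 + 0.5 * data["weight"] / max_weight)   # between 0.1 and 0.6

line_source = ColumnDataSource(dict(xs=xs, ys=ys, alpha=alphas))
r_lines = plot.multi_line("xs", "ys", source=line_source,
                          line_width=1.5, alpha="alpha", color="gray")

# Betweenness centrality mapped onto the circle size (roughly 7 .. 17).
centrality = nx.betweenness_centrality(graph)
largest = max(centrality.values()) or 1.0
node_source.add([7 + 10 * centrality[name] / largest for name in nodes],
                name="centrality")

# Louvain clustering: a community id per node, mapped onto a small colour palette.
partition = community.best_partition(graph)
colors = ["red", "green", "blue", "orange", "purple", "brown"]
node_source.add([colors[partition[name] % len(colors)] for name in nodes],
                name="community_color")

# Switch the renderer from fixed values to the new columns.
r_circles.glyph.size = "centrality"
r_circles.glyph.fill_color = "community_color"
```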
And now again, I split up the partition, get out the nodes communities here. And the first you have again nodes. I don't need them. We have the communities. And now I can again add some attribute or add a new column to our column data source. It's community. And now I have communities in my data source. Now I map. I just do a coloring mapping because I want to have different colors. I have a list of colors and it used the module module operator to just give every group a color. And now I can see another plot. So I missed something. So you have just the added new column, but you're not using it. The problem is the renderer said we have a cycle surrender has still a fixed size and fixed color. So I just changed them in my column data source. And I say now use centrality and now use community color for the color. And now can plot it. And now you can see different colors, different sizes. And there's a big dot in the middle. It's not. It's your place. So yes, I let it in there because I want to show you now I want to interactively remove it because I don't want to have a social network about people plotting and twittering about your poison. If there's your poison in it, it's not does not make makes much sense. So we have to change it and we want to do it interactively. So I want to see I want to click on a note here. This is a little buggy because it's a slide show. And usually it works also in notebooks. You can go above. I can show it here. You can go here, click on something and you mark it. And then you can can you move. I want to remove it because I say, okay, it's a bad data set. I want to remove it and I want to do some recalculation. So what I can do, I can do interactions and I can get out of column data source which notes I selected. It's a bit tricky data structure in here. So you have one zero D one D and two D. Zero D is just for lines and patch glues or other glues like circles on the one D key. And two D are maybe some multi line drawings like octets or something like this. So we just go there use the one D key and we have the indices of all marked notes currently in our plot. And what we can do now, we can remove it. This is just an example code. You can do it better, I think. So I get the index and I use the index to get the node from my network. And now I can remove the network, I can remove the node from my network. I will pop it out of my layout. But I have to recalculate or restructure my data in my column data source because currently they are not sharing the data. So I iterate over all of the rows over all columns and I remove the index. So you could also remove multiple of them. And again, then you update the data, adjust the new data for every column and you add the dictionary for the updated edges. And then you can remove an edge, can remove a node. But there's a problem. Okay, it's great, but it still has some problems. Not everything is working in a notebook. And as you see, I'm still in a notebook. It's just a slideshow, but it's a notebook. You cannot redraw data sources or cannot redraw automatically if you change a column data source. You can push your changes there or you can create a push and it will redraw it. Or if you run it in a bookcase server, it will automatically redraw it because usually it will iterate and loop over it and will check for changes or you mark it as trick and changed. And another problem is you cannot get those values. So what I showed you here is not working currently in a notebook. The list will always be empty. 
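A rough sketch of reading the selection and dropping the selected node, following the data structure described above. As the talk notes, the selection indices only arrive in Python when the app runs under a Bokeh server, so in a plain notebook the list stays empty; details of the callback wiring are omitted here.

```python
def remove_selected_node():
    """Drop the currently selected node from the graph and the node data source."""
    indices = node_source.selected["1d"]["indices"]   # empty in a plain notebook
    if not indices:
        return
    index = indices[0]
    name = node_source.data["name"][index]

    graph.remove_node(name)      # update the NetworkX graph (removes incident edges)
    layout.pop(name, None)       # drop the cached position as well

    # Rebuild every column without the removed row, then assign the new data
    # so the change is pushed to the browser.
    node_source.data = {
        column: [value for i, value in enumerate(values) if i != index]
        for column, values in node_source.data.items()
    }
    # The line_source columns would need the same treatment for the incident edges.
```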
So you have to do this in bookcase server. Okay, it's still great. I can use a bookcase server to run my app and it's not much of a problem. So you can, another floor back, I have to say yes, if you want to add widgets. So your notebook can add widgets like sliders, buttons, stuff like this. They will run this JavaScript code, but you can translate it or you can run it in a notebook. They will still stay with pure Python function and pure Python callback functions. Good. Now I want to show you that you can do those interactions. So as I said, this is the Europe Python account of Twitter and I want to remove it. So I marked it. I can remove it and you see now it's gone and we have, there's some other connections. You see some strong lines. Those are connections between others. You might be Twittered more about each other than others. And I can switch back. So you see a problem. It's still, there's no, no central person in there because we removed the very central person or a person. I still have to update properties. I push the button. I call update function. I go back to my network. It does some calculations. I will get the information. Put it back in my column data source and I see now more interesting people who might be interested in the network. Interesting to you because they are Twittering a lot here. I think it's the open stack account. They have connections to other people. And yes, but we still have the old layout. So we can update the layout. Takes a while and now we get this layout. Looks a little bit weird at the moment because for the network for the Europe Python is a little bit, I would say we have a lot of people who just Twittered about each other and then we have one to one connections. And you also have still like here nodes, they don't have any connection because we removed Europe Python, but we did not remove nodes who have no other node attached. We can fix this. So we can remove it now. So you see this is one, this is one is gone and I can reset the zoom back and I'm back here and I can update the layout again. And now I can explore. So we can dig a little bit deeper in there. So I'm looking out here. Here's my colleague. He's sitting there. And I think, yeah, he treated the most of our people or our colleagues. And I can zoom in here and can see where, which people he's tweeting about. So here's another colleague. And cool. But I cannot now explore also what happens if you get the meltdown and decides to go to Java or something like this. I can do it again and then I can again update properties and stuff like this. So you see, I did mostly interactive network plotting in just a few minutes. And I think it's quite handy if you just want to explore. You can go further and do some more stuff. And of course you can just switch network X. It's a great library where you can switch it for other iterations. If maybe you want to use NumPy or stuff like this, you want to do some heat development and you want to plot it. Just think about it. You can do it. You can. It's not so complicated to bring it to Boqui and interactively change maybe what you're doing and bring in some values you wanted to change. And I think that's it. I hope you have enjoyed it and maybe learned something. If you want to get the documents, the notebook, the Twitter data and how I get the data, you can go to our company, Blue Yonder Documents. They are the presentations for this year and the last year. Here's the links for the network X and Boqui. That's it. Thank you. Come on. Thanks. 
Is it working with all the layouts which are possible in NetworkX together with Bokeh, or can you customize the layouts further? There are five or ten different layouts. So NetworkX has some layouts, for example a random or a circle layout, but they are not so sophisticated. If I have a specialized one, my own layout, can I use it through this as well, so it will work? Have you tried it? No, I did not try it. But if you have a layout that just generates positions, it should not be a problem. For example, if you want to have a spring layout where you can move clusters nearer together, I think you would have to copy it, maybe fork NetworkX, and then you can bring in some additional forces to draw nodes closer together. It should not be such a problem. It's like in matplotlib, right? You first draw the nodes and the edges and then I can put it into this one as well. Okay, thanks. I just wanted to understand a bit better the connection between Bokeh and NetworkX. So once you've done the initial graph with Bokeh, when you do some more things live, does it go back to NetworkX again or not? Pardon? When you do things at this point. Yeah, I go back to NetworkX. Okay. So if I want to see a different centrality here, closeness centrality, it goes back to NetworkX and calculates it. It's not pre-calculated. It's just Python callback functions. They go back to NetworkX, call algorithms, remove a node on the NetworkX side, and then you have to transform it back and then you can use it. Okay, thanks. Thanks for the talk, by the way. The buttons I see here, are they from Bokeh or have you added them yourself? Okay, this is something extra, it's not on the slide. So it's basically buttons from Bokeh, two lines. You say I want to have a button, then you add an update function to the button, you bring it into a layout and, okay, it's three lines. Two lines and maybe another one. And then you have both buttons and they do what you want. Any more questions? No? Give a big applause for Björn. Thank you.
Björn Meier - NetworkX Visualization Powered by Bokeh Visual data exploration, e.g. of social networks, can be ugly manual work. The talk will be an introduction for the combined usage of NetworkX and Bokeh in a Jupyter Notebook to show how easy interactive network visualization can be. ----- During some work with social network analysis my favoured tool to study the networks was NetworkX. It provides a wide set of features and algorithms for network analysis, all in Python. But the functionality to visualize networks is not very strong and not to mention the missing interactive manipulation. However during the exploration of data: exporting, feeding an extra tool for visualization and then manipulating data manually was a tedious workflow. As I also had the optional target of presenting networks in a browser, I improved this workflow by creating a Flask web application providing interfaces to my networks. On the browser side I created a javascript client based on D3.js. In retrospective the required programming effort in Python and also in Javascript was too much for such a task. And exactly this target, interactive visualization in a browser (and as bonus in a Jupyter Notebook), can be achieved quiet easy now with Bokeh. The talk will be a step by step introduction, starting with the basic visualization of a network using Bokeh, NetworkX and a Jupyter Notebook. Next, how to create interactions with your network which will be used to change a network structure, e.g. a leaving person. As we want to see directly the impact of these changes in a network I will finally show how to update networks and visualize directly how the importance of the remaining people changes. And all this can be achieved with Python and maybe a bit of Javascript.
10.5446/21116 (DOI)
Good afternoon. I want you all to welcome Catherine showing us how to do computer art with Python. Hi, thank you for coming. My name is Catherine and for five years I was a graduate student in physics at the University of Waterloo. Now the University of Waterloo in their Faculty of Science offers a computational science degree. It's a cross departmental degree program that tries to teach people without very much programming experience but a lot of enthusiasm for science how to use programming to be more efficient at science. And during my time as a grad student I was the teaching assistant for the computational physics classes which used Python but very, very basic Python to teach physics concepts such as classical mechanics, integrating differential equations, stuff like that. And today I'm going to tell you about a Python library that I have been writing that was inspired by my time as a teaching assistant. So every great software development project begins with a great user story. So for this presentation let's imagine that our user is the freshman meme guy. So the freshman meme guy is a computational science student. He has no prior programming experience to the computational physics class that he took and he's also a big fan of Quidditch. And the midterm for computational physics came on the day after the intramural Quidditch finals so he flunked his midterm and now he's looking for some extra credit. And his university professor says, okay, the midterm mark will be ignored if you can make me a nice pretty animated visualization of your Keplerian planet simulation. So the freshman meme guy goes off to Google and he looks for 3D drawing options in Python and the first thing that he finds is processing in Python mode. So has anyone used processing in this audience? It's pretty awesome in my humble opinion. It was created by Ben Fry and Casey Reese in 2001 while they were part of the MIT Media Lab. Processing is the name of the project, the IDE and the programming language and it's targeted at artists and beginners. And in recent years it has a Python programming mode. So the freshman meme guy looks up the sample code to draw a red spinning cube and he finds this and it's all human readable. He doesn't quite understand variable scoping so the global word is kind of magic to him but otherwise it makes sense. There's one method called setup that executes once when the program starts and then there's one method called draw that executes every single time the frame or the window is drawn to. And it produces something sensible that looks like this. So the freshman meme guy is happy, he goes off with processing and implements his planets and goes to try to pull out his simulation and put it in processing. But the first thing he encounters when he tries to import sci-pi is this import error, no module, named sci-pi. And now this makes our freshman meme guy very upset because he is absolutely sure that he has sci-pi installed on his computer and in his Python environment. I mean, he's been using it all semester for his classes. So he encounters this problem because the Python mode in processing is kind of a veneer over a Java underbelly. So he gives up on that approach and goes back to Google. The next thing he finds is pyopengl. So pyopengl is one of the Python bindings to opengl. It's like how pyqt is to qt. So he looks up the sample draw spinning cube example in pyopengl. So the imports look friendly. He just has to import the graphics library and then the graphics library utility tools. 
Like with the processing demo, there's one method that executes once on setup. It's really confusing to him because he doesn't know what any of these things are doing. But it only happens once, so that's fine. He can just ignore it for now. Then he looks at the method that is going to execute every single time. And you should not do this, but the method that comes up the most often in really basic opengl tutorials to draw a cube is to draw six quadrilaterals, one for each side. Now, you should not do this if you're going to do any high performance stuff, but it's kind of the simple way of drawing a cube. But to do this, you have to draw four vertexes, one for each corner of each quadrilateral. And this is what the entire code to draw a cube looks like. And then you can create a window using the utility tools. So our undergrad looks at the example. It's like, okay, well, this makes sense. This is what I expected to be drawn. And then he goes off and says, well, I'll just figure out how to draw spheres with this. So he goes to the opengl group and downloads the opengl cheat sheet. Now, the problem with that is that the opengl cheat sheet is not actually an opengl cheat sheet. It's not a single sheet. It is actually a 13-page Adobe PDF document. And the first page in nine-point font is everything just about buffers. Now, keep in mind that this is a computational science student and not a computer science student. He has only had maybe three months at most of programming, and he does not even know what a pointer is. And to be honest, as a Python user, I don't really like to use pointers either, so I can understand his woes. So this makes him upset. He goes back to Google. He looks again. He finds a module called Piglet. So Piglet is also an opengl binding, but it has submodules for windowing in multimedia. And it can produce the same sorts of animations that PyOpengl does. So the undergrad looks at the code. It all seems logical. The imports look logical. The code to actually update his animation and draw a window looks really nice. Like, he can understand what a window is. He can understand having a clock that he has to schedule to update, and he can understand running his app. So this makes him happy. So then he's like, okay, great. I'm going to continue with this. Let's look at the actual code for drawing a cube. And he sees exactly the same thing that he saw in PyOpengl. And now on top of that, there's the scary at symbol because he has not been taught decorators, and he doesn't even want to try to figure out what that is. And again, the actual drawing of the cube is the same. So this makes him unhappy. All right. So the final option he comes across is called vPython. So just in background on vPython, it was created by David Sherer in 2000 to replace CT, which is what they were using to teach computational physics at Carnegie Mellon. It is currently maintained by Bruce Sherwood, who was one of the original developers of CT. And it was designed specifically for helping students learn physics through programming. How it works is that all of the actual OpenGL calls are written in C++, and that gets compiled to a module called C Visual. And then the actual windowing is based on WX Python and sort of just hooks into this compiled Python module. And just as an example of how this is used in modern, in classes that are going on today, this is some vPython for teaching students Newton's second law, F equals MA. 
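As a rough sketch of that kind of classroom program, assuming the classic VPython `visual` module API; the masses, forces and time step below are made-up teaching values, not taken from the actual course material.

```python
from visual import sphere, vector, rate, color

ball = sphere(pos=vector(0, 0, 0), radius=0.2, color=color.red)
ball.velocity = vector(5, 8, 0)                    # launch velocity
mass, g, drag, dt = 0.1, vector(0, -9.81, 0), 0.05, 0.01

while ball.pos.y >= 0:
    rate(100)                                      # at most 100 steps per second
    force = mass * g - drag * ball.velocity        # gravity plus a simple drag term
    ball.velocity = ball.velocity + (force / mass) * dt
    ball.pos = ball.pos + ball.velocity * dt
```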
If you remember back to your high school physics, you might have had a similar lab to this that involved launching beanbags at your classmates. This is a great innovation because it prevents any beanbag launching related injuries. But basically you create a ball, you add a velocity to it, you apply the force of gravity to it, and then you integrate the force of gravity onto the velocity. And then you can also demonstrate the additivity of forces by subtracting a drag force. So just to give you an example, this is what it would actually look like on your screen, and you can see what the effect of drag is. No beanbags hurt in the process. All right. So the undergrad looks up the code for drawing a red spinning cube, and it is five lines. It's super nice. And it all makes sense to him. And on his Windows laptop, it looks like this. So based on that, he creates his visualization. It all looks good on his laptop. And then he uploads it to the physics marking server, which is running Ubuntu. And a few minutes later, he gets an email from his prof while he's still in this euphoric mood that says, there is no Debian package for WX Python 3. Can you please come to my office and show me your animation? And now this is a problem for the undergrad meme because his physics prof is a nice guy, but he tends to go on about how great physics and programming is, and he really just wants to get back on the soccer field and continue his quidditch practice. So that ultimately makes him upset. So what I'm trying to communicate in this user story is that writing really simple graphics libraries that are kind of platform independent is really hard. So what if we had an open GL helper? So the idea here is, V Python established a really nice API. So what if we extract that from the C++ code? But let some other project, some other Python project that has already done a lot of hard work on making open GL cross platform Python hooks do the heavy lifting. And then the final thing we want to do is obscure the windowing. So this is one advantage that Piglet has over PyOpenGL is that they make the windowing really easy. All right. So the first thing that I have to do if I'm going to build this thing is decide which open GL binding I'm going to use. And PyOpenGL has support for a lot more of open GL, and it also currently has support for 3.5, while as Piglet appears to be kind of a dead project right now. And so this is where I made the first fatal flaw in creating this library because I picked Piglet. And the reasons I did that were just number one, it has much better documentation, something to keep in mind when I'm writing my own stuff. And secondly, I was kind of an open GL noob at the start, and the extra app and window libraries make things really easy. All right. So this is the package that I created. It's a helper for Piglet or Piglet helper. It has a sub module called objects which contains geometric primitives, and it has a sub module called utils that has mathematical primitives for doing transformations. And I let Piglet.gl interact with open GL, and then I use Piglet's window and app things to handle the display. All right. So I'm going to do a few just comments on implementation before I just go off and show the animated GIFs. So converting from C++ to Python is not, unfortunately, is not as easy as just doing a search replace on curly braces. There are several things you can do in C++ that you cannot do in Python, for example, having multiple method declarations. 
And the way that I chose to do this and still have the same functionality is to use the get item, set item, and length operators on classes. So just as in the previous slide, I could initialize a vector by passing it three zeros, or sorry, passing it three numbers, passing it nothing, which would initialize it to zero, or passing it another pointer to an array. I can pass it any object which has length greater than two or three, and then assign the first element to x, second element to y, third element to z. Now, the second fatal flaw that I did was decide to go with pep styling on this project. And so this project was built back in 2001 where I think, I don't even think pep 8 was written back then. So that meant that I changed everything slightly, so it doesn't like lower case object names, it also doesn't like single character properties. So these are the types of changes I made. And now, given that I'm making all these changes to the API, how do I make sure things don't break? And since this is a beginner talk, I'm just going to explain how to use continuous integration. So I use Travis CI, which is free as in beer, but not free as in speech. So every time I commit a new change to GitHub, it runs set up pi for Python 2.7 through 3.4, it pilots, and then it runs the unit tests. Now, this provides a unique challenge in this situation, which is that Travis CI runs on a headless server, but open GL requires a display to draw to, even if you just want to load the modules. And this is something that I had trouble with because my naive solution is just, well, let's render to a still image and then we'll just compare the images from previous builds to this current build. However, I'm having trouble with that because it seems like I should be able to use X dummy, but that requires the NVIDIA X driver, and then the NVIDIA X driver doesn't seem to work correctly on headless systems, but I know this should be theoretically possible because other projects have done these things, and if you have done this before, please talk to me afterwards. So my alternative right now is just to mock GL. So in the mock module in Python, there is a useful decorator called patch, and what that allows you to do is that in your unit test, you can just replace any time that an object is called, you replace it with another object. So in box, box will import piglet.gl as GL, and then in my patch, I'm just going to replace GL with my own fake GL, and then my own fake GL module has all the function primitives, but it does essentially nothing. Now, this is kind of cheating in a way, but I don't, I'm not really testing open GL. I'm really just testing that my modules all work when they call each other, and that everything imports correctly. All right. So now that I've talked about some boring implementation details, I'm just going to go over some functionality. So these are the geometric primitives implemented. There's an arrow, sphere, cube, cone, cylinder, ellipsoid, ring, and pyramid. And then there's also things called lights and scenes. So a scene object contains information about the camera position and some details about the default lights. The lights are non-rendable objects that you can stick in the scene. And then the objects can either be drawn into the scene by passing them to the scene as an, or sorry, the scene has a list of objects. You can either add the object to that list, or you can pass the scene to the object's draw method. So just to clarify what I mean, I have a box and a scene. 
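A tiny sketch of those two styles. The class and method names here are taken from the talk's description of PygletHelper rather than from the library itself, so the exact imports and signatures may differ.

```python
# Names as described in the talk; treat them as assumptions, not the real API.
from pyglet_helper.objects import Box, Scene

scene = Scene()
box = Box()

# Style 1: hand the scene to the object's render method.
box.render(scene)

# Style 2: register the object with the scene, then set the scene up.
scene.objects.append(box)
scene.setup()
```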
I can either render the box in the scene by passing scene as an argument to render, or by appending the box to the scene and then setting up the scene. All right. So just to finalize, this is what the freshman's code would look like in Pig What Helper. So I've, in addition to everything else, just created a default scene with a default camera position and some default lights. And you can just create that using VSetup, and it will automatically add any objects you call after that to that scene. You define an update function just as you would with any of the other ones, and then you can run the scene. And I've also added a really useful utility for generating objects. And this is really great if you want to spam your Twitter profile with animated GIFs. And that is what it looks like. All right. So I started this project last year after getting really frustrated about getting visual working with Python 3. The original developers of the Python had the same problem, but they decided to take a different approach, which was, since they do mostly teaching stuff, to switch to a Jupyter Notebooks type environment. So they switched from OpenGL to WebGL, and they created this JavaScript library called Glowscript, which allows you to use the Python syntax in WebGL. And then they created this website called Glowscript.org, where you can type in Python code and have it render on the Web. However, just like processing in Python mode, this doesn't give you access to everything else in Python. And then the other module they created was called the Python Jupyter. So they use 0MQ to actually do the communication and to feed some JSON back into Jupyter to render in the Jupyter Notebook. And they're using the actual Glowscript JavaScript library that they've hosted on Glowscript.org to actually render this. And this is what it looks like. It's not quite the same because you have to define these scene variables, but again, it looks very, very similar to VPython Classic Edition. All right. So for future work on Piglet Helper, so I made two fatal flaws, as I said. The first one is that I made styling changes to match modern styling. And I think this was a really bad idea because the people who use this library all operate out of textbooks, and textbooks don't update as fast as the Internet. So it doesn't make sense to make a library that doesn't match the old API. So I think my first step is going to be to undo that. I also haven't implemented 3D text, which is something that is available in VPython. And then finally, I think I need to switch off of Piglet because it does not have support for Python 3.5. And now, a few days before the conference, it came to my attention that Vispy probably could do a very similar thing. And a year ago, when I started this, I looked into Vispy, and they had not yet implemented some objects trying, but now they have. So if you're looking for something that is more fully featured, that is probably a lot more stable. And another benefit is that it implements, it works with several different backends. So it works with PyQt4, PyQt5, and Piglet and PyOpenGL with a little bit of work. You should really check out Vispy. So this is the sample rotating cube code that I have found that is the simplest, I think, for Vispy. And as you can see, it's not quite as dead easy as you might want it to be, but it's still very, very nice. So that's what I would suggest you use instead of my library. 
But just as a final experiment, I really want to try to communicate what sort of wonderful things can happen when you lower the barriers of entry to 3D drawing, just like what processing has been able to do. Because a lot of us here in the audience are experts in a certain domain, but we probably aren't experts in OpenGL, but we are still humans, and humans have eyeballs, and humans like looking at things. So just as a crazy experiment, a few days before the conference, I decided what if I feed Piglet Helper into the distributed evolutionary algorithms project and use scikit image to judge the similarity of images generated. So the idea is I'm going to try to generate the Python logo using Piglet Helper attached to an evolutionary algorithm. And so this is what you see. It's not the greatest because of the artificial limitations I put onto the search space for the evolutionary algorithm, but you can see that it eventually learns from just a mix of chaos to put yellow spheres in the bottom half and blue spheres in the top half. Anyway, thank you. Next question. So you said Piglet is kind of a dead project. Have you got in touch with the developers or saw the mailing list because there's appearing to be some activity around Piglet now? Oh, no, I hadn't. But that's very exciting to hear. Did you also have a look at game engines because they might provide similar functionalities or maybe sometimes an easier package? Can you repeat what that? Did you have a look at game engines? There are some game engines. Oh, game engines. Yeah, I looked into Pygame and it had a lot of cruft that I wasn't particularly interested in for game development, but perhaps I should look at it again. Anymore? No, that's not. Hi, thanks for the talk. It's really interesting. You've used the kind of college university student as the target. I wonder what are your thoughts for creating 3D graphics for somebody who might be seven years old? So the reason that I use a university undergraduate student is that in Canada, where I'm from, we have really terrible computer science education. So even though students, I believe personally that students who are seven who can read English would totally be able to use V-Python, typically students don't encounter their first programming class until their first year of university. One more question to you, Haiz Bither. What do you think about processing Python mode? So I should have just pressed the home button, but whatever, my finger will get a workout. So the first one that I went over was processing in Python mode. And as I said, this didn't really work for the application because you aren't able to access the full Python stack. And I've looked into trying to incorporate the Python stack into processing in Python mode, and the thing is that I hate Java way too much to invest any more time in that. Thank you, Catherine. Thank you all.
Catherine Holloway - Simplifying Computer Art in Python The Processing project demonstrated that computer art can attract a wider audience to programming. Python has a robust catalog of libraries, including two interfaces to OpenGL. However, none of these libraries replicate Processing’s simplicity when drawing to the screen. I will present my solution to this problem: a re- implementation of VPython’s visual module purely in python called PygletHelper. ----- Processing is a programming language originally developed by the MIT media lab with the goal of allowing artists, educators, and many others develop striking computer generated or assisted projects without requiring deep knowledge of software engineering or computer graphics. Like Processing, Python has become a favourite language of users from diverse backgrounds, such as web development, education, and science. Unlike Processing, python lacks a simple and easy to use library for drawing shapes. Python’s existing libraries for scientific computing and data analysis could be made even more awesome when combined with a simple drawing library. VPython contains a module called visual that established a simple API and convention for drawing shapes, however it was written in C++, prior to the development of pyglet, and thus is not entirely cross- platform. In this talk, I will demonstrate my solution to this problem: a re-implementation of visual purely in Python called PygletHelper. Pyglet, an existing python library, provides a python interface to OpenGL. PygletHelper is built on pyglet but obscures all of the OpenGL calls, such that the user can draw simple geometric shapes to the screen and animate them without needing to know about computer graphics terminology, memory usage, or C data types. I will also show some need visualizations of science and music in my talk, as well as the graphical glitches encountered implementing the library.
10.5446/21117 (DOI)
Hi, welcome in this last session the pie charm room for today our first speaker is Christian thriving He's gonna talk about getting control of your workflows with airflow So hi welcome to my talk yeah getting control of your workflows with airflow I'm president reading and I'm working as a software developer at Blue yander So we have the booth also here if you're interested later on just drop by and ask any questions So Imagine the following scenario which I know personally from my daily life You are at a data-driven company Each night you get data from your customers and this data wants to be processed. That's how you make money Processing happens in separate steps So for example, you have to take care that this data is wallet you have to book that data You apply some machine learning steps. You have to take decisions based on the results of the machine learning And if errors happen, then you need to get an overview of what happened. Why did it happen? When did it happen? especially since most of this Stuff is running at night and you need to see it next morning What possibly went wrong? And as you already might have guessed we have tight time schedule so time does matter and processing time What options do you have to work in there to to realize such a scenario? The first thing that comes to mind of most developers is doing it with gron and we also had many projects where we started with that It's a great way to start it works out of the box But you only have time triggers. You cannot say this gron job depends on that gron job Please start afterwards. You just say at some time Start so for example at 22 o'clock book your data at midnight do the predict run and at 2 o'clock do the decide run Besides that the error handling also is hard. You always have to search for the correct log files when something went wrong So now as I said we have a tight time schedule and you would like to get Finished earlier. So as we see each of these steps roughly runs around one and a half hours So why not compressing that so we could do better We could here start the predict at 20 a half before midnight and at one o'clock start to decide works most of the time but sometimes your database is slow Sometimes you have other issues and then one run takes longer longer here the book data run Maybe takes 10 minutes longer. The data is not there the predict run fails the decision run fails You're completely run fails, which is very bad when you discover it the next morning because your customer cannot get the data He wants so that's an issue with gron because of that when we used the gron mechanism We always had buffers and yeah, if the schedule was not too tight that worked fine But what about the next step our customer sends more data the processing time gets longer And we need to find better solutions. Why not writing our own tool? 
It's so simple. We just have to check that the first run stopped and that the next run will start. That cannot be that hard, and the start is very easy. In multiple projects we did that, and it worked for the first steps, but afterwards you soon see the limits. You maybe have concurrency, that is multiple tasks running at once, you need to know why and which task failed. You might not only want time-based triggers, but also want to trigger things manually afterwards. You might want a UI or an external endpoint. At that point you have to take a decision: either you accept the limits, which is fine, or your own workflow implementation gets much more complex than you thought initially. So you are stuck. We were in that situation as well; we wanted to harmonize all these workflow tools we had in our different projects. So we had a look at several open source workflow implementations. There are many interesting tools with many different properties. For example, we also had a look at Spotify's Luigi, but this was more an HDFS-based tool, which was not in our technology stack. And also several other tools. In the end we decided for Airflow, which is an open source project initiated by Airbnb, therefore the name Airflow. Why did we decide for that? Well, the tool itself is written in Python. We know that, we like it. And the one thing that was really cool is that the workflows are defined in Python code. So they are not sitting in some JSON files, not sitting in some database rows, but really each workflow is Python code. You can put it in your version control system, you get all the versioning, and that's a really good way of managing it. It has most of the features I said you will need once you run into the limitations. So you can have a look at the present and at the past runs. It has logging features. It's great that it's extensible, so you can write your own extensions in Python code and plug them in without having to modify the open source code, and it detects these plugins; I'll tell more about that later. It is under active development. At the moment it's an Apache incubator project and people are reacting to the pull requests, so there's lots of traffic in there and you can see that it moves forward. It has a nice UI, which I will show you. You can define your own REST interface, and it's relatively lightweight: you have two processes on a server and you need a database to store the information. How does a workflow look like? This is the Python code I talked about. Mainly, each workflow is a DAG, a directed acyclic graph. You instantiate it and you give some parameters, like when is the first run and what schedule do you have; you can give that as cron expressions or as time deltas. And you define your workflow steps as operators. I'll tell more about operators later, but here we have three steps: we are booking the data, we are predicting, and we take the decision. The connection between these steps you do via set_upstream. So you say: before the predict happens, the book data needs to happen, and before the decide happens, the predict needs to happen. The graphic doesn't show up here, but okay, let's go on to the next, more complex setup.
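A minimal sketch of the linear workflow just described, written against the Airflow API of that era. The exact operator import paths vary between Airflow versions, and the callables, owner and schedule below are placeholders.

```python
from datetime import timedelta, datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def placeholder():
    """Stand-in for the real work (booking, predicting, deciding)."""
    pass


default_args = {"owner": "data-team", "retries": 1,
                "retry_delay": timedelta(minutes=10)}

dag = DAG("daily_processing",
          default_args=default_args,
          start_date=datetime(2016, 7, 17),
          schedule_interval="0 22 * * *")      # cron expression; a timedelta works too

book_data = PythonOperator(task_id="book_data", python_callable=placeholder, dag=dag)
predict = PythonOperator(task_id="predict", python_callable=placeholder, dag=dag)
decide = PythonOperator(task_id="decide", python_callable=placeholder, dag=dag)

# predict only starts once book_data has finished, decide only after predict.
predict.set_upstream(book_data)
decide.set_upstream(predict)
```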
So maybe you want to have a fan in fan out We have more data and the brick predictor on takes longer We want to parallelize that and maybe we say we do some prediction for German customers and some prediction for the UK locations So by that I can say if you predict Germany predict UK Both depend on the booking of the data and the decision depends on both of them So it's very nicely to describe and it will give you that graph directly by that you can build arbitrary complex workflows you also have the possibility for decisions and for switches, but at least for us we did not need them up to now So most of our workflows are quite linear. I'll just with a few branches in there So how does the nice you I look like I already promised you You have here an overview where you see what workflows do you have? What is the schedule of them and also what are the statuses? Most recently so it's a little bit Small to read but you have here saying which tasks how many tasks have run correctly How many tasks are running currently also you can see what are erroneous and what are currently up for retry? You can run each view each you can have a look at each deck run Explicitly so you see here the sequence this is color coded also that you can see when which step was successful Which is currently running erroneous and so forth. So this is a run that did not start So there's just a scheduled but starting was not done up to now The tree view shows you an overview of all the runs so here you see each Each column is a run day. So you see for each day here these three days went correctly all green and The last run currently ahead an issue here within the second step. It's yellow. This means it's up for retry So you get a nice overview on how did it behave in the past and currently? also which helps and that is a runtime view which for which you can see for example performance degradation where we see here we have three runs and These colors are all different tasks. So let's say this is the booking data step The blue one this is the prediction step and this is the decision step and you see one behave the same and The other two changed over time so very useful for seeing which of the steps might have taken longer You can see each run also as a gun charge to see when was each step happening And You have a log view which Which really is useful where you can output things like unfortunately. 
It's a little bit smaller here It says this is the decision task has started a job and the back end system and the job ID is 17 and the stator and the next The next iteration it asked what is the status now and then we see it is finished But that you can see how each task was processed Now what are the building blocks of your workflows these are operators and there are already many operators delivered in airflow as an example You can operate you can start things on the bash You can start things with HTTP request you can execute statements on databases You can write directly Python code which is executed or you can send mail So and this is just a few examples there are more in the in airflow delivered They're not only these operators, but also sensors sensors are Our steps and your workflow that wait for things so an HTTP sensor could for example always Query and URL and ask whether it is finished or what is the status on that and based on that it will wait Or it will proceed in the workflow in the same way an HDFS sensor could check for files on the file system And then SQL sensor could check for values in the database Many things already you can do with these operators, but then might be situations when you need more For example for us. We had an asynchronous processing in our back end systems So we had here our airflow system. We had our back end system For example the machine learning system for the predictions. We wanted to start a job So we trigger and we trigger an HTTP request there we get back a job ID and Then we let it run for five minutes half an hour or so And we constantly ask whether it is finished or not and when it is finished we can start the next job This would be possible to do already with standard methods of airflow So we could use the simple HTTP operator to start it and the sensor as I described to wait until it is finished This works, but it has the disadvantage that you don't see directly how long did it take So you remember the last view with the runtimes I would like to see how long did my decision take and therefore I wanted this Step decide has a certain length the length of this is the length that it took on the back end system So this is possible. We can do this with a new operator. I Want to explain each line in detail also you can find afterwards This as a complete airflow example plug-in on a GitHub people which you can see afterwards and can check for each line So we have an HTTP collection defined We have some some endpoint decide that we can trigger that and then deliver as a job ID and we have a job status We can ask when we have to drop by the water status of that so within the execution we run the post on the decide to get back the job ID then we wait for the job with the job ID and Once the status is finished we are done and then within the airflow database We know how long did this decision step take Now how do you get these operators into your system as I said We don't want to modify the code the airflow code directly, but we can do this in a Python package We can we can say we have this plug-in that has some own operators that has some flask blueprints and Lay that in our file system and in the airflow configuration We just can say your plugin is here and your workflow definitions are there and on the start of airflow It will detect them automatically Also that plug-in Is defined in Python as you can see here. We have the airflow plug-in manager and you just say Inherit from that airflow plug-in we have our euro Python plug-in. 
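The following is a compressed sketch of such an operator plus its plugin registration. It is not the actual Blue Yonder plugin (that one is linked on GitHub at the end of the talk): the backend URL, endpoints and polling details are placeholders, it uses requests directly instead of Airflow's HTTP hooks for brevity, and the plugin import paths may differ between Airflow versions.

```python
import time

import requests
from airflow.models import BaseOperator
from airflow.plugins_manager import AirflowPlugin
from flask import Blueprint

BACKEND = "http://backend.example.com"          # placeholder backend URL


class StartAndWaitOperator(BaseOperator):
    """Start an asynchronous backend job and block until it reports 'finished'.

    Because the operator only returns when the job is done, the task duration
    shown in the Airflow UI matches the runtime of the backend job.
    """

    def __init__(self, endpoint, poll_interval=60, *args, **kwargs):
        super(StartAndWaitOperator, self).__init__(*args, **kwargs)
        self.endpoint = endpoint
        self.poll_interval = poll_interval

    def execute(self, context):
        # Trigger the job and remember its id.
        job_id = requests.post("%s/%s" % (BACKEND, self.endpoint)).json()["job_id"]
        # Poll until the backend reports completion.
        while True:
            status = requests.get("%s/jobs/%s" % (BACKEND, job_id)).json()["status"]
            if status == "finished":
                return
            time.sleep(self.poll_interval)


# A blueprint can add extra endpoints (e.g. a REST trigger) to the Airflow webserver.
rest_blueprint = Blueprint("rest_api", __name__, url_prefix="/trigger")


class EuroPythonPlugin(AirflowPlugin):
    name = "europython_plugin"
    operators = [StartAndWaitOperator]          # further operators would be listed here
    flask_blueprints = [rest_blueprint]
```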
This has three operators. I need and also a blueprint What is it about that blueprint why do you need that? We had The requirement we wanted to have an endpoint to talk and arrest a style with our airflow system So that we can also programmatically say I want manually to start a trigger. I want to know is a Daggeron finished or not? this functionality was not there in An airflow, but you can write it as a flask blueprint. You can define that endpoints and It is detected automatically and added within the web server also this you will see in the example repo How would such a rest endpoint look like we have here the airflow server running on part 8080 We have defined this endpoint trigger and we say we give the name of the workflow Which is daily processing and we get back the name of the workflow and a run ID Which we can use afterwards to ask for the status. So this works fine Now what happens inside of airflow it works with two processes it Had at least two processes I should say we have a scheduling process that takes care When each job should run and we have a web server that gives to you I and all the other blueprints Also, you need database several databases are supported. We are using at our company the postgres and sq light As you light currently has a restriction that you cannot run parallel tasks on them But we are using the sq light more for the development testing stuff So this is fine and for production you can use a postgres and there you don't have that limitation You can also look how do you want your tasks to be executed? We are using most of the time just HTTP requests. We are saying we trigger a task in the backend system We're waiting until it is finished. So the airflow system itself. There's no high workload on that So we are happy that this runs within one schedule within the scheduler process directly Or one we want to have multiple tasks in parallel. We work with sub processes But it's also possible if you trigger the stuff wire bash scripts or similar things that you want more power behind the executor nodes itself And to do that you can use a salary which is a framework with multiple worker nodes and you can use that There is already a connection from airflow to salary Yeah, how we use it most of the things are already mentioned in the meantime We used the automatic schedules and we have manual triggers. We use one airflow instance per system We manage so we also had how do we that connection? Do we have one central company airflow instance or one airflow instance per system and for us? It was easier to do it that way Databases we use postgres and sq light the executor select weight and also we are contributing to airflow This is really good. That works fine this external triggers that you can trigger them manually They were not there one year ago and we needed them definitely before using airflow So we wrote a pull request that was also Worked with and now within these two pull requests. This is an airflow and we are also have some Necessary functionality for the plugin detection. So we also open pull request there and there's an active communication with the community With all these good things about airflow there are at least a few challenges I want to make you aware of because these were things we struggled a little bit and also with the project teams using airflow At our side. This is has to do with how is scheduling handled and how is the start time interpret interpreted? so scheduling There are two dates that are important for that. 
It's the start date This means when did the processing of this task of this workflow start on the server? So that's quite easy. It's the time of the server But there's also an execution date that is quite prominently shown on the UI and that sometimes shows strange values These values are consistent and they are explainable, but they are not always obvious The reason as the history from airflow. So this was used in ETL scenarios. So this extract transform load and this means that for eat they wanted to process daily data which was which was coming in the whole day long So let's say on 19th of July the whole day data came in and then you wanted to process that data for the 19th of July And when can you process that data? You can process it only after the 19th, which is the 20th So let's say today. So today this task of data processing runs and what is the execution date? It's the 19th So it's always one iteration back. This is because they said originally well, this is more a description This is the data from the 19th. Therefore, it's the execution date That's fine when you know that it does not scare you that you think the system is doing why things but that's consistent But yeah, you have to get used to that We have some workflow starting in a weekly schedule Which means when I trigger that now it gives me the start date of Monday the week before Also that it's consistent, but you need to get used to that Then we have to start date you might remember for 10 minutes ago that we give a start date for each workflow and If the workflow is scheduled automatically and you start the server it will know That it has to fill up tasks. So when we say A start date is today 20th of July No, we start the server at 20th of July the scheduler and a start date We have given the 17th of July then it will detect that there are some runs missing and it will fill these runs automatically So it will first trigger the run for this with execution dates 17th Then execution date 18th and just a regular run then the 19th will be processed at the current at the correct point of time You need to check whether this is applicable for you. So when you have these things I would need really to process this data. That's fine when you have more the thing I want to trigger something in the back end and I need to trigger it just once because this back end job will take care of All cleaning up that stuff then this is a little bit strange and can lead to issues when you trigger it too much But you can work around with that when you give the correct start time already So you can determine that in code you can determine that in a variable. There are several options I won't discuss them in detail, but this is the thing you should have you should have in mind when you do that When you wonder why does this backfill happen? It's possible to handle that but you need to know the concept behind that If you have some further questions, maybe we can discuss afterwards Okay, and that's it also from my presentation I give you here that's the incubator project for airflow It has a nice documentation which is here also very useful is the common pitfalls page in the airflow wiki They're also the stuff with the execution date is explained in more detail and The plug-in which I have shown you parts of you can find here at our blue yonder repo You can download that you have the steps in the read me on how to use that in your air for instance So that's it from my site any questions? 
Is it possible to manage the workflows in this GUI, or is it just to display things that you wrote in the code? The workflow definitions themselves you do in code. You can view that code from the GUI, but you have to change it in your code editor. Okay, because, well, in our firm we've got a homemade scheduler with, I think, 100,000 tasks inside. Is Airflow scalable — do you use this amount of tasks in your system? No, we don't have that high a volume in our system. For us it's more that we have, per system, these nightly runs that have several tasks, but not thousands or millions of them. I've seen on the documentation page from Airbnb that their stacks seem to be much bigger, so it would be worth asking them what the limit is, but we have not reached it up to now. Hi. I would like to ask about the execution date and the run date. Is it possible to configure it? Because we have a similar example, where you collect data for the last month and you want to run it, for example, 15 days later, or in the opposite direction, you want to collect data for the next month and run it 50 days before that. Can you configure the delay, or maybe even postpone it — if you see, okay, I will run tomorrow, but if I have no data tomorrow, I will retry the day after tomorrow? Well, first of all, the logic is not configurable; this is in the scheduling code itself. Regarding the stuff running two weeks after or two weeks from now, I have no quick answer to that — maybe we could discuss it afterwards. I think you can do many things with the scheduling, because the scheduling just determines when it runs; you also have the possibility to schedule a run each day and, as the first task of the run, decide whether you really want to run or not. So that might be a first iteration, where you implement the more complex logic in your first task, but maybe there are also other options. Okay, thank you. Let's have one more question. Did you evaluate other tools when you decided about Airflow, and why did you decide for Airflow? For example, we had a look at Luigi, but that was based on an HDFS stack, which we did not run, so it was too heavyweight just to set up a workflow system. We also had a look at several OpenStack implementations, but their main focus was on doing heavy lifting with execution processes and how these are distributed. Since we had very lightweight processes but needed more UI features and more possibilities to define our own operators, that also just was not the main focus there. When you see these two things, but the main focus is a different thing, then it's good to have a look at other tools. Because in theory Jenkins also does a lot of things with some plugins, so if you already have Jenkins, you have to convince your team to use something else: what could be one thing that you can do that otherwise you cannot do? I mean, Jenkins is great — we also use Jenkins for our integration testing and for scheduling our unit tests, but not for the daily productive runs. Okay. Let's give a big hand to Christian.
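Picking up one idea from the answers above — schedule a run every day and let the first task decide whether the rest of the run should actually happen — here is a hedged sketch of how that could look, using Airflow's ShortCircuitOperator. The task names and the readiness check are invented, and import paths differ between Airflow versions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import ShortCircuitOperator
from airflow.operators.bash_operator import BashOperator

dag = DAG('guarded_run', start_date=datetime(2016, 7, 1), schedule_interval='@daily')


def data_is_ready(**context):
    # Replace with a real check, e.g. polling the backend for yesterday's data.
    # Returning False makes ShortCircuitOperator skip all downstream tasks for
    # this particular run.
    return True


check = ShortCircuitOperator(
    task_id='check_data_ready',
    python_callable=data_is_ready,
    provide_context=True,
    dag=dag,
)

process = BashOperator(task_id='process', bash_command='echo processing', dag=dag)
process.set_upstream(check)
```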
Christian Trebing - Get in control of your workflows with Airflow Airflow is an open source Python package from Airbnb to control your workflows. This talk will explain the concepts behind Airflow, demonstrating how to define your own workflows in Python code and how to extend the functionality with new task operators and UI blueprints by developing your own plugins. You'll also get to hear about our experiences at Blue Yonder, using this tool in real-world scenarios. ----- Whenever you work with data, sooner or later you stumble across the definition of your workflows. At what point should you process your customer's data? What subsequent steps are necessary? And what went wrong with your data processing last Saturday night? At Blue Yonder we use Airflow, an open source Python package from Airbnb to solve these problems. It can be extended with new functionality by developing plugins in Python, without the need to fork the repo. With Airflow, we define workflows as directed acyclic graphs and get a shiny UI for free. Airflow comes with some task operators which can be used out of the box to complete certain tasks. For more specific cases, tasks can be developed by the end user. Best of all: even the configuration is done completely in Python!
10.5446/21118 (DOI)
So welcome, Christie and Michael. Hello, everybody. I'm Christie Wilson. I'm a senior developer at Demonware, and I'm the team lead of the test tools team. And I'm Michael. I'm also a developer at Demonware, where I focus mainly on test automation and general quality stuff. And we're also from Canada, by the way. So first, a bit about us. We're both from Demonware. We work in the video game industry; we do online services for those video games. And if you want to learn more, come see us in the vendor area afterwards. Today we're going to take you on a journey. It is one of the many tales of Commander McFleffel and the quest for quality. Today's tale: once upon a system test. There are many stops along the Commander's journey. The tale starts in 2011 at Demonware, where testing needed improvement. Before the Commander could improve testing, the Commander had to learn what testing was. With that newfound understanding, the Commander could understand some tomes containing best practices for system testing. And the Commander picked up a couple of allies along the way: pytest and docker-py. Before we go, we want to give you some practical takeaways that hopefully you can use right away, tailored specifically for people who do more development work or more operations. So, on with the story. We're going to start the Commander's journey by going over the current state of testing at Demonware. Back in 2011, we had just a gigantic monolithic platform. I mentioned that we did services for online games. What I didn't mention was that it's actually one gigantic service. And in order to test our features in this gigantic service, we would do things like test in production or test manually in our local development environments. And we even tried to ease the burden of spinning up new test environments by making really complicated bash scripts that were really unmaintainable. Now it's 2016 and Demonware has caught up with the microservice craze. So instead of having one monolith, we have a whole bunch of microservices with complicated dependencies between them. The sad thing is it turns out it's actually easier to test the monolith than it is to test the microservices. So to deal with this additional complexity, we now have a team dedicated to test tooling. And instead of relying on just unit tests, we have unit, integration and system tests. So to go on with the Commander's journey, next they had to find out what testing actually was. But we actually found it very difficult to define testing in that way, so instead we're going to focus on why we test. So why do we test? First and foremost, we test in order to increase our own confidence that our software actually does what we expect it to do. When we write a test, we're actually codifying the intended behavior of our application in code, so we can go back to it, and it's a good way to see, as our software evolves, how it's supposed to work. And also, as we continue to run these tests, we can easily catch bugs that maybe are reintroduced as time goes on. We also want to clear up some common misconceptions about testing. Some people think that we test in order to find all of the bugs in our software, but that's actually pretty much impossible. No matter what you do, there are going to be some bugs that you don't find. The only bugs you're going to find are the ones that you already know how to look for. Yeah. So to illustrate this, you see behind the Commander, there is a pink bug with a bleeding heart.
For those of you who don't know, that's the heart bleed bug. And as Christy was saying, it's unlikely that we'll actually be able to catch this bug before it affects us, just because when we write our tests, we already have the bugs in mind. And as you can see, it's conveniently placed behind the Commander's case. Another misconception about testing is that testing improves the quality of the software. The tests themselves don't actually improve the quality of the software. By the time you run the test, the software already has whatever bugs it's going to have. If you want higher quality software, the place to do that is during the requirements gathering or the design. But testing gives you information about the quality of your software. And software that isn't tested is usually viewed as lower quality because there's less information available about it. So time for example. Suppose we have a simple service, let's say it's a cat matchmaking service to go with the cat theme. And the goal of the service is to help cats find other cats to play video games with. So the cats themselves will talk with the service using a client library and the service itself will store state inside of a database. So how should we test this? As we alluded to before, ideally you have at least these three types of tests. Unit tests, integration tests, and system tests. So unit tests. We're going to use unit tests to provide almost 100% coverage of the library and the service itself. Unit tests are the fastest tests to run. They're the easiest to write and they're the easiest to maintain. So we're going to cover pretty much everything with unit tests. We're going to shoot for 100% coverage. We're probably not going to make it because it's not really reasonable, but we're still going to go for it. We also have some integration tests. In this case they will test the interaction between the service and the database because up until now we've been testing each of the components of our service in isolation. And now the system tests, which is the main thing that we're here to talk about today. So system tests test the entire system from the perspective of the end user. They're the most valuable tests because they actually use your system the way the user does and they're the most likely to find bugs. On the other hand, they're the hardest to write and the most complicated to run and the slowest because of all the setup that's required. So for our cat matchmaking service, we're going to rely mostly on the unit tests and the integration tests to do our coverage. We test all of our tiny, well-factor components with unit tests. We cover the gaps between the service and the database with the integration tests and then we add just the sprinkling of system tests, just a couple happy path tests and maybe a few error cases. So with the commander's new found knowledge of testing, they were finally able to decipher those ancient tomes that they just found laying around. The tomes were riddled with phrases like ship it and dock or dock her. While the commander was taken aback by this arcane terminology, our intrepid hero carried on anyway and in the process learned about some best practices for system testing. So the best practices. The first one is that you should be giving your tests a fresh test environment, like whenever they run. This will help avoid dependencies between your tests. So for example, it should not matter what any individual test does because you get a new environment for each test. 
And as we'll see in a bit, if you have a dockerized environment, it makes it even easier to achieve this ideal of having a fresh test environment. It's also important to make sure that your tests can easily run both on your build servers and locally. So you want them to be on the build servers so that the continuous integration is making sure that they work over time, but you have to make sure that if there's a problem and somebody needs to debug something, they can really easily run the test locally as well. Another important detail is to restrict the environments that you support. If you start using docker, you might be under the impression that docker runs the same way everywhere, but it actually runs very differently, say if you're using Ubuntu or if you're using something like the docker beta for Windows, it actually behaves quite differently. And then if you allow people to use all of those environments, you end up supporting a lot of obscure problems that are just specific to their environments. So some more best practices. As I mentioned, your test should be running on a fresh state, but they should also be cleaning up after themselves, right? Because for everything that your test leaves running after it's completed, it just puts extra burden on the person actually doing the testing and will probably make them less likely to run your tests in the future or want to run your tests. Additionally, your tests should both fail fast and informatively in order to reduce the time it takes to identify a problem and also to react to it, overall tightening the dev cycle. Just a quick note about Glue Code. If you're writing really well-factored tiny bits of functionality that do one thing and do one thing well, at some point you're going to have to bring all those things together somewhere. We often refer to this as Glue Code. So this example code here is just using a bunch of other modules and calling into them. If you've written unit tests before, you know that if you want a unit test this, you have to create a whole bunch of mock objects, and then you have to sort of model these complicated dependencies between them. The test that results from that is often very hard to write, it's really hard to maintain, and it doesn't really add anything. So for this kind of code, we recommend skipping the unit tests altogether and just covering it with system tests. So as the commander went along their journey some more, they came across two allies that promised to help make system testing a lot easier to do. The first ally was PyTest. So for those of you who don't know, PyTest is an alternative Python testing library. This is in contrast to the built-in unit test library in the Python standard library. You'll see that when you write tests with PyTest, you'll find that there's less code overall, they're very minimal, but also PyTest comes with a lot more features built in by default. However, they are optional, so you can use them whenever you'd like to or if you'd like to. The main thing we'll be talking about today is PyTest fixtures. In PyTest, a fixture is simply a nice way of defining some setup and some tear-down logic for some state that your test requires. PyTest will ensure that the setup and tear-down are called in that order for each of your tests, which is very important. And as we'll see, when you system test, you generally need to set up a lot of states, right, because you might have a really complicated application that you're testing. So time for example. 
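In code, the kind of fixture being described looks roughly like this; the fixture and test names are invented for illustration.

```python
import pytest


@pytest.fixture  # by default, set up and torn down once per test
def cat_database():
    db = {'cats': []}          # pretend this is expensive setup
    yield db                   # the test runs here, with db injected as an argument
    db.clear()                 # teardown: leave nothing behind


def test_register_cat(cat_database):
    cat_database['cats'].append('Commander McFleffel')
    assert len(cat_database['cats']) == 1
```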
So on the left here, we have two green boxes. These are the setup and tear-down for the fixture, and on the right, there's a test. I mentioned that PyTest will make sure that setup is run before your test and tear-down after your test. And by default, it'll actually do this for every single test you write, so it makes it very easily to achieve that clean state ideal. You can also change when PyTest will call setup and tear-down. So in this example with the yellow fixture, the setup being called before each test is actually called once before any of your tests are run, and tear-down is also called once after all your tests are run. And you can even actually combine these two together to create a more complicated setup if your application requires it. And now a little bit about Docker. At Demonware, our services are fairly hard to set up and run, so to make this easier, we put them into Docker containers. When we started to write tests that use these containers, at first we wrote complicated batch scripts that did the setup and the tear-down, but it wasn't very maintainable. So then the commander found their next ally, DockerPy. DockerPy is a Python library for using Docker. The interface, however, has a one-to-one mapping with the REST interface, so it's a little bit clunky, and I'll demonstrate what that looks like. So in pseudo code, I'm going to show you some code that you would use with DockerPy to create a client object, pull an image, create a container, start a container, and then remove a container. If you've used the Docker command line at all, you would know that steps two through four are usually just the Docker run command, but you don't get that same convenience with DockerPy. You have to be more explicit. So first, we're going to create the Docker client object. Also, if you're interested in using any of this code, there's a link at the bottom of all of our slides that goes to a GitHub repo that has all the example code in it, so especially when the examples get a bit longer, if you actually want to take a look at it in more detail, just go to that URL. So with this example code, right off the bat, you can see that this would only work on a system that has Unix sockets, so restricting the environments becomes pretty important. The other caveat is that we're passing this flag to automatically detect the version of the server so that we don't have to keep the client and server in sync. Next, we're going to pull the image that we want to run. On the Docker command line, if you don't specify a tag, it'll default to the latest. The DockerPy does not do this, and instead, it will actually pull the entire repository, so you have to be explicit about the tag you want. Another caveat is that the DockerPy will often not raise exceptions in cases that you think it would. You think if it failed to pull the image, it would raise an exception, but actually you have to parse that out of the response yourself, so that's something that's important to be aware of. Then we're going to create a container. The important detail here is that we're adding a special label to it, so what we do with our tests is we add the same label to all the containers that we start in our test, and then we can do some fancy things like dumping the logs from all the containers after the tests are over. Then we start the container, and then when we're done with it, we stop it and remove it. So, time for more concrete example of DockerPy and PyTest working together. 
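The steps just walked through, written out as a runnable sketch. This uses the docker-py 1.x low-level client that the talk is describing (newer releases of the library expose a different, higher-level API via docker.from_env()); the image, tag and label are arbitrary examples, not the ones from the talk.

```python
import docker

# 1. Create the client. This assumes a Unix socket, which is one reason to
#    restrict the environments you support. version='auto' keeps the client
#    and server API versions in sync.
client = docker.Client(base_url='unix://var/run/docker.sock', version='auto')

# 2. Pull the image. Be explicit about the tag, otherwise docker-py pulls the
#    whole repository. Note that failures may appear in the returned output
#    rather than being raised as exceptions, so check it yourself.
client.pull('redis', tag='3.2')

# 3. Create the container, labelling it so that every container started by the
#    tests can be found (and its logs dumped) later.
container = client.create_container(image='redis:3.2',
                                    labels={'my-system-test': 'true'})

# 4. Start it and run the test against it...
client.start(container=container['Id'])

# 5. ...then clean up afterwards.
client.stop(container['Id'])
client.remove_container(container['Id'])
```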
I've replaced the generic setup and teardown in the green boxes with create container and delete container. An even more concrete example: suppose that we have a web service, indicated by the yellow box here, that has no state by itself. It stores its state in a database somewhere, and we want to test that. So we'll spin it up once, because it has no state, but it does store its state inside of two database containers, Redis and MySQL. So we're going to have a second set of fixtures, which get set up and torn down for each individual test, and that will give us new databases each time. Here's an example of a simple pytest fixture. This does what Christie was describing earlier: it creates the Docker client, starts the container, and then it tears down the container at the end. The main thing to note here, if you can see it — this is also in our repo — is the yield. In the yield, we are actually returning the IP address of the newly started container, and it might not be apparent here, but we are able to use that IP address inside the test that uses this fixture. pytest also has a very elaborate hook system, which lets you modify the default behavior of pytest, and we actually use this to dump the logs of all of our containers at the end of the test run. This particular hook is the log report hook, which is executed whenever pytest wants to dump the test report somewhere. In the event that the test run has failed, we go through each of the containers that have our special label, and we dump out all of the logs from them to standard output. And we were pretty impressed by this. So if you've used Docker at all, you might be wondering about Docker Compose. Could you use Docker Compose instead? It seems like it gives you very similar functionality to what we're doing. So yes, you can, and it works really well, especially if you want to use exactly the same setup for every single test that you're running. If the cluster of services you're running for each test is the same, Docker Compose makes a lot of sense. If it's dynamic — if you're doing things like changing the volumes that are mounted or changing the port mapping or doing anything more complicated — then something like docker-py makes a bit more sense. If you do decide to use Docker Compose, it still fits in really well with pytest fixtures. You can have a fixture that does the docker-compose up and then does the docker-compose down. And then you can also use docker-py to inspect some of the containers and get information out of them if you need it. So this is another example of what that would look like: this is a fixture that does a docker-compose up, then yields the IP address of one of the containers that started, and then tears down the cluster. Again, the example code is up in our repo if you're interested in using it. We also encountered a few important gotchas along the way. One of them is that Docker has no notion of a service actually being able to receive requests, so sometimes tests will fail because the service in the container is actually still starting. You can get around this by having an executable inside all your containers that you can call from the outside that says whether the service is ready for requests. And you can use backoffs; there are a couple of Python libraries, backoff and retry, which make this really easy. It's also important to make sure that your containers start up as quickly as possible — something that we've kind of learned the hard way.
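A sketch of the fixture-plus-hook pattern described above — not the code from the example repo. The image, label and fixture names are invented, and the docker-py calls use the 1.x low-level API.

```python
# conftest.py
import docker
import pytest

LABEL = {'my-system-test': 'true'}


def _client():
    return docker.Client(base_url='unix://var/run/docker.sock', version='auto')


@pytest.fixture
def redis_ip():
    client = _client()
    container = client.create_container(image='redis:3.2', labels=LABEL)
    client.start(container=container['Id'])
    info = client.inspect_container(container['Id'])
    yield info['NetworkSettings']['IPAddress']   # the test receives the IP address
    client.stop(container['Id'])                 # teardown runs after the test
    client.remove_container(container['Id'])


def pytest_runtest_logreport(report):
    # On failure, dump the logs of every container carrying our test label.
    if report.failed:
        client = _client()
        for c in client.containers(filters={'label': 'my-system-test=true'}):
            print(client.logs(c['Id']))
```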
The slower the containers start, the slower the tests will run, the slower people get feedback on their code, the slower your development time will be, and this will lower the overall quality of your software. So, time to wrap it up, sort of. The Commander has had a long and arduous journey but has gained a lot of knowledge along the way. So next they'd like to share with you some of the takeaways they've gotten, for both the dev and ops perspectives of testing. You might be thinking: that was kind of cool, but what do I do with it? So we're hoping that we can give you some specific things that you could try when you're back developing. If you do more development work, it's really cool to know how to write tests, and writing tests is great, but sometimes it's also even more important to know when not to write tests. If you're going to use system tests, use them sparingly. That being said, the next time you have a feature to develop, try some test-driven development. Try starting with a system test. If you don't have any system tests for the software that you're working on, try introducing one for each piece of software that you own. Make sure that it can run with as little setup as possible and that it runs as quickly as possible, and then add it to some kind of continuous integration system. If you already have tests, take a critical look at them. Do you actually need all the tests that you have? Are some of them retesting functionality that the unit and integration tests already cover, and can you remove them? And can you make them any faster? And the same things apply to the ops-minded folk out there as well. First of all, you want to know why not to write system tests. For example, you probably do not need them if you just want to test some one-off scripts, right? Because by their very nature, you do not care whether these one-off scripts keep working into the future. In contrast, if you do have tooling and other scripts that do need to work in the future, then yes, you should definitely have system tests. And you should start by having at least one system test which will exercise enough functionality in your tool to prove to yourself that it works. And as with developers, you should also run these regularly so they keep giving you value. Generally for ops tests, there are two categories. The first one is tests that involve services that you can run. For example, before, we had a fixture that starts up a MySQL container; that's something that we can run locally. And for that, we recommend using something like the Commander described earlier, which is pytest and docker-py. Now for tests that require things that you cannot run — like those in Amazon Web Services, for example — you can still use pytest, but there are some questions you should ask yourself first. For one, is it feasible to have a short test in this external environment? Is it going to cost you a lot of money? And also, is it easy enough to clean up after yourself in this external environment, so you don't run up excess charges or anything? And if you're comfortable with your answers to those questions, then yes, you should definitely run tests for these types of tools, but use them sparingly. So in conclusion: system tests are great. Definitely write system tests. Don't write too many system tests. If you have services that you can run in containers, try checking out pytest fixtures with docker-py and/or Docker Compose. It works really well.
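One way to deal with the readiness gotcha mentioned a moment ago is to poll until the service inside the container actually accepts connections before letting the test proceed, rather than assuming it is up as soon as the container starts. This is a plain-socket sketch; the backoff and retry libraries mentioned in the talk provide decorators for the same idea.

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Block until host:port accepts TCP connections, or raise after timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            time.sleep(interval)
    raise RuntimeError('%s:%s never became ready' % (host, port))


# Typical use inside a fixture, right after starting the container:
#     wait_for_port(container_ip, 6379)
```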
So as the commander's tail comes to a close, they are very content with all the knowledge that they have gained across their journey. And they're looking forward to bringing that knowledge back with them to their own castle. Thanks for listening. Thank you. Time for questions. Hi. Thank you for the talk. Very interesting. I have a lot of questions, but I will try to do just one. When do you test? I mean, are you testing in continuous integration? Do you have dedicated environments? Do you test? Do you do system tests in the developers' machines? All non? Yeah, I guess for us, usually we try to test as much as possible. So like while you're developing the feature that you're working on, you will run the tests. Ideally, you would catch any failures in the unit test stage, you know, because they're faster run, faster than system tests. But definitely we want everyone to be running these tests all the time. And we have them run in like bamboo, for example, all the time just to make sure that they're run. But ideally, your developers would also be running them too. So we have a team that's dedicated to our build infrastructure. So we've been using mostly bamboo and we run all the system tests in bamboo on bamboo agents. And we're slowly migrating over to Jenkins now using agents that run more in the cloud. But we're trying to make sure that these tests will run on developers' machines as well. Do you have more questions for the demonware booth later? Hi. So I'm interested in what's your ratio between unit tests, integration tests and end-to-end tests. That would be the first part of the question and the second part. Why not only use end-to-end tests? Okay. Do you want to do the first part? Okay. So the ratio, I would say, so it depends on, this is more of like an ideal that we're going for with most of our new software. We also have a lot of legacy software that is not, it's basically all sort of unit tests that aren't really unit tests. But what we're aiming for is we'd have like, say like hundreds or even maybe thousands of unit tests to like a handful of system tests, like 10 or like less than 50 system tests to like 1,000 unit tests, you know, something like that. Basically aiming for that 100% coverage and then just testing some of the client-facing end points with the system tests. And for the second part of the question, which I believe was why do you not just run system tests? So as you've probably seen, to spin up your entire software stack, it's very expensive a lot of the time. And it reduces the turnaround time for when someone's actively working on something, right, to test it and make sure it works. So that's why do you want to use system tests sparingly. Although you are right in that, they give you the most benefit because they actually use your software the way it will actually be used. So that's pretty, that's the bottom line really, is how much speed do you want to sacrifice? And usually what we'll do is we'll have more unit tests in order to catch things as early as possible because those are really fast to run before we get to system tests. The other thing is it depends on how many paths there are through your software. So if you have a lot of like branches and then those have branches, then covering all of that with system tests is basically impossible because of the number of cases you'd have to cover. But if you use unit tests at those, for those code modules, you can make sure all of that stuff is covered and it might be completely infeasible with system tests. 
But some software is better suited for system tests. Like we also write some software specifically for like automating, like testing and deployment. And for some of that stuff, we have pretty much only system tests and no unit tests at all. So it really depends. Hey, thanks for your talk. One question regarding data fixtures. So you spin up the containers, but how do you manage getting the data into those percona or whatever you use for data stores to then be able to test the flow? So I think what we usually do is we have one fixture which spins up the database container and then a second fixture which depends on that fixture which will actually insert the data into it. And you can do that pretty easily with PyTest. So pretty much you just use Python to insert the data before you actually run your test. So in our example fixture, we had it just yielding the IP address of the container, but before that you could do some other setup if you wanted to. And then some other case, if that is not fast enough, in some other cases we build, we have like a base image that has the database in it and then we will regularly build images on top of that that have the data that we need for the test in it. And then the test will just start the container that has the data it needs already. Thank you. I got a question. Have you tried this approach with Victor Machines in non-dockerized environments like background, vSphere? Are you asking if we've used it without Docker? Yes. I don't believe we have, although I don't see any reason why it wouldn't work. It just might be more expensive to spin up a whole new virtual machine versus a Docker container. But you could easily plug in something like Vagrant, I guess, in place of Docker Py in our examples. Or like Bottle for ECQ. So I have a system that's actually pretty similar to that and I have this like small technical problem. My fixtures automatically download any images that you need. And I have this problem that you run tests and then nothing happens for like 30 seconds. Can I write a plug-in, turn my fixtures into a Pytos plug-in so I'm able to just pop a message? Because it's Pytos, it's all of your streams. So that's the way. Or did you do something like that actually? I think your team did that right. We've also had that problem. I don't think we have a great answer for you. In a lot of cases we have the logic so that if the image wasn't pulled, we pull it. And then we have on our build agents, we have a previous step that will always pull the latest image. So it's not, and then we assume that when people are running it locally, they've sort of done the pulling themselves if they want the latest image. It's not a great answer, though. I think there's a lot of opportunity for somebody to write a really good library for using Docker with Pytest test fixtures. So if there was something like that, that would be great functionality to provide. But yeah, I would recommend looking at the hooks. I think that's something that you can add. Or maybe just output from the fixture itself, because you can always output anything you want. It's just that sort of muddies up the Pytest output. Thanks. Any more questions? No. We got time. Okay, thanks for the talk. It was very enjoyable. How would you approach a situation when you have a kind of system you described earlier in the presentation when it's a giant monolithic with very poor coverage and it has only few system tests, but with very fragile? How would you solve the situation? 
You say the tests themselves are the fragile part? Sorry? We say it's fragile. Is it the system that's fragile or is the test the... No, the systems are fragile because they're based on some hard-coded information. I guess for that, I guess you would start with system tests because you can use those to easily verify that your thing is still working. I guess that doesn't really address the flakiness. That's sort of an ongoing issue in the whole testing space. But if you start with system tests, you can test based on actual customer requirements. And then as you start to refactor and improve the rest of your code base, you can start writing unit tests and integration tests for those. But start with the system tests so that you know that your software is still overall working. For the monolith that we mentioned at the beginning of the presentation, what we have is we still have all of the legacy tests, which were kind of like a weird mix that would reach straight into the internals of the system and call things. So what we're trying to do for new services is we kind of isolate all of those old tests and then for new things, write unit tests and write integration tests and write system tests. So kind of slowly transition over to that and delete the old tests as we go. But it's not an easy solution. Hi. Yeah, again. So I'm interested in how do you handle tests that have a lot of dependencies and are extremely flaky. So are you just repeating the test if they fail like three times and then say, hey, that really failed? Or do we have some other cool strategy? So that's the particular strategy you mentioned of rerunning the test when it fails has actually caused us a lot of problems. So several years ago, we started doing something like that. And because of that, the problems with the tests have kind of been mounting up over time. So at the moment, we're actually at kind of a crisis point where we have to do something serious about it because we've been ignoring these flaky tests for so long. So I would recommend trying as hard as you can to remove the flakiness, like change them to be as deterministic as possible. Often you can achieve that by going with the unit tests, like try to figure out how you can write unit tests that remove the whatever it is that's flaky. Like it might be a random element or something about the file system or something time related. So use unit tests to control that part and remove that from the equation. And then that sometimes makes the test a bit more dependable. But I would say definitely not the rerunning of the tests. Yeah, though there is actually a pie test plugin for rerunning tests automatically. So you could do that, but try this approach first. Last question. Thank you for the talk. I'm interested about organizational way of testing it. How many test developers do you have compared to the developers that are working on the project and how do they interact? So in general, I think we try to have the developer writing the new feature, actually writing the test as well, because they are the experts on the feature. I think in some cases we have tried to pair programming where someone else would write, let's say, the system test, because that's less dependent on the internals and more about the general feature requirements. But usually we do try and have the same person write most, if not all, the tests. We have about 120 developers in the whole company or engineers in general. Michael is the only one who is explicitly software engineer in test. 
And then I'm on the test tools team and there are four of us all together. So we work on the tooling specifically. And then Michael is trying to help some of the teams that have larger testing concerns. But in general, we're trying to encourage people to all be kind of skilled in writing tests so they can write their own tests and deal with their own problems. Okay, so thanks.
Christie Wilson/Michael Tom-Wing - System Testing with pytest and docker-py System tests are an invaluable tool for verifying correctness of large scale online services. This talk will discuss best practices and tooling (pytest and docker-py) for writing maintainable system tests. Demonware has used System tests to verify online services for some of the biggest AAA video game launches as well as internal operational tools. Many folks who write software are familiar with unit testing, but far fewer with system testing. ----- System testing a microservice architecture is challenging. As we move away from monolithic architectures, system testing becomes more important but also more complicated. In the video game industry, if a game doesn’t work properly immediately after launch, it will heavily impact game success. We have found system testing to be an important tool for pre launch testing of game services and operational tools, to guarantee quality of these services at launch. We want to share with you best practices for system testing: when to write system tests, what to test and what not to, and common pitfalls to avoid. Using python’s pytest tool and docker-py for setting up services and their dependencies has made it easier than ever to write complex but maintainable system tests and we’ll share with you how we’ve made use of them. Developers (senior and junior) and ops folks can walk away from this talk with practical tips they can use to apply system testing to their software.
10.5446/21120 (DOI)
Okay, okay, let's start. Good afternoon, welcome and thank you for coming. I'm pleased to introduce the engineer Daniel Pope, who will talk about Pygame Zero. Hi, I'm Dan. I am a reliability engineer by day, but for many years my hobby has been programming games. I remember my first computer was an Atari ST, and I had ST BASIC and so on. I immediately put that to one side because it came with loads of games. So my interest in programming came from my love of games, and I've continued to program games in my spare time, particularly during two weeks of the year, which are PyWeek. PyWeek is a week-long games programming contest where you are challenged to write a game from scratch in Python on a topic that is given to you at the moment the contest starts. So you have exactly one week to write a game and then you have to upload it. I've taken part in PyWeek about ten times, and I've won it twice. That kind of background was particularly of interest to Nicholas when he set up the PyCon UK education track, and he kept saying: well, Dan, you have to get involved with this, because teachers love this kind of stuff. So about four or five years ago, the first PyCon UK education track rolled around, and we were put into groups where teachers met developers, and we were challenged to come up with some course material that the teachers could teach. And so from my PyWeek and Pygame background, I dashed off the simplest possible Pygame program, about 20 lines in it, and one of the teachers said: nope, that's too difficult, that can't work in a classroom, because the amount of code that he would have to teach for a student to get something productive by the end of the lesson was too much. The best programmers in the class, the people who got it, would race away and be bored, and the people who didn't get it would not have got it by the end of the lesson. That problem sat around in my brain for three or four years. And then in 2014 — I think the October 2014 PyWeek — I sat down to write a game; the theme was "one room". I thought: I'm going to write this in the kind of way that I would write it if I was creating a framework for complete beginners to Pygame. And then last year, I turned that into a library, which is Pygame Zero. So this is a library that takes all of the boilerplate out of Pygame. Pygame is a library for access to graphics and sound and input, and this wraps it with a kind of thin Python layer that gives you training wheels for Pygame, as it were, so you can get up to speed faster. The teachers can teach a couple of lines at a time and make sure the class is caught up. But then it is just Pygame underneath, so you can throw away the training wheels at some point and migrate to Pygame proper if you want. So, I'm going to show Pygame Zero today. I've written a blank file that's called demo.py. The secret of Pygame Zero is that it doesn't run with the standard Python interpreter; you run it with pgzrun. A blank file is a valid program. It creates a blank window. But you can quit it, which is great, because you can't do that with a blank Pygame program. So that proves you've got things installed. Then you can say: def draw, screen.fill. So, two lines of code, and I've got a blue screen. So let's write a little game. I'm going to... Excuse me. Okay. It's... So that's a couple more lines, but that is using Pygame Zero magic.
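The couple of lines just shown, as a complete Pygame Zero program. Run it with pgzrun demo.py; an empty file is already a valid program that opens a blank window, and adding draw() fills it with a colour (the screen object is provided by Pygame Zero, so there is nothing to import).

```python
# demo.py
def draw():
    screen.fill((0, 0, 100))   # a blue background
```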
Sorry — this name refers to an image. So if I do... I've got a directory called images, and that's where my image files are, so I don't need to do any faffing around with path manipulation to load them; they are just available, as strings. You can also access them as objects in order to get the width and the height if you want, but the actor there already has a width and a height: it's a Pygame rectangle. I'm going to have another thing. It's going to be very topical. So I've got another sprite. These were drawn in Inkscape, by the way, and then exported as PNGs. Pygame Zero can load PNGs or JPEGs, so whatever the file format, you should just be able to save something off the Internet, maybe, and use it immediately in the game. And then I write a function called update, and I've got a falling sprite. So, a couple more lines. Two lines at a time: that's what we're aiming for. So if I say: if keyboard.left... I should have chosen a word that's easier to spell. There you go. And then, as I said, these actors are just rects, so I could do: if... So I think that program is easy enough to grasp. I think by moving all of the complexity out of the Pygame program that you would write into Pygame Zero, we've created something that is much simpler to get started on, just at that level where you're transitioning from something like Scratch. Kids will do Scratch up to the age of ten or so — I see some subtle nods — and then in the UK curriculum they have to transition to a textual programming language. But Scratch has the ability to create characters and move them around out of the box, so that stuff is very accessible. For Python programming, I think we're in a situation where the out-of-the-box experience is worse. If your basic programs are "what is your name?", "hello, name", that's a big gap from where you just were in Scratch to where you are in Python. So Pygame Zero fills that niche for getting up to speed, getting something graphical on the screen to keep kids engaged as their programming career continues. This was written in Python 3, and I think that's important as well. The background for Pygame and Python 3 is sort of a bit incomplete. Teachers told me that they wanted just Python 3 stuff; they wanted to be able to teach one language, and so the Python 2/3 split was a big problem. When this was written — and I think this is still the case — there was no official release of Pygame for Python 3, so part of this was actually finding ways to install Pygame on Python 3, and that is all in the documentation. So there are ways of doing it; it works. But Pygame is catching up, actually: there are now binary wheels on PyPI, but I think off the pre-release tag, so you have to say pip install --pre or something, but you can install Pygame. And what else? Okay, so I was going to show you some of the other things that Pygame Zero can do. draw and update are your basic bread and butter for creating games with animation. draw will be called whenever Pygame Zero wants to refresh the screen; update is called 60 times a second anyway. And if you don't define update, you can create games that are click driven. I'll show you the click API. So all I needed to do was create a function called on_mouse_down.
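A reconstruction of the little demo built up above as one complete script: an actor that the player moves with the arrow keys to catch a falling sprite, plus the mouse callback that is about to be discussed. This is an illustrative sketch, not Daniel's actual demo code; the image names ('player' and 'star') are assumptions and just need to match PNG or JPEG files in an images directory next to the script. Run it with pgzrun.

```python
# demo.py -- hedged reconstruction of the live demo
import random

WIDTH = 800
HEIGHT = 600

player = Actor('player', pos=(400, 550))   # Actor, keyboard and screen are
star = Actor('star', pos=(400, 0))         # all provided by Pygame Zero


def draw():
    screen.fill((0, 0, 100))
    player.draw()
    star.draw()


def update():                              # called about 60 times a second
    star.y += 4
    if keyboard.left:
        player.x -= 5
    if keyboard.right:
        player.x += 5
    if player.colliderect(star):           # actors behave like pygame rects
        star.pos = (random.randint(0, WIDTH), 0)


def on_mouse_down(pos, button):
    # Pygame Zero adapts to whatever arguments the callback asks for.
    print('clicked at', pos, 'with', button)
```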
And Pygame Zero will call that function. If I wanted to know what button was clicked, I could add a button argument, and I can do anything with that button. So you see, Pygame Zero is adapting to the callback that I define. And I was demoing this — I was sharing it around the internet with some of the teachers who have been involved in the PyCon UK education track — and Dave had said: it doesn't work, it's just not working. That's what he had written. And I was dismayed that the very first experience he had with this tool was something that just didn't work and didn't give any feedback on why. So if you misspell a function, it's got a spell checker and will tell you that you might have misspelled things. I think that's the philosophy of Pygame Zero: we've done a lot of work, actually, in catching errors and re-raising them with better messages, because that kind of feedback — if something breaks and it doesn't give you any information as to why — is an obstacle to continuing your learning. So every time we could take something that the underlying Pygame was doing and make it more explicit, we've done that. I think I will stop there and invite questions, because the questions have been very good the previous times I've given this talk, and I'd rather... I want to know what you want out of this tool. So, any questions? Yeah. So I just wanted to ask if there is an included way for easy publishing of your game, especially for kids — yeah, because they won't go into different installation stuff. No. But Pygame Zero was created with an understanding of the kind of portability problems that show up when you distribute games. Having done PyWeek a lot of times: every time you use OpenGL, it will work perfectly on all of the machines you develop on, and then somebody will run it and there's a driver problem. So Pygame is ideal for the distribution of games because it just works. If it's there, it just works. It's CPU rendering, which is slow, but it's incredibly reliable. Also, Pygame Zero will catch problems with file names, for example. So if I rename... then it will give me an error that my game could possibly not be exchanged with somebody whose file system handles case differently. So that kind of problem is, to the best of my ability, dealt with by Pygame Zero. The actual packaging of games and distributing them I think is a future problem, but something I think we would like to solve. There is another project — I've got internet, I can show you — I've been working on something called the edgy bundle, which is an attempt to provide a redistributable bundle of Python for education that has Pygame Zero, Pygame, PyQt and Nicholas's new editor available. So I think by pursuing all of these avenues, we can make games easier, and Python easier, for kids to use at school and at home. What about networking? Because in games, it's also interesting if your colleague from school can play your latest game. Yes — so Nicholas is putting his hand up; there's Network Zero for that, and we're going to hear more about it. So, inspired by Dan's awesome work with Pygame Zero — as I mentioned in my keynote — other people have been doing these something-zero libraries.
Following the same philosophy that Dan has, Tim Golden, who's a Python core developer based in London, has created Network Zero. He tried it out at the London Python Code Dojo, and we had a lot of fun breaking it. Tim has also been trying it out with teachers as well, and getting their feedback. Just to echo what Dan was saying, getting teachers involved in this is essential because they're the experts in dealing with children. As developers, we can think about what kids might want to do, but it's teachers who actually deal with them every day. But there's Network Zero, and I can't see why Network Zero couldn't work with Pygame Zero. Ben Nuttall of the Raspberry Pi Foundation created GPIO Zero, and also the hashtag "zero all the things". So what you need to do is create a Pygame Zero project that uses GPIO Zero on a Raspberry Pi Zero, with Network Zero as well. Then you've zeroed all the things, and you can legitimately use that hashtag. Hey, so we talked yesterday evening, and I just want to repeat this suggestion: you can actually bundle this thing with py2exe to have a single executable file that doesn't need installation. I think that would be a good thing to have, but I have not written that yet. Yeah, of course, but I'm just saying that maybe that's a good thing to look into. And also you could run the zip files directly, right? Yes, like zipapp, yeah. Thank you. Sorry, that wasn't really a question. Yeah, so I guess as you suggest features, I should mention that this is on Bitbucket, so pull requests are accepted, and this is a community project; it sort of relies on the feedback. I'm not a teacher. I am going on feedback from teachers, but I need the feedback of people who have tried teaching kids with this to improve it. And any time that you see an error message that is opaque, or something doesn't work and doesn't give a traceback or any indication as to why it's not working, that could be considered a bug. So please report it, if nothing else. Okay, another question. So since arriving in Bilbao — where am I looking? Sorry. Hello. Ah, right, right at the back, okay, right underneath the light. Since arriving in Bilbao, I've discovered that my Spanish is exactly zero, and a lot of Spanish people's English is also zero. What options are there for internationalising, so that an 11-year-old Spanish kid, for example, doesn't have to learn English to do on_mouse_down, et cetera? Yeah, I think it's probably difficult to conceive of a way that you could internationalise this without creating incompatibility problems. I think also that English is the language of programming, and the Python libraries and the keywords are English. On the other hand, I think it's very reasonable that the documentation should be translated. So if anybody would like to contribute a translation of the documentation, or contribute any kind of tutorial or blog post, that would be appreciated. Okay, the last question. Thanks very much. This looks like it would be quite a nice way of building not just games, but generic interfaces for interaction with all kinds of things. How suited is it to play the role of a kind of generic graphical interface builder? You can write full graphical interfaces in Pygame, and people have done it, but I think at that point, if you want to attack that kind of problem, you're probably best using Pygame itself. There are libraries that provide GUI widgets for embedding in a Pygame game that mimic the platform widgets, but I think it's not...
That kind of programming becomes more complicated than Pygame Zero is targeting. So I think if you want to do that... I certainly could see that if Pygame Zero were to include some GUI tools to share games, for example — to bundle them up, to enter some details and an icon or something — then that could be done with a GUI written in Pygame, but it may not use Pygame Zero to do it. Okay. Thank you, Daniel, for your interesting talk. Thank you very much. Thank you.
Daniel Pope - Pygame Zero Pygame Zero is a new game engine for education, built on top of Pygame. It makes writing your first games extremely simple, while saving beginners from certain potential pitfalls. Daniel will introduce Pygame Zero, walk through creating a simple game, and discuss the background for Python in education and the design philosophy behind Pygame Zero. ----- Pygame Zero is a new game engine for education, built on top of Pygame. It makes writing your first games extremely simple, while saving beginners from certain potential pitfalls. This talk will introduce Pygame Zero, walk through creating a simple game, and discuss the background for Python in education and the design philosophy behind Pygame Zero. Pygame is a powerful set of libraries for graphics, sound, input and more. But it is just a library: each program needs to import and set up the libraries, implement a game loop and load resources among numerous other concerns. While seasoned Pythonistas have no trouble with this, teachers told us that they found it difficult to teach with Pygame. There is simply too much boilerplate involved, and getting students to reproduce the boilerplate perfectly before useful lessons can begin takes too much time out of a 40-minute lesson. Pygame Zero is simple enough that a lesson can be broken down into bitesize steps where meaningful progress can be made with just a couple of lines of code at a time.
10.5446/21121 (DOI)
Hi, everyone. Thanks for coming to hear me again, if that's "again" for you. I'll quickly tell you about me. I'm Daniele Procida. I work for Divio; I'm a community manager at Divio. I'm a django CMS developer, a core developer of the Django project, and a board member of the Django Software Foundation. There's my email address, and you can find me as evilDMP on IRC and GitHub and so on. So, some people take quite seriously this idea that you should write your documentation first and then the code should follow. It's something that's not discussed or practised nearly as much as test-driven development. In fact, I don't actually know anyone who really does documentation-driven development, or even talks about it. I don't want to spend too much time on this, but documentation-driven development is the idea that reverses the typical priority of code and documentation. You start with the documentation instead of the code, and instead of documenting the code, you code the documentation. It's rather like test-driven development in that it puts what should be the case before what is actually the case. It helps establish a shared and easily accessible, higher-level overview of the work at hand. It provides a shared and easily accessible metric of success for the work, which is important. It encourages the contribution and engagement of non-programmers, and it binds the programming effort into a coherent narrative. The honest truth is that I don't know very much more than this about documentation-driven development. I'm sure it's a valuable development practice that more people should adopt. But in fact, what I want to talk about is not this exactly, but some other senses in which documentation drives development. I want to have a look at Django and the Django project, and consider what documentation has meant for Django's development. The first thing we should say — not because this is a new observation, but simply because everybody seems to agree on it — is that Django's documentation is exemplary, and I've not come across any other similar project, possibly any other project at all, with better documentation than Django. Perhaps my experience is just a bit limited, but nobody else seems to have come up with a project with better documentation than Django has. What is so good about Django's documentation is that it is structured properly: it is divided very clearly into tutorials, how-tos, reference material and topics, and I'll discuss the implications of that a bit later, but the structure matters a lot. Within the structure, it is very clear and consistent. It's comprehensive; it covers just about everything. All of it is held to very high standards, so you need to work as hard on documentation as on code if you want to get it accepted into Django itself. It exemplifies important values such as clarity, courtesy and friendliness. And finally, documentation in Django is a process rather than a product, and again, that's something else I want to talk about particularly later. All these are the things that actually make the documentation good as documentation, and the difference it makes — the effects it has — are important for the project. It makes Django easier for people to learn and adopt. It makes those people who do adopt Django into better programmers. It lowers the support burden on people who are supporting new programmers, or programmers who are trying new things or need help. And it also makes the development of Django itself easier and faster.
In other words, Django's documentation is very good for Django. Without any doubt, it has been good for, it has been part of what has improved Django over the years. Now we come to the main point of this talk, and that is that software is not the only thing that develops and grows and improves. Programmers and communities of programmers also develop and grow and improve, and it's programmers and communities of programmers that make the software grow and develop. What does documentation mean for the development of communities and programmers? Why does documentation matter to them? Let's think about the Django community. Django's community, like the software, is stable and mature and dependable. You know where you are with Django's community. The community is active, it's engaged, and it's remarkably united, given that it's so large and is used by so many different people in so many different ways and places. Like all communities, it has its difficulties sometimes, but there are very rarely in Django crises that afflict some other communities, and we don't see in Django these lingering ills that seem to blight some other communities. I think that one of the glues that has bound this community together is in fact Django's documentation. I think that when it comes to the development of the community, Django's documentation does four very important things. I said earlier that it represents the attitudes of the community, but it's stronger than that. Django's documentation, or the care that Django's documentation takes, is an implicit contract that it makes with its community. It's a commitment to standards of communication and information, and it treats documentation as an activity and not just as content. This last is, I think, the deepest of all these points, the most important, so I'm going to start with that one. Everybody learned programming who's a programmer. Have you ever had this experience of hesitating to ask for help with a programming question, perhaps on IRC or somewhere online? Because you felt that the answer must be out there already if only you knew how to find it. Sorry, your hand's up because you had that experience. I thought it was a signal that I was misinterpreting. That would be interesting. Who has had this experience of hesitating to ask for help online because you felt the answer was probably already out there, if only you knew how to find it or search for it or ask for it, and you were a bit anxious that you might be invited by somebody who was a bit irritated to read the fucking manual, by somebody who thought you were being lazy or stupid. I think most people have had that experience. You might be right to hesitate because people who already know things can be remarkably forgetful about how they learnt them or about what it was like not to know them. People are not always sympathetic and friendly to people who don't yet know the things that they do. To the extent that programmers are young men full of confidence, they're not always the most empathetic group of people in the world and the ones who are most easily able to understand other people's position, even if it was a position that they themselves held. Perhaps you've asked a question, and has anyone had this? You've asked a question and someone's replied with a link to this sarcastic website. You put your search into it and it does a sarcastic simulation of somebody searching for something on Google. I really hate, I did test that website. 
I really hate everything it stands for, because what it stands for is putting people down when they have a question to ask. Thank you. It stands for putting people down who are asking for help in understanding something. It means telling them that the reason they don't understand the thing that they want to know is that they're too stupid or lazy to learn it. Insofar as information and documentation are just content, it's possible to think and respond like that. The content is out there — especially in Django, where, if you're asking a question about Django on IRC or something, probably the answer is in the documentation. The temptation might be quite high for someone to say, let me Google that for you, because it's there. We know it's not quite so easy, because we don't always know what we're looking for. If you think of information and documentation just as content, you can say: well, the content's there. It's freely available to anybody with an internet connection. Go ahead, help yourself. If you can't find it, if you don't find it, that's your problem. Other people managed. If we think about information and documentation differently — if we think of them as processes or activities — then that kind of sarcastic "let me Google that for you" response is much less possible. We do see that response much less in Django than I've seen in other programming communities. Our IRC channels and email lists are very friendly places, and the experts of the community who are there regard information and documentation as activities that they are engaging in, not as stuff that people should have read before wasting other people's time. So they consider that information is something that they do. Documentation is something that they do. Not some stuff that exists out there. So information, in this model, is regarded as a communicative transaction between agents. Information on this model demands that we respect values of clarity — is what I'm saying clear? Intelligibility — am I saying it in a way that the other person can make sense of? Relevance — is what I'm saying actually relevant to the problem that the person talking to me has? Comprehension — can the other person understand what I'm saying in answer to their question? Attention to the needs and abilities of the other party — do I speak in the same way to a complete beginner, who doesn't even know how to ask the question in the ideal way, as I would to somebody who's very experienced? Affirmation of mutual understanding — have I checked that you have understood my answer to your question? So to the extent that we regard information as a communicative transaction between me and you, between me and the person I'm trying to help, I cannot pretend that telling them to read the fucking manual is informing them. That's not informing them, because it's missing out all of that. And sarcastically googling for them misses all of those points. So good documentation shows respect. And there's a default position that if someone doesn't understand the documentation, then the problem lies as much in the documentation — or rather, in the documentation rather than with the person who struggled to understand it. It becomes an expression of those values. So Django's documentation sets standards and expectations and the tone for communication, especially communication with the less expert users of Django. Its documentation is an assertion of values that are subsequently asserted across, and reflected in, the whole Django community.
What I think this means is that within Django in this community, when I say Django, actually I include a lot of Python too, because it's most obvious in Django, but you do see it in Python in ways that you don't see in other programming communities. We think of information as the activity of informing, something we do rather than as a collection of content. And this idea of information has had real, meaningful, beneficial consequences for people who use Python and Django. So I said earlier that Django's documentation has been good for Django, but I think it's much more than merely good. Django's documentation informs its community, and to inform something means, literally, to shape it, to inform, to press a shape into something. So Django's documentation has literally informed, shaped the Django community. It has determined how the community has developed, what sort of thing it has developed into, and it's one of the things that continues to shape and drive the development of Django. And that's the kind of development I'm speaking of when I'm speaking of documentation-driven development, because documentation is one of the things that determines the way Django is growing and developing. I want to take a little digression here. So the attitude towards documentation in Django has had a tangible difference that you can experience yourself, not just when you're thinking about it for writing talks and analysing it, but in an ordinary, everyday way. And it's brought to me quite forcefully sometimes, especially when I speak to people outside our cosy world of Django. I have an interesting experience. If I tell people that I'm a member of the Django core development team, then if they know what Django is, they're impressed, and they're like, oh, you're a member of the Django core development team, that's really cool. And then they say, well, what do you work on? And I say documentation, mainly. Or I might tell them part of my job title is Documentation Manager. And other programmers from outside the Python Django community sometimes find it a little bit hard to hide their sudden disappointment, or even their embarrassment. Embarrassment for me. Documentation. It's like a moment ago they thought they were being introduced to Superman, and it turns out it's only Clark Kent after all. And sometimes you see a flick of an expression on their faces, and you think, is this a joke in bad taste that this person is admitting to this? And I swear that once or twice I've had the impression that somebody was about to say, isn't that woman's job? So I'm really serious about this. It's something that I encounter in some funny ways. And it's honestly, as though I were admitting to having an unmanly personal habit, like crying or whining a lot, or writing a function when I should have written a class, or using the wrong kind of workflow on Git. And I mean, it's true that I do all of those things, especially the crying and whining. But to have people feel embarrassed for me and start to look desperately around the room for somebody more interesting to talk to, because I admitted to them that what I mainly do is write documentation, tells you an awful lot about where it fits into some people's picture of the world. When I tell somebody from the Python or Django communities that my main role is in contributing to documentation, their reaction typically is, oh, cool. So there's a real cultural difference between these communities. 
I even saw — I was surprised — I saw it in some of the literature from, I think, some of the sponsors of this conference, that they were looking for ninjas and rock stars. And I'm really surprised to find that here. So what sort of attitudes would you expect people in programming communities to have towards documentation when so many of them think that they should aspire to be ninjas and rock stars? Where did this come from? Why do companies think they might benefit in some way from employing rock stars or ninjas? Ninjas are mainly famous for setting fire to buildings in the dead of night, and that's a very interesting metaphor on which to base your recruitment strategy. Or the notion of employing rock stars — the mind boggles, frankly. What's the defining characteristic of a rock star? Excessive behaviours, unreasonable demands, an expectation that maybe underage girls with self-esteem issues might be willing, or should be willing, to offer them sexual favours. The whole thing seems unbelievably immature. And it's astounding when you see it coming out of not just a startup being run by young men who themselves have barely stopped being teenage boys, but corporations. You think maybe airline pilots would be a more appropriate metaphor, or surgeons, or chefs, or anything at all in which being disciplined, thoughtful and highly skilled, or being able to collaborate with other people well to achieve good outcomes, would be a better metaphor for excellent programmers — but no. We have to be ninjas and rock stars. On one side, arsonists; on the other, actual arseholes. So it's childish and immature. It does affect the community. It does affect the way people think. It does affect the way those parts of the industry where such attitudes are allowed to dominate actually work, and it makes them worse, and it makes those companies and projects worse places to be. So I'm going to come out of my digression now, back to black. So genuinely, I think that Django's documentation has been part of what has helped Django avoid this kind of blight. It has informed — it has shaped — the Django community into being a better place. It has developed it in better ways. And documentation has implications for programmers as individuals, as practitioners, in similar ways. Developers develop, which is to say that their programming skills develop, get better. And the question for many programmers is: how do I develop? How do I become a better programmer? It's a true but quite uninteresting observation that good Django documentation helps Django programmers write better code. But again, I think the implications are wider than that. So, not just in Django, but everywhere, documentation is an excellent way for newcomers to something to start contributing to it — to open source software especially. You don't need to be an expert in something to be able to identify something unclear or lacking in its documentation and to suggest a way to improve it. Writing documentation represents a really good and easy first step from being a user of something to being an active contributor to it. Especially in open source projects, documentation is almost always welcome. In fact, in most projects, it's not so much welcome as desperately needed, and you'll find that it's far easier to get documentation accepted into an open source project than it is to get code accepted into it, simply because the documentation is more badly needed than a new feature is.
Of course, explaining something to someone else is just about the best possible way to explain it to oneself to learn and understand it. So, if part of a programmer's development is to contribute and understand more, documentation is an ideal way to do it. It will raise the contributors' understanding of the whole of it. The Django project really does get all of this. It really does understand all of this. I talked earlier about structure. Django's documentation structure does an excellent job of guiding and encouraging new contributions and new contributors. In Django, the clarity of the documentation structure makes it almost obvious how and what to write for a particular section, just as well-written code does. This works very well for maybe huge new or large new contributions, maybe an entire new section of the tutorial, or for tiny ones such as an aspect of some key function that deserves more explanation. So, it invites the person in the new contributor and shows them where to go and how to do what they want to do. Contributions to Django's documentation are taken very seriously and they're held to the higher standards. Contributions to documentation and the contributors who make them receive as much support and encouragement and engagement as those who are contributing code. In many other projects, documentation will be accepted just because it's documentation, and the quality of it is not always an issue. In Django, documentation goes through the same ringer that code goes through. You'll find if you're submitting documentation to Django that there's somebody saying, well, no, you need to change this, who will send it back to you and be prepared to sit there and go through a long review process of many cycles to help you get the documentation just right for Django. So, you, as a documentation contributor, will be taken as seriously as if you were contributing some key code. It won't be accepted just because it is documentation. The other thing is that these contributions are valued and recognized by the Django project and the Django community. So, everybody who's on the Django core team has made substantial contributions to Django. My contributions have pretty much all been documentation contributions, not code. So, there are these three things that Django gets very right. The documentation guides its contributors, it's taken seriously, contributors are valued highly, and they're important because of this fact that most people say they recognize, and then in the end few people act as though they really believe it, that documenting code is the best possible way to understand it. Documenting code will make you a programmer who understands more things and understands them more deeply, and Django encourages developers to write documentation. All of this means that Django not only gets more and better contributions to its documentation than other projects do, it's also very successful in advancing the skills of those who contribute to it. If you want to learn how to contribute to open source software in Python, there is probably no single better place to start than by looking at the Django documentation and thinking, could you make a contribution there? Because you will get a first class introduction in how to contribute to major project. Your work will be checked by somebody who is extremely thorough about it. You will be guided and engaged with it at every step of the way. 
So, if you want to start being an active contributor to open source software in Python, probably there isn't anywhere better to do it. So this is what I mean when I say that Django does documentation-driven development. And that through its documentation it develops, it advances, it improves both its community and its developers. So that's the lesson from the Django project. What can you do? What can your project do to reap some of the same rewards from its documentation? Well, I don't think it's something that can be accomplished overnight. Much of this is to do with attitudes, and attitudes are very hard things to change. At the same time, the actual steps you can take are easy, and if you keep taking them and taking them in the right direction, eventually, not at first but in time, attitudes will follow those steps. So some practical things, just very briefly, I mean, I'd be happy to talk about this more. One is to structure documentation correctly. So if you have a look at the first page of the Django documentation, it explains this. It doesn't get those four categories mixed up, tutorials, how-to's, reference or topics. And most documentation does. A tutorial takes somebody by the hand from not necessarily zero, but from a known low starting point to a position of success. So here's one task. It might be just to set up a Django project. We'll get you to the end of it. If you follow these steps, you're pretty much guaranteed success at the end. That's completely different from a how-to, which is like a recipe in a cookbook, which requires that you understand at least the basics of how to use the kitchen and the tools. Reference material shouldn't need to tell anybody how to do anything. It just describes what is there. And topics are discursive material that they describe stuff. They don't tell you how to do something. They don't list the bits and pieces of an API. They don't lead you step-by-step through anything, but they will give you a higher level overview and understanding of it. I'll happily talk about that some more if anyone's interested. Make your documentation policies as rigorous as your code policies. Don't be afraid to bat documentation back at its contributors. Just because somebody has submitted documentation, don't accept it. Take it seriously as you take code and ask for it to be improved if it needs to be improved. You wouldn't accept some substandard code, so don't do the same for documentation. People appreciate being taken seriously. Substandard code can harm your applications, but substandard documentation can harm your community, which is worse. Document your documentation. Make sure it's clear what its policies are, what you want from each section of it, what you expect of the contributors to it. Value your documentation contributors, whether they're internal or external. Recognise them publicly. Make them a core developer of your project if they've contributed enough. Value the activities of documentation and information and set aside time for them. If you or your project or your company are really serious about this, there are some commitments that you can make. You can make being a documentation manager part of someone's role. You could pay someone to have that responsibility. You can spend money and time on documentation. All of this doing that will certainly help achieve the things on this page. Right here in the Python... Who uses Read the Docs, by the way? I should say, does anybody here not use Read the Docs? There you are, I want. 
Right here in the Python community, we have one of the most important and valuable resources anywhere in the whole of the open source world of programming: Read the Docs. It's free. It works brilliantly. It runs on Python. It's cruelly, scandalously underfunded. It's kept alive by Eric Holscher in the middle of the night, answering calls from his pager for almost no money. There have been times in the past where he was losing money to keep Read the Docs afloat. The company I work for, Divio, is one of the sponsors of the Read the Docs website. If you care about this kind of thing, then a small amount of money from your company would go an extremely long way. Alongside Read the Docs, part of the same family of activities, there's the Write the Docs conference in Europe and the US. It's an absolutely brilliant, really interesting conference. It's really interesting to notice that at the Write the Docs conference, about half the speakers were technical writers. Most of the developers who were at the conference were from the Python and Django communities — again, reflecting the attitude, the esteem in which documentation is held in our communities. They also have meetups; have a look at writethedocs.org. If I would like you to take away one thing from the last 35 minutes or so, it's this: that information and documentation are activities that we engage in, and they are not stuff that we produce or consume. Thank you. APPLAUSE OK, we have time for some questions. Thank you for the talk, very inspiring. I have a more pragmatic question. I don't use Django that much — actually, I almost don't use it. But you probably have a lot of documentation, right? So how do you keep the documentation in sync with the code? For example, I've heard people saying that docstrings are doomed to fail because they will always be out of sync. So how do you do it? Partly by making documentation as important as the code. So if you changed a part of the code and it affected another part of the code, you wouldn't change only one part and then leave the other part — you know, something that used that API — you wouldn't leave that in a broken state, would you? So in the same way, why would you want to leave the documentation in a broken state? And the developers of Django will just send it back and say, you know, you've changed the code, but what about the documentation? Just like you wouldn't, if it made tests fail, you would... But is it always obvious, if a developer changes a bit of code, that he has to change the documentation? No, nothing is obvious. But if you have a whole community who spend a lot of time thinking about the documentation, it might not be obvious, but at least there are a lot of people who are taking an interest in it, and it's much harder for it to slip through the net. So, you know, I would never pretend that any of these things are easy. But when you've got a lot of people who care about something, it certainly helps. Of course. One question over there. It's not a question, it's just an answer following that, Daniele. It's actually a rule in the Django project to get the tests passing and to fix the documentation with the patch, otherwise it will be refused. Thank you. Another question. Kind of tying into the previous question as well: I think doctests are a good tool to keep the docs in sync with the code. Can you explain what you mean by doctests, please?
There's this thing in Python — doctest, also supported in tools like pytest — where you can write examples in a docstring and it tests those examples. And I'm wondering if you know how well it works out in practice, because... I've never used that, and I think it's really important for documentation to be curated by humans. So, when you go to... This is the kind of... Reference is what programmers like writing. They love writing reference material, okay? Because they're describing: oh, here's the machine I made, here are all its parts. And you can put all of the descriptions of that machinery into the machinery and have it automatically generated. But it's only useful for people who already have a general understanding of what the thing is. It's not going to be any use for anybody who needs a description, or a kind of topic description — who needs to know how to use it, a kind of general recipe. And it certainly won't help a beginner who's looking for a tutorial. So, I guess they may have some place somewhere, but for the vast... Paul Ruland over there is shaking his head. He says they have no function anyway. I don't know about that, but I know that for most of the documentation, you're writing for humans, so a human being needs to write it. Thank you. Okay. I have a question over there. Have you heard of Jupyter? Sorry, that's just a joke. The question is: with developers who maybe don't know English so well — say their level is maybe B2 or something like this — how do you... Do you have anything which can help them? Because I've found a lot of people would rather just not write documentation in those cases. They might write really good code, but... Yes. Firstly, even if people don't speak very good English, they probably learnt it before they learnt Python, even if they're not a native speaker. But secondly, they can be some of the most useful contributors to documentation. Because if you're writing for somebody whose first language is not English, then you'd better write your English in ways that can be understood really well. And those people will be the best people to tell you that your documentation is not clear. Your documentation has to be clear enough for those people, and if they can contribute to it, they may not be the greatest stylists, but they will have some of the simplest language and give you the best way to write it, and give you directions for the way the language needs to go in order to be comprehensible. So understanding the other people who are reading your documentation is part of the process too. It's not just a question of, you know, here's the documentation, understand it if you can. There's this relationship between the two, so they're an important part of that. Thanks very much. It's a good talk. I've got one statement firstly. I really like the Django Girls documentation, because it kind of starts at ground zero — there are no assumptions about knowledge that the users might not have. Also, I've really appreciated that the Plone documentation has improved massively over the last few years. But what I've noticed, having worked for the last ten years with Python, is that when you deal with people outside of the Python community, you're often made to feel a lesser developer when you say you do documentation, because you're not spending your time, say, programming. I was wondering, do you have any advice — because I kind of feel like it has to be a top-down thing in these organisations — but do you have like one bit of advice for how you can kind of combat that?
If you're combat, maybe ninjas should be sent into combat. Attitudes are the hardest thing to change. Python and Django are successful partly because of their documentation. It's one of the reasons that so many people are coming in. If we can keep people coming in, then we will be changing the culture just by having people part of our community and inculcating them in the ways of our community. Other than that, if somebody gives you a look in horror because you write documentation at a party or a meet-up, you just have to live with it. What can you do? Doesn't Django have documentation in languages other than English as well? Yes, it does. I think the documentation is translated into French completely. I think somebody is translating it into Spanish at the moment. Translation is a hard job. I was wondering about that because of the previous comments about languages and stuff. The person who asked me about non-native English speakers, another option is to translate all the documentation into another language. That really is a labour of love. Nobody got rich or famous by translating open source documentation into another language. Do we have any more questions? Thank you for your great talk. You really speak to my heart regarding documentation. I think it's very important and should be treated as a first-class citizen among ourselves. That's more feedback than a question. Thanks for your talk. Thank you very much. Thank you very much for your time. If you'd like to talk to me about any of this, come and find me. I'll be very happy to talk about documentation and give practical suggestions about it. One last thing, if I may briefly, I'm not a Django wagtail or Django developer, but if you are, I've got some exciting news for you. Come and find me afterwards and talk to me. Thanks once again for listening to me.
Daniele Procida - Documentation-driven development One secret of Django's success is the quality of its documentation. As well as being key to the quality of the code itself, it has helped drive the development of Django as a community project, and even the professional development of programmers who adopt Django. I'll discuss how Django has achieved it, and how any project can easily win the same benefits. ----- Part of my job title is _Documentation Manager_. When I explain this to a programmer outside the Python/Django community, the reaction can be anything from bewilderment to a kind of mild horror. When I mention it to a Python/Django programmer, the response is usually: _Oh, cool_. In fact, one secret of Django's success is the quality of its documentation, and everyone who uses Django is quick to note this. The returns on Django's investment have been substantial, but some of them are also surprising. The documentation has clearly been key to the _quality of the code itself_, but also (less obviously) to the _development of Django as a community project_, and even the _professional development of programmers_ who adopt Django. I'll discuss how Django has achieved it, and how any project can easily win the same benefits.
10.5446/21123 (DOI)
Hi, good morning. Thank you for joining us in this first session in the PyCharm room. Our first speaker is David Arcos, and his talk is titled Efficient Django. Thanks for coming. In this talk I will speak about efficient Django: I will give some tips and best practices for avoiding scalability issues and performance bottlenecks. The things that we will see are the theory, the basic concepts; then measuring, how to find bottlenecks; and finally some tips and tricks. The conclusion, of course, is that Django can scale. So, hi, that's me. I'm David Arcos. I've been a Python developer since 2008. I'm a co-organizer of the Python Barcelona meetup, and I'm the CTO of Lead Ratings. Lead Ratings is a startup in Barcelona that does machine learning as a service: we provide a prediction API, so our customers can rate their sales leads and then improve their sales conversions. It looks difficult, but it's quite straightforward. Okay, let's start with the basic concepts. Have you heard of the Pareto principle, the 80/20 rule? It says that, for many events, most of the effects come from a few of the causes, and this happens in many different fields. In scalability this happens too. We can focus on optimizing 80% of the tasks and achieve very few results, or just focus on a few vital tasks, the 20%, and we will achieve most of the results. The difficult thing here, of course, is to identify these few tasks. So if we want to improve the performance and the scalability of our platform, we need to identify the bottlenecks. Basic concepts on scalability: usually, scalability is defined as the potential to grow a system just by adding more hardware, without changing the architecture. It's recommended that you don't store state in the application servers, but in the database. If you keep stateless app servers, you can do load balancing and then you can scale them horizontally, which means just adding more hardware, and if the state is not shared, it's very easy to grow. But then we move the problem to the other side, the database. If the state is in a single point, the database, this will be difficult to scale — and it depends on the database: scaling Mongo, Postgres or Redis is not the same, each of them has different characteristics. To improve the database performance, this is quite obvious: on one hand you have to do fewer requests, and on the other you have to do faster, more efficient requests. We will see how later. Doing fewer requests means doing fewer reads and fewer writes, and you can achieve this with caches. And for doing faster requests, you can do many things here.
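As a concrete illustration of the "fewer requests" point — not something shown in the talk, just a minimal sketch using Django's low-level cache API with a hypothetical Product model — the idea is to serve repeated reads from the cache instead of hitting the database every time:

```python
from django.core.cache import cache
from myapp.models import Product  # hypothetical model


def expensive_product_list():
    # Try the cache first; only hit the database on a miss.
    products = cache.get("product_list")
    if products is None:
        products = list(Product.objects.all())  # the expensive query
        cache.set("product_list", products, timeout=300)  # cache for 5 minutes
    return products
```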
We will see how to index fields, and you can denormalize your models. Denormalizing means that you have some precalculated data inside the model, so you don't have to do expensive operations all the time. About the Django templates: the standard templating engine is good enough; Jinja2 is a bit better, but anyway you have to cache all the templates. Django has fragment caching, which means that you can cache just little blocks of the templates. You don't need to cache everything at the same time, and you can go layer by layer, template by template, and do different caching at different spots. Of course, this depends on your system: if you are doing an API you don't have templates, but if you are doing a normal web application you will have a lot of code that can benefit from this. The cache is one of the most important things; of course, you can cache almost everything. The most standard approach is to go layer by layer through your stack and try caching things, from the top: if you are using Varnish, if you are using a CDN platform, the access to the database, the templates, sessions, everything. Django has very good cache documentation and it's very powerful. And the problem here is cache invalidation: how do you invalidate the cache? Once a model is updated you have to remove it, and you can do it in many different ways; we will see how later. So, cache everything. Bottlenecks — now we are moving to the interesting parts. You have to identify the bottleneck in your system. The bottleneck is the place that makes your system slow; if you remove a bottleneck, your system will go faster. Then you will have another bottleneck. You have to identify that other bottleneck, solve it, and rinse and repeat. It depends a lot: different systems will have different bottlenecks. If your bottleneck is the CPU, the memory or the database, you can do different things; the point is that first you have to fix the current bottleneck and then move forward to the next one. So how do we find the bottlenecks? Okay, second part: measuring. You can monitor your application, see data, numbers, and this can help you to find the bottleneck. As they say, you can't improve what you don't measure. So you measure your system to find the bottlenecks, you fix those bottlenecks, and then you verify — because you are measuring, you verify that the bottleneck has been fixed — and you keep doing this until it's efficient and performant and scalable. Easy to say. So, from top to bottom, monitoring: you can monitor the system load, CPU, memory, to check the basic stats. The database, of course, is very important: writes per second, response times, the size of the database. The same for the cache. The queue, when you have a system of workers: it's important to see how many tasks you have queued, and if it's growing too fast, then the bottleneck could be there. And also custom metrics for your application.
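To make the "cache everything, layer by layer" and invalidation points above concrete, here is a minimal sketch (not from the talk; the view and model names are made up) of per-view caching plus invalidating a cache key when a model is saved:

```python
from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.views.decorators.cache import cache_page

from myapp.models import Product  # hypothetical model


@cache_page(60 * 15)  # cache the whole response of this view for 15 minutes
def product_list_view(request):
    ...


@receiver(post_save, sender=Product)
def invalidate_product_cache(sender, instance, **kwargs):
    # One way to handle invalidation: drop the relevant key whenever
    # a Product is created or updated.
    cache.delete("product_list")
```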
You can do profiling with the Python cProfile module, which is the standard module for profiling. Profiling allows you to run the Python code, and it will return some numbers like these: the number of calls, the time spent in each call, the running time, the time per call. These numbers are interesting for finding which is the slow call, the slow line, and which lines are being repeated the most — because you can have an idea in your head of how the application is performing, but until you measure, it's just a hypothesis. timeit: the timeit module is another standard Python module that does what it says — it times how much time it takes to run your command. You can use it to run a script from the command line, or you can embed it into Python code. Here it's calling just a method, and timeit runs this snippet many times and calculates the average, the best, and this kind of metrics. The idea here — it says "best of three" — is that usually, as a baseline, you want to use the best possible time, because in your system you have many different variables, and the best time is when the cache is already populated, when the CPU is not doing other things, when you are not having network problems. So the best measurement works well for knowing a lower bound of your system. ipdb: pdb is the Python debugger, and ipdb is the same but for IPython, so it has a few more features, like better tab completion, syntax highlighting, better tracebacks, introspection. You just use ipdb.set_trace(), and then when your code gets there it will stop and give you a shell to keep executing Python. So, from a normal Django application that you are running on your machine, you just put a breakpoint there, and then the runserver will stop and you can see all the variables that are there; you can keep running, and you have a few commands to continue or go step by step. This is very useful, because when you detect a bug you can just add this and check it — no need to go through the tracebacks. Another very important tool: the Django Debug Toolbar. The Django Debug Toolbar consists of a series of panels, and in those panels you can check things about everything, and you can add more panels. So you can do the profiling here, you can see the SQL queries, you can select to explain the queries, you can see what's in the system right now, you can see how much time it takes, and also things about redirections, about the templates, about the cache usage. And this, for me, is the most useful tool for debugging, because when you have a theory, a hypothesis of how your system is working, but the numbers don't make sense, you can go line by line, view by view, and check what's really happening.
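A minimal sketch of the three standard-library tools just mentioned (cProfile, timeit and pdb/ipdb) — not code from the slides, and slow_function is just a stand-in for whatever you suspect is the bottleneck:

```python
import cProfile
import pstats
import timeit


def slow_function():
    # Stand-in for an expensive view or ORM call.
    return sum(i * i for i in range(100_000))


# 1. Profile: which calls dominate, and how many times are they made?
profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

# 2. Time it: the best of several repeats gives the lower bound discussed above.
print(min(timeit.repeat(slow_function, number=10, repeat=3)))

# 3. Drop into a debugger wherever you need to inspect state interactively:
#     import ipdb; ipdb.set_trace()   # or plain pdb from the standard library
```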
It's very useful And then Django the book panel not Django the book toolbar but panel This is an extension for the Chrome browser because some calls don't return HTML if we go back This is in in this picture we can see this is the result of a single page of your application Then you click on a button that says Django the book toolbar and it opens all of this Okay, but this is an HTML view and all of this is HTML and give a script But sometimes we are not using HTML you are doing an API or Ajax request or non-html Responses, you're returning a dynamic JSON whatever in those cases you cannot embed the HTML Inside the view so the Django the book panel allows you to use the browser You have this little extension and you can check all the into the server you can check the The same things as if you had the Django the book this is very useful to Okay Tips and tricks now that we know the basic concepts and and how to measure and how to find the bottlenecks We will see a few best practices and a few possibilities on how to fix performance bottlenecks, okay So first the most important databases databases are usually slow because The indexes are wrong Indexing a database. Well, it's an index. It makes your queries faster But you need to have the right indexes databases are not as intelligent as they seem You need to be very specific on what you want to index. So in example All the time the primary key will be indexed, okay, but then you cannot indexes for single fields the dv index or Composed indexes for more than one field index together. The first one is defined in the model in that field Yes, a dv index equal true And the index together is defined at the at the meta of the model and then you there put a race of Many fields. Okay, so in example Yeah, so you can see this this happened to me a few days ago You can have your idea on how it's working But then it's slow you think it's using an index because it's a very simple query. Okay, you are using a Daytime field so you are ordering a list of a list of rows by date, but it's very slow What's happening? If you use the the book toolbar or any other of the other toolbars You will see at some point that the problem was in postgres in my case. It was not using the the index why thanks to the book toolbar I found that it was a Multiple index it was indexed by creation time and you you ID Okay, why no idea this was inside the Django admin so I understand that it in the index it shorts By time and by you ID, but once I found it fixing it was just adding an index and it went from 15 seconds to 3 milliseconds The the difference is huge and this table was very small very small just 3 and a half million rows So for bigger tables, it's very important to be sure that you are using indexes for your most used queries And also if you are using the the Django admin, of course What's the bad thing about the indexes why don't we add indexes to everything why it's not automatic to to have indexes everywhere indexes occupy space Space is cheap but space on the database well It's it's problematic and also having indexes make slower writes because if you insert a row it has to update all the indexes Okay, if you have two indexes, it's okay if you have 20 set 20 indexes It will get more complicated and you can do permutation for multiple indexes of many many fields and it will get slow very fast So use the same indexes Only when you need them to and be sure to profile and to be sure that it's using the right index The difference is huge. 
Okay, another tip for the databases: doing bulk operations. For example, if you have to do an initial ingest of data and you have thousands of rows and you go one by one, it will be thousands of writes to the database. Instead, you can use the bulk_create method and do bulk insertions of, I don't know, a thousand at a time, or ten thousand at the same time. This goes much faster: the database has no problem adding ten thousand rows at the same time — that query is just a bit slower, but the difference in the number of queries is huge. Each query you do to the database has an overhead of going to the remote database and everything. Sometimes you test on your laptop and it goes very fast, but once it's on Amazon or another provider, you will see the overhead very quickly. So you can do bulk operations for creating, you can do bulk updates and you can do bulk deletes: instead of iterating over all the objects, the models, the rows, you can use these methods. Update is a bit more complex. Why? Because usually, when you want to update a field, you know what you want to put into that field; but if you want to update a queryset of many rows, usually the value you want to set is dynamic, because setting the same value for all of them is not a common use case. So you can use the F expressions, which are for setting field values based on dynamic data — dynamic data meaning things that are already in the database. For example, you want to increase a counter, so you could use an F expression to say: set the counter to the counter plus one, these kinds of things. By the way, these are links — I will post the slides and you can check all the links; most of them go to the Django documentation, but others go to other resources. And delete is very easy: no parameters, you just delete a full queryset in a single operation. Another thing to keep in mind is that bulk_create is not using the save() method and is not sending the signals, and the same goes for update. If your logic depends on Django signals on a given model to do something each time you add a row, these methods will not call the signals, so you have to manage that separately.
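A minimal sketch of those three bulk operations (a hypothetical Item model with name, active and counter fields; not code from the slides):

```python
from django.db.models import F
from myapp.models import Item  # hypothetical model

# One (or a few) INSERT statements instead of thousands of save() calls.
Item.objects.bulk_create(
    [Item(name="item %d" % i) for i in range(10_000)],
    batch_size=1000,
)

# One UPDATE for the whole queryset; F() increments using the value already
# stored in the database, without reading it first.
Item.objects.filter(active=True).update(counter=F("counter") + 1)

# One DELETE for the whole queryset.
Item.objects.filter(active=False).delete()

# Caveat from the talk: none of these call save() or send the usual signals.
```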
It's a bit slower than a single query, but much faster than doing n queries Okay And the second one is a bit more complex is for many too many fields when the relationship is not only a foreign key But you have more fields This does an extra query before the normal query this will do an extra query this will get all the IDs of all the related objects And it will do the join in Python This is important because sometimes the databases are very slow doing joins if you have if you don't have the indexes or if it doesn't fit in memory it has to go to The file system or whatever this make sure that you will get all the related many too many objects With just an extra query so you will do two queries instead of and Next Slow admin I use the young admin a lot. I usually extend the admin and not And one thing I like is that the default values for the admin well It's not a lot of fields. It doesn't grow very well You can do many of the tips we have seen an example least select related will do the select related thing inside the inside the Model admin, okay, you can do overwrite the get query set to the prefetch related So the get crazy method you just extend it and call with prefetch related to whatever fields you you need Ordering the ordering field make sure that it's using an index and the same for the search fields If you are doing searches on an indexed fields, it will be very slow Now for for for key and many too many fields and you can do to think with only means that instead of in a sample We have a list of all the cities. There are thousands of cities And and this means that it has to an extra query to the database to get all the all the cities and render it And you will have a select box with a lot of things. It will be slow Not on the database part, but on your machine the browser will get very slow So if you do read only fields, it will not be a select field. It will not be editable So you will have just the the current value and this can be useful because most of the times in the admin You are not changing these kind of relations But if you need to change them then the next one wow ID fields This is a different field that instead of listing all the possible values in this foreign key It will display I should have put a picture here. It will display just the just the ID Okay, a little button for search and a little button to delete so an example We have a list of cities we would have a field and say city 45 and and that would do the relation without spamming lots of HTML entities into the into the browser the row ID fields is cool, but it's not very very Beautiful it's better to use the Django Salmonella external application It's like the row ID fields, but it tells you the name of the field that you are using it's a little more more beautiful and more Usable, okay, so with this Django Salmonella instead of seeing city 45 you will see city Barcelona It's more usable by by the end user Another another little trick extending the admin templates in this case if you extend the the filter template The filters are what in in the admin in the sidebar at the right you have all the possible All the filters that you define in the model admin will be there if you have in example the city You have thousands of cities it will take a lot of space and and it's it's slow in the in the browser. 
It's It's a lot of craft So you can extend this filter and instead of doing a standard list using a selector and its team will select or the Standard form in this way it will occupy less space and it will be just a normal form that when you click it Filters you by this foreign key Okay Now I talked a lot about the catch and the catch is difficult because you have to Invalidate things and you have to know what to catch and you have to do many difficult operations If you know the centers that says that in computer science. There are two difficult things Cache invalidation and naming things okay cash a lot. It's not a joke. It's a very good software Django cash a lot is a system for caching the orm queries so the database accesses and Automatically invalidates them. This is a very cool project This is done by the there was another project called Johnny cash This is from the same people I think and this manages automatically the caching on the orm level He introduces itself at the at the middle of the orm and it does caching at table level This means that if the table doesn't change the the cache is still there once the table changes the cache is invalidated What can happen here? You have a table and you are writing all the time This could be a problem because you will be invalidating the cache all the time anyways I did some small tests and even if you do that having the database cache in the orm Improves your performance because usually inside of the same request You could be accessing the same that is the same row many times Okay And if you are not caching that just by caching it inside the request you can avoid a few extra queries So even if you are having a lot of rights Well my my I would say that you have to measure if it goes better for for your system Okay, this will take some space in the cache of course, but having an automatic system This project has very good code coverage and and well it's very it's the low-hung in fruit You just install this it's very easy to configure and your application gets much faster for most of the usual cases Of course if you have some specific things you can use the low-level API of young o cash a lot and and do caching in in in specific places or disable some tables or accommodate to your own case Use and workers do the slow stuff later Sometimes you have to do stuff that is slow could be CPU bound So the CPU is working a lot because I don't know you have to do generate a PDF Put it inside a ship file. Okay, this kind of things takes a lot of CPU. You don't need to do that synchronously Okay, that can be You just And a job system where you queue the stuff and and and you do it and You will have some your application servers, of course But then some workers and these workers will just run the tasks. Okay, the task can be Any kind of tasks not only CPU bound sometimes you have I don't know you have to go to a URL and do a post and That's that could be slow if you put it into a queue. You don't have to wait for this blocking operation to It can be done later. Okay And if you If you want to improve the performance, you have to identify these slow stuff and move it to another place This is also a very basic tip Cut sessions this is easy You just set this this setting in the Django settings and you have two options For non persistent sessions or for persistent sessions. So by default Django will save the sessions in the database Okay, that means that each time a user goes into your application. 
You will do a read to the database doesn't make sense I mean why you can have those in in the cache and and that's it so if it's non persistent it just keeps it in the cache and Once the cache is deleted the user will be logged out of your application But if you want persistency It's very similar it's caching the reads but then it will write. Okay once it's Not so often as the default settings, but it will eventually write the session Still all the reads will be avoided Persistent connections. Yeah To the database another Django setting that by default is set to false and you have to To enable it. Okay, and this says that Connection to the database can persist for right on all 60 seconds Otherwise it will close the the connection and open it again and close and open you can set it to true And then it's forever but connection I think it's better to close the connection after a few time because if you are having connectivity issues or issues with the database or the Observers or whatever goes wrong and keeps the connection The connections open you can have trouble because other workers won't be able to connect or other observers won't be able to connect So this should be set for I don't know a minute five minutes something like that The important thing here is not doing lots of connections all the time in the same second doing thousands of connections You want to avoid that? More things okay This is not Performance but scalability you do I did you do I this are the universal unique identifiers and by default Yeah, by default Django use normal primary keys sequential it is so the first row will be one the second will be true You do it is are different unique identifiers are not sorted are not ordered are not incremental So it's time a you ready is generated. It's totally random the chance of collision is It's calculated and it's negligible. So it will not collide Even if it collides you would get an error in the other is I mean oh these this key already exists Advantage is of using you you are this You guarantee the uniqueness so you won't have collisions What could happen here if you have two application servers the database gets disconnected or they are in different times on So the database gets a split it or whatever and you could have a new user ID 25 and in a disconnected Now disconnected machine creating another user ID 25 same ID what happens then you have a conflict You have a collision and that's not that's not nice Also, you do I this are very well indexed because they are using native fields. They are using Hexadecimal values, so it's not looking for a string. Okay. 
It's it's something very very well performant so Using you like this from the beginning makes it very easy to do that of a sharding you don't this D If you don't do this then later you will have to do a database migration to use to add you do this and remove the standard IDs in all the places and in the foreign keys And it's a crap going to through all the foreign keys changing these new you are this so do this at the beginning of your Of your project and then when you want to share the database it will be much easier Okay slow test not as a quality issue, but this is important anyways slow test used to be a bigger problem because right now we have Since Django one dot eight we have the keep db Option and since one dot nine we have the parallel option before that you had to do different hacks to avoid First the migrations if it's time you have to run the migration You have to run all the migrations for all the apps you can have tens or hundreds of migrations in Django In Django one dot seven Consolidating the migrations into a single one was not working very well was not possible Django one that it worked better, but running all the migrations make the test very very slow So when you run the test just use the keep db and it will not do the migrations. Okay running parallel This means that each test case will be run in parallel at the beginning the unit test system will create Instead of one database many databases if you combine this combine this with the keep database It will be very fast and in each of the databases it will start running the test cases Also for faster test you can disable things that you are not using example middle wars middle wars are usually a Suspected bottleneck because if you have custom middle wars doing lots of stuff It will get slow if middle wars usually go to the database or whatever and do the stuff do Validation do authentication these kind of things Installed application. It's not a big difference, but anyways if you are not testing an app remove it from the installed apps password hashes this is standard in the Django documentation use easier hashes and then Defy for example that it's not valid it's not valid for production, but for for the unit test It's enough because you are not testing the password hashes. You are testing the user creation example Also logging you can disable all the login with just one line Also use mocking whenever possible mocking means that instead of going to an external service an external database an example or running a Low program you write a mock that simulates that it's It's this external call so in example if you are connected to amazon s3 to upload databases And you do that. I don't know a thousand times inside your unit test that will be slow If you do a mock and just keep those files on the local system or in memory or in depth null Whatever it will be much faster because you will not have the overhead of going to the internet all the time Also for the philosophy of the unit test. 
It's better to test only your own logic, not the external services, which may or may not be working. So, after all of this, conclusions. The first thing you have to do is to monitor, to measure, to find the bottlenecks. Once found, optimize only the bottlenecks. Go for the easiest stuff: the 20 percent of the lines that spend 80 percent of the time. So find those lines, go for those lines, and don't try to optimize everything, because trying to optimize every line defeats the purpose. And once you have fixed the bottleneck, you have fixed that 80 percent; okay, but now, in the remaining 20 percent, 80 percent of that will be in another bottleneck, so you have to keep doing this again and again, right? And repeat. A few external resources. The official Django documentation is awesome; it has a section on performance, on the database, on scalability, very good. A book, High Performance Django: this book is very good, very oriented to production systems, more about performance than scalability. It's a must-have if you have Django systems in production; it tells you everything. In my talk I have focused only on the Django things; in this book you will see other things, about using nginx, proxies, Varnish, external systems that you can use to make it faster. So you don't scale Django only with Django things, but also with external things. A blog: the Instagram engineering blog. Instagram, they say, is the biggest Django project deployed in production nowadays, and in all its history, I think, and they post a lot of use cases. They posted how they scaled all their systems when they started with the Android application a few years ago, when Facebook bought them, and they post things all the time as engineers; their data science blog is interesting too, they talk about scalability issues. And this is a document; you can click here or google for this line: "latency numbers every programmer should know". It's a link to a university page, and it tells you how much time it takes to go over a connection inside a local data center, over a connection from Europe to the United States, how much time it takes to write one megabyte to an SSD, to read from SSD, to read from memory on another machine in your data center, L2 cache, L1 cache, everything: how long it takes to run an instruction inside the CPU, a cache hit, a miss, whatever. This resource is very important because, and it happened to me, I thought that, for example, going to the local hard drive would be faster than going to an external machine in the same data center. It's not true: going to another machine over a network connection, if that machine has the data in memory, is much faster than going to the hard drive. So you have to learn these numbers, play a bit, and accommodate to them. Okay, and that's it. Thanks for attending. The slides are already posted at SlideShare, and at Lead Ratings we are looking for engineers and data scientists, so feel free to contact me. Okay, and that's it. Now, if you have questions... anybody? Okay, nobody understood anything? [Question from the audience, partly inaudible, about memory leaks when processing tens of thousands of tasks.] I usually deploy often, so memory leaks are not a problem; I deploy often, so Celery is restarted, so memory leaks are not usually a problem for me, but yeah, that can happen, of course. Sorry, again? Could be. Sorry, I can't hear you.
I have tested ZeroMQ and I liked it a lot, but usually I go with the easiest option and Celery was good enough; of course there are many different systems. If your jobs are not time-critical, Celery is okay, but if you need more performance there are better systems. Okay, so if you have any more questions for David, just grab him during a coffee break or during lunch and he'll be happy to answer all of them.
David Arcos - Efficient Django Does Django scale? How to manage traffic peaks? What happens when the database grows too big? How to find the bottlenecks? We will overview the basics concepts on scalability and performance, and then see some tips and tricks. These statements will be backed up with experiments and numbers, to show the timing improvements. ----- **Does Django scale?** How to manage traffic peaks? What happens when the database grows too big? How to find the bottlenecks? We will overview the basics concepts on scalability and performance, and then see some tips and tricks. These statements will be backed up with experiments and numbers, to show the timing improvements. Main topics: - System architecture - Database performance - Queues and workers - Profiling with django-debug-toolbar - Caching queries and templates - Dealing with a slow admin - Optimizing the models - Faster tests
10.5446/21124 (DOI)
Okay, hello, how are you? Hi, my name's Dave. I work at Cobe, cobe.io. Today I'm going to be talking to you a bit about managing Kubernetes clusters from Python using the open source project we wrote called Kube. First I want to share with you my journey here, briefly, and in some ways mitigate the, well, inadequate and poorly prepared presentation that I'm going to give you today. I submitted my talk to EuroPython a while back, and I was about here when I saw the email saying my talk had been accepted. I was like, hey, EuroPython just called my bluff, and I'm not prepared to talk. I was like, oh, it's okay, I'll work on it while I'm out here. I was over in Portland at the time. Nice. It was also quite distressing to see what the schedule was. So: okay, Dave, don't be too upset, why don't you go on a little drive? I'll go from Portland to New York. So I hired myself one of these, and I started here. By the way, if any of you have seen the 1985 film The Goonies, the early Steven Spielberg film, the part of the coast where that was shot is a really cool place, and I saw a lot of that on the way. I saw really cool things like that, and I saw some not so cool things like that. I do just want to put up a picture of what that felt like; basically, I managed to avoid it. So after that many miles I got to my destination and I was really, really happy. And then I remembered that I should be doing this. So in my hotel room, and I arrived in Bilbao yesterday, I tried to get some inspiration and start to put a decent presentation together. So I went out and I looked at that, which was really cool. Did anyone go to the town square last night to see that? Really cool music, really, really cool. And I was a bit peckish, so I had one of those. And one of those. And one of those as well, and I was really, really, really full up. I thought, no, I'm going to get a picture of that; and it had my name on it, coincidentally, and I didn't even ask. So, we write monitoring software. A long time ago I started to work on a project, a system that monitored monitoring systems, which was a bit of a silly idea and didn't work out all that well. So we had a slight pivot, and we took the agent technology from our monitor of monitors and turned that into a general purpose monitoring solution. We built up the product over a decade and, to be honest, it was never that great, because there's a general problem with monitoring. Anyway, now we deploy it as software as a service. We chose Google Compute Engine and GKE, and GKE is the hosted Kubernetes implementation that Google provides. We were quite keen on dog-fooding, so we wanted to use our own solution to monitor our stuff, and we also needed to provision monitoring instances for customers signing up to the SaaS, so that they could, you know, click a button and a monitoring instance would be deployed on Kubernetes. So we needed to interact with the Kubernetes API. We're all Python people, so we had to do that with Python. And that's why we wrote Kube. Now, it's important to say that there are other Python API wrappers out there.
And I'd strongly encourage you to go off and look at them, so you can see how rubbish they are and how good ours is. No, seriously, go off and have a look; they might be a better fit for you. And also, if you do end up choosing one of these instead of Kube, it'd be really great if you could come back and tell us what you thought was better, because that would be interesting to know and we could try and fix anything we've got going on in ours. But we wrote Kube the way we wrote it because we wanted to abstract away from what's a somewhat moving target as far as the Kubernetes API goes. It's also got some idiosyncrasies that you don't necessarily need to be exposed to when you're interacting with Kubernetes via its API, so we wanted to create an opinionated version of the API that made things a little bit more palatable for the user of that API. We also wanted a clean watch interface. The watch interface allows you to get notifications of changes to resources within Kubernetes, and some of the other offerings that were around at the time, and some have come and gone since we started, didn't do that very well. We wanted something Pythonic, and we didn't want to use code generation with Swagger, which is a theme in some of these alternatives. But you guys take a look and take your choice. So, I hate it when speakers say "oh, hands up, do you know this, do you know that", but just a very quick show of hands: who has been exposed to Kubernetes? Okay, about a third, okay. I'll try and railroad through this because time is at a premium. So Kubernetes is about orchestrating Docker containers, essentially, but not just Docker containers. Docker came out of chroot'd file systems, and if any of you have been exposed to Solaris and Solaris Zones, it's a similar kind of concept. What it gives you is basically an immutable deployment component. It's easy to author, there's a runtime that runs on many platforms, and it allows you to develop an immutable deployment component that underpins DevOps practices and continuous deployment. Across multiple nodes it's hard to manage Docker containers in the raw, especially for scale and resilience, so that's why control planes like Kubernetes and Docker Swarm came about. And Google were already doing this: Google had a system called Borg, which uses LXC containers, and it manages those across their enterprise. It allowed Google to scale developer productivity and the number of services they offer to their customers, internally and externally, without the corresponding increase in operational overhead. So it was obviously a useful technology. Kubernetes has had an amazing amount of momentum behind it. It's interesting how many "competitors", in quotes, have actually got behind Kubernetes, where initially they'd sought to position themselves as direct competitors to it. They seem to have acknowledged that they each have a particular sweet spot, maybe in cluster management or managing containers at very, very large scales, and they've all sought to accommodate Kubernetes in their offering and in their space. The only offering that hasn't really done that, I suppose, is Docker Swarm, because that is trying to do exactly the same thing. So how does it work? There you go. Happy with that? Excellent. Yeah, it's not very helpful, that kind of diagram, you know, and it just takes forever; I haven't got time.
So I just want to go through some key concepts. In Kubernetes we have the idea of a cluster, which is a single homogeneous cluster of nodes, compute resource. Watch out for a thing called Ubernetes, which is an attempt to federate multiple Kubernetes clusters, so you can have multiple clusters with different shapes of resource running inside them. A node is some resource where pods are scheduled. And pods are the smallest unit of scheduling; they run the actual containers. So Docker, but not exclusively Docker, there's also support coming for rkt containers; the Docker containers run inside the pods, and those are the things that the Kubernetes system schedules. There's the concept of replica sets: a specification that defines the pods and how many replicas of those pods there need to be, for scale and resilience amongst other things. There are also services. Services target pods and expose their capabilities at the edge of the Kubernetes cluster. So you can think about the actual Docker containers, I suppose, as nano-services, the actual pods as microservices, and then the service definitions provide actual services for a consumer. Labels are an interesting thing, and we'll be looking at those very quickly: they're key-value pairs associated with resources within Kubernetes, and they're used by the scheduler to organise the objects within it. And there's lots of other stuff we could talk about, but that's probably enough to get us going through the next part of the talk. So, some key concepts for Kube; we need to get the terminology straight. Right at the beginning is principally the API: the Kubernetes API defines kinds and defines resources. A kind is the name of an object schema, essentially a resource type. A resource is a representation of a system entity that's sent to or retrieved from the API as JSON over HTTP. And there are two types of resources: collections and elements. These are Kubernetes terms I'm using, but they map quite nicely into Kube. So, for example, a Pod is a pod resource, whereas Nodes is a NodeList resource, which is a collection of nodes. So try and bear that in mind. Additionally, it's really important to understand the separation of specification and status in Kubernetes. When an API update is made, the specification of the resource you're updating is changed, and that's available immediately, so it's almost like an atomic operation. But over time, Kubernetes will work to bring the status of the resource whose specification has changed up towards that specification. So the system drives towards the most recent spec, and that makes the behaviour of Kubernetes level-based, not edge-based, which is quite a nice feature. So, okay, now the tricky bit. I'm going to open a terminal window; bear with me for a moment, I'm going to try and mirror this display. Yay, that works. Okay, cool. What I've got running here is... yes, sorry. Yeah, I should have tried that before. Say when. Okay. So I've got a single-node Kubernetes cluster running on my MacBook Air. By the way, if anyone's interested, it was really easy to do; if anyone wants to know how to do that, just come and see me at the stand and we can have a chat about it, it's quite cool. Right. So, what I'm going to do: okay, so, Python, kube. Here we go.
So what you do is just import kube, spelling it right, probably. Remember that America thing, where I was really jet-lagged and up till three o'clock in the morning trying to write this stuff? So, be nice. So, import kube. The key entry point in the Kube API is a cluster, so we can say something like cluster = kube.Cluster(), and we create an instance of one of those things. That gives us a cluster object. Okay, one thing I forgot to mention: when you're interacting with the Kubernetes API, the preferred approach is to run a kubectl proxy, and what that does is proxy the Kubernetes API from wherever it's running to localhost on your machine. So what I've got here is quite simply kubectl proxy; you can see it at the bottom of the screen. If you're running your Python code using Kube in a container, then what you normally do is have a sidecar container inside your pod, so one's running the proxy and one's running your Python code. Okay. The other thing I can do here is specify a URL, if the proxy is running on a non-standard endpoint or port. So we can just say localhost, port 8001, slash api, something like that, and then we get, obviously, a cluster instance. You can use context managers as well, so you can say something like: with kube.Cluster() as k, and then k.nodes; we'll talk about this in a sec. k.nodes returns the node collection for you, and that will become clearer in a minute. So there's a few ways to create your entry point as a cluster. Do you remember I was talking earlier about collections and elements? They're represented here as views and items. So this "nodes" thing here, because it's plural, you can see it's actually a node list; that's a collection, and I can iterate over it to get actual items out. So let's have a look at a few of these. We've got cluster.nodes. The cluster object has a few of these things, many of these things; you can look at the documentation, it's all on Read the Docs. It's a work in progress, but the essentials are in there. And we can see things like the cluster's replica sets, and namespaces, and that kind of stuff. Okay. So we want to get a resource item out of here. I can say something like: rs, because I'm going to get a replica set, from my cluster.replicasets, and I can do a .fetch, and I need to specify the name. Now, something I prepared earlier: this is just kubectl on the command line, and I can get the replica sets, and I know I've got one called service-demo. So I'm going to say: get me "service-demo". But I have to specify the namespace, so I say namespace equals "default", the namespace that it's running in. You see how bad my typing is? I shouldn't be allowed live demos. And then we look at rs: we've got a replica set item now. This is an actual element, as opposed to a collection, and it's got some attributes associated with it. I can look at some metadata and see what the name is, and lo and behold it's service-demo, so I actually got given the right thing; I can see what namespace it came from, and I can also see what labels are associated with that particular replica set. So... sorry, again a bit close to the bottom of the screen. Okay. So, what's important to remember about resources is their version: Kubernetes versions all of the resources it returns across the API.
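Collected into one place, a hedged sketch of the steps demonstrated so far; it assumes a local kubectl proxy and follows the attribute and method names used on stage, which may differ in other releases of the library:

```python
# Sketch only: assumes `kubectl proxy` is already serving the API on localhost:8001.
import kube

cluster = kube.Cluster()                               # default entry point
# cluster = kube.Cluster("http://localhost:8001/api/") # variant with an explicit proxy URL

# Views (collections) are iterable and yield items (elements).
for node in cluster.nodes:
    print(node.meta.name, node.meta.version)

# Fetch a single item out of a collection by name and namespace.
rs = cluster.replicasets.fetch("service-demo", namespace="default")
print(rs.meta.name, rs.meta.namespace, rs.meta.labels)
```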
And if you remember when I was talking about the separation of spec and status: when you get given a resource item back, it's versioned. So we can see here that rs.meta.version is version 1561. One thing I forgot to mention is that these collections are iterable, so I can do cool things like a list comprehension: rs for rs in cluster.replicasets, and you get a list of those things back. So I can build a list saying, for example, node.meta.version for node in cluster.nodes, and that gives me a list of the versions for all the nodes in the cluster. At the moment it's that version; if you keep doing this, eventually you will see a different version coming out. That means the state of the node resource has changed, because something's changed about the node: it's used a slightly different amount of CPU and that's been reported, or whatever. So when you're interacting through Kube, you need to make sure you've got the latest version of the object; otherwise you could be looking at stuff that's wrong or out of date. Okay. So back to labels. Let's have a look at our replica set object and the labels associated with it. It's got one; it's actually a dictionary, so I can do stuff like look up the "run" key and get the value service-demo. It is, however, immutable, so you can't mess it up that way. That was a design decision of ours when we were writing Kube: we wanted every operation to be done by an explicit call, so you update a label using a set call. You could say something like, let's add a new one, foo: labels.set("foo") and we'll give it the value "bar". Not predictable at all, is it? Okay, so we get one back. Let's have a look at rs, see if anyone knows what's going on here: rs.meta... there it was... and it's not there. That's really annoying. That's because I'm actually looking at the old version of rs, the one that got returned from an earlier call, that version. So what I can do is this: I'm going to be fancy and use a list comprehension, because I know what I'm expecting: rs for rs in cluster.replicasets, and I'll take the first value out of that. Let's have a look at rs.version, and lo and behold, it's a slightly different version. Yay. Okay, now let's have a look at rs.meta.labels, and we can see that our foo attribute is on there now. So that's really nice. Updating a label is pretty much the same as creating one, so I can do that; it returned me a new replica set, which I didn't assign to a variable, so I'll do that, look at rs.version, we've got a new version, let's have a look at the labels and we've got "baz" set on there. Cool. So, not to go on about labels too much: one, it was easy to demo, and the other thing is that they're a really good way of managing your Kubernetes cluster. If you want to manage the way your resources, you know, pods, services, etc., are handled, then setting and resetting labels is a good way of doing that. Okay. We can also delete labels: rs.meta.labels.delete, and we say we want to delete foo. I'm not going to be caught out a second time, so I'm actually going to assign that to rs, and then look at rs.meta.labels, and foo is gone. Okay. So that's kind of the end of the live code demo bit.
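And the label handling from the demo as a hedged sketch, again following the API shown on stage rather than a verified current interface; each set or delete call returns a fresh item at a newer resource version, which is why the result is re-assigned:

```python
# Labels behave like an immutable mapping; all changes go through explicit calls.
rs = cluster.replicasets.fetch("service-demo", namespace="default")

rs = rs.meta.labels.set("foo", "bar")   # returns a new replica set item with the label applied
print(rs.meta.version, rs.meta.labels)

rs = rs.meta.labels.delete("foo")       # likewise returns the updated item
print(rs.meta.version, rs.meta.labels)

# If you held on to an older item, re-fetch rather than trusting its stale version.
latest = cluster.replicasets.fetch("service-demo", namespace="default")
print(latest.meta.version)
```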
So I'm going to go back to the presentation. Okay. Thank you very much. Okay, so, briefly, some of the features that I haven't got time to demo. In the latest version of Kube we've got creating and deleting resources, which actually makes it quite useful; you can go in and create pods, delete pods, replica sets, services, namespaces, all that kind of stuff. It's just a simple create call: you pass in a JSON specification and it sends the call to Kubernetes. We've also got a watch API implementation, which, you know, come by the booth and let us show you, because it's really cool. My colleague who wrote that bit actually wrote a blog post about how tricky it was, and he's done everyone a great service, because he's insulated you from all the horrors of how to do watch support using Python over HTTP. And yeah, it's neat. There's also, if you remember the fetch command that I used to get resource items from collections, a filter capability, so you can filter the returned results on label values, which is also really nifty and really cool, and I didn't get a chance to show you. Finally, the cluster instance, which is your entry point, has a proxy to the Kubernetes API, so if all else fails and you want to get at the actual API in Python while you're using Kube, you just use cluster.proxy to do that. Okay. So, time for questions. It's on Bitbucket, and I'm really interested to hear if you all think Bitbucket sucks and it should be on GitHub, or if it doesn't really matter to people; I've had a sharp intake of breath from some audiences when I've asked them about that. Anyway, that's where it is; happy to move it. Check us out at cobe.io. I'm Cobe's CTO. Follow me on Twitter, because I'm funny. And yeah, I'll take questions if we've got time. Oh, no. Thank you. Hello. So I have a kind of question: which days are you here? Are you here all of today and tomorrow, and throughout? Okay, so the question was, are we here, can you come and see us? And yes, we're in the vendor area; we've got one of those little booths on the green and yellow carpets. You'll see that fancy graphic up on a monitor, and we can show you where we've got to, still in beta, but we can show you where we've got to and you can have a chat with us. And my colleague, by the way, is one of the developers on pytest, so some of you may have heard of him anyway. So, yes. What version of Kubernetes does this work with? Okay, so the question was, what version of Kubernetes are we working with here? It wasn't. What does this version on the resource mean, you say it changes when something happens? Okay. So it's just an opaque number, and it represents a version of the resource as of the last time the resource changed. Now, depending on the resource type, the kind, that could be anything. For example, for a node it could be because some of the node's attributes have changed; it could be because a label has been updated on a replica set. That's when the version number is incremented: not just when you make the call, but when Kubernetes itself changes the resource. No. So what I should have said was: don't rely on the version numbers, just always get the latest version of the object. I showed you the version numbers; I was kind of advised not to show you people the version numbers, but I just thought it was an interesting thing, so I did that. Yeah. Any other questions? I think you had a question. Okay.
I think we've got time for one more. So, I'm using [inaudible] for running tests in Kubernetes, and I was wondering how you guys represent namespaces? Okay, come to the booth and ask my colleague, because, yeah, he'll give you a really good answer; I'll give you an average answer, he'll give you a really good answer. So thanks, guys.
David Charles - Managing Kubernetes from Python using Kube Kubernetes is the Google Borg inspired control plane for Docker containers. It has a great API but needs a load of HTTP client code and JSON processing to use it from Python. This talk introduces Kube, a Python wrapper around the Kubernetes API that enables you to manage your Kubernetes cluster in a pythonic way while avoiding any Kubernetes API peculiarities. Programmers and operations folk who are interested in interacting with the Kubernetes API using Python. ----- Docker has had a transformative influence on the way we deploy software and Kubernetes, the Google Borg inspired control plane for Docker-container- hosting-clusters, is gaining similar momentum. Being able to easily interact with this technology from Python will become an increasingly important capability in many organisations. I'll discuss what the motivations behind writing Kube. We'll dive into Kube using the Python interactive interpreter, getting connected to the API, and simple viewing and label update operations. Finally I'll discuss more advanced resource management activities like Kube's 'watch' API capability. ## Objectives Attendees will learn about the key concepts in getting resource information out of their Kubernetes cluster using Kube. ## Outline 1. Setting the scene (3 minutes) 1. Other Python kubernetes wrappers (2 minutes) 1. Kubernetes concepts quick recap (5 minutes) 1. Dive into Kube in the Python interactive interpreter (10 minutes) * Outline prerequisites * The entry point - a Cluster instance * Views and Items - two important Kube concepts * Item meta data: labels and versions 1. More Kube features (5 minutes) * Creating and deleting resources * Using Kube's Watch API support * The cluster proxy attribute for when you need to get at the actual API. 1. Q&A (5 minutes)
10.5446/21125 (DOI)
I'd like to introduce Dmitry Trofimov, who's the team lead and a developer on the PyCharm team, and is going to talk about profiling. Thank you. Hi. You are brave people who are interested in profiling and aren't afraid of talks marked as advanced. Actually, when I saw this talk in the schedule marked as advanced, I was a bit scared myself. It won't be that hard, I hope. So, first, I'll briefly introduce myself. My name is Dmitry Trofimov, I work for JetBrains, and I'm team lead and developer of the PyCharm IDE. My talk won't be about PyCharm directly, but I will use its debugger as a case study for profiling and optimization. If you want to discuss anything about PyCharm, just come to the JetBrains booth in the expo hall to talk with the team. Being involved in the development of PyCharm, I have done a lot of different things, but the runtime aspects of Python, like debugging, profiling and execution, interested me more. Today I want to show you how the usage of a statistical profiler can help to optimize a program, and this program, as I've said already, will be a Python debugger. I will try to stay at a high level, using the debugger as an example, and touch its details only if necessary. So, let's begin. "The best theory is inspired by practice. The best practice is inspired by theory," said Donald Knuth. I like this saying. What I'm going to show today is inspired by practice: it was a real problem, and to some extent still is, and the approach, the solution to it that I will show later, was also real; it was actually done at some point, and if you're interested, you can look into the code. Also very interesting: when preparing for this talk, I tried to rationalize things and to look at the process which happened in the past from a bit more theoretical perspective, as if I did it again, but more in the right way. And actually that opened up some knowledge for me and gave me some ideas that I will implement in the future. And I hope that you find something interesting in this talk too. So, as happens quite often in our software development work, we start with an issue ticket in the bug tracker. The issue says the debugger gets really slowed down, and it provides a code sample. So we see clearly that this issue is about the Python debugger in PyCharm, the PyCharm debugger. That's the part of PyCharm that's written in Python. It's the same debugger that's used in the PyDev IDE; it's an open-source project that is maintained by Fabio Zadrozny, the author of PyDev, and it's also maintained by the PyCharm team. To understand better how the debugger works, I recommend listening to the recording of my talk at EuroPython 2014, called Python Debugger Uncovered, but now I will remind you of some basic concepts. The PyCharm debugger consists of two parts. The part on the IDE side, or the visual part, is responsible for the interaction with the user; it communicates with the second part, which lives in the Python process. This second part, the Python part, receives breakpoints and commands via a socket connection and sends back some data if needed: the data can be values of variables, stack traces, and notifications about breakpoint hits. So it's a Python application with some threads, IO and a separate event loop, and it's always running in the background of the process, and all of that can lead to some performance overhead. And the core of the Python debugger is the trace function, which is actually the window through which the debugger looks at the user code and sees what's happening there.
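As a minimal illustration of what such a trace function looks like, here is a toy, self-contained example of the mechanism (not the PyCharm debugger's actual implementation); the API itself is explained next:

```python
import sys

def trace(frame, event, arg):
    # Called with a 'call' event for each new scope; returning the function
    # keeps it receiving 'line', 'return' and 'exception' events in that scope.
    if event == "call":
        print("entering", frame.f_code.co_name)
    elif event == "line":
        print("  line", frame.f_lineno, "in", frame.f_code.co_name)
    return trace

def work():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(trace)
work()
sys.settrace(None)  # always switch tracing off again
```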
Python provides an API for tracing code: the function sys.settrace. It takes a trace function as an argument, and then the trace function is executed on every event that happens in the user program; an event like a line execution, or a function call, or an exception, or a return from a function. There are a lot of checks that the trace function performs; for example, it checks whether there is a breakpoint for a given line, and if there is, it generates a suspend event. So I think you've got an idea of how the debugger looks: there are some threads doing communication with the IDE in the background, and there is a trace function that gets events about executed lines. So let's go back to the issue ticket. When the code is executed normally, it runs for three seconds. In debug mode without a breakpoint, it executes for 12 seconds. But in debug mode with a breakpoint, it executes for 18 minutes. That's very long. So let's reproduce the issue and check whether it actually exists. We open PyCharm and we have this code, and actually, not to wait 18 minutes, we will reduce the code snippet a bit. This code snippet is just a simple function with one iteration through a range; the only interesting thing is that the range is quite big, and we have an increment inside. So let's reproduce the issue. We just run it: it was fast. Then we debug it: it was a bit slow, but also fast. And then we place a breakpoint and... then it runs, slowly. Yes. So the issue exists. Let's analyze it. We have here three different cases: normal run, debug without breakpoints, and debug with a breakpoint. And actually, as we can place the breakpoint on different lines, there are more cases: debug with a breakpoint in the function, debug with a breakpoint in the same file but not in that particular function, and debug with a breakpoint in some other file. But testing shows that the last case behaves the same as debug without breakpoints at all; a breakpoint in some other file doesn't affect performance. So we won't look at that case. So basically we have four different cases, and in the two cases with a breakpoint in the function or in the file, the debugger works slowly. William Edwards Deming, the famous engineer, statistician and management consultant, said: "you can't improve what you can't measure." So before we do anything else, profiling or optimization, we should be able to measure the performance of the thing we want to make faster. In our case, the core of the sample code is an iteration, so we use the time module to record how many seconds it took for the iteration to complete. That will be our simple measurement. And after we apply this measurement to our cases, we see that the two cases of debugging with breakpoints actually work 100 times slower than a normal run. Which is a bit sad, but who knows; maybe in this particular case, with this example, it's not possible to do any better. So we need to compare this with something, with some program which does the same thing and has more or less the same functionality. And we choose pdb for that. Although it is less functional than the PyCharm debugger, it is sufficient for our comparison: you can place a breakpoint and pdb will stop at it. It is also written in Python, so it is in the same class; it wouldn't make any sense to compare with something written in C, because that would be a different class of application.
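The measurement itself is nothing more than wall-clock timing around the sample loop; an approximate reconstruction (the exact range size is not given in the talk, so the number is illustrative):

```python
import time

def do_work():
    total = 0
    for i in range(10000000):   # "the range is quite big"; the size here is illustrative
        total += 1              # a breakpoint placed on this line triggers the slowdown
    return total

start = time.time()
do_work()
print("iteration took %.2f seconds" % (time.time() - start))
```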
So pdb is in the standard library, so it sounds natural to take it as a performance standard, and now we can do benchmarking. After we take pdb as the standard, we can apply the same measurement to it, and then we can compare the results with our debugger, which now becomes the baseline in terms of benchmarking. And what we see is that pdb, while being a bit faster, still suffers from the same problem: in the cases where a breakpoint is set, the performance drops dramatically, but still it is a bit faster; it takes five seconds instead of nine. So we can try to reach its performance. And the first thing we need to do to make the code faster is to find the bottleneck. It doesn't make sense at all to optimize parts of the code that don't influence the overall performance, and the part that influences the overall performance the most is called the bottleneck. So let's find it, and the best way to do that is profiling. Profiling is a way to look at your code from a different perspective, to find out what calls what and how long it takes to run. A profile is a set of statistics that describes how often and for how long various parts of your program are executed, and a tool that can generate such statistics for a given program is called a profiler. Let's use a Python profiler, but first we need to choose one, so let's learn about the Python profilers available. If you are looking for a Python profiler, you will find several of them; the most obvious choices are cProfile, Yappi, and line_profiler. cProfile is part of the Python standard library and is written in C. The Python documentation says about it: cProfile is recommended for most users; it's a C extension with reasonable overhead that makes it suitable for profiling long-running programs. The Yappi profiler is almost the same as cProfile, but in addition it is able to profile separate threads. line_profiler is different from the two previous profilers: it provides statistics not about the functions that are executed, but about the lines inside the functions. Also written in C, it has a rather high overhead, because it traces every line. As cProfile is the default choice and we don't need the features of Yappi and line_profiler, at least not yet, let's use cProfile. We do that in PyCharm. For this case the code is changed a bit, because we need to use debugging and profiling at the same time: we set up the debugger from the source code, we put a breakpoint here, and what we do now is profile it and continue. The task starts, we wait until it finishes... no, sorry, that is not what I wanted to show; let's do that one more time. We continue, the task starts, and we wait until it finishes. Yes. And we look at the call graph. We see here a lot of calls, but actually, if we look closer, we will see that all of them take zero milliseconds; those are internal calls of the debugger. And the calls that took most of the time, there are two of them, are the user code: that's our function and the main work. So basically what we are seeing is that cProfile didn't show us any useful information. Is our debugger unprofilable? Should we use Yappi or line_profiler then? Actually, if we do, we will see that they don't show anything either. So why is that? Why doesn't it work? Okay, to answer this question, we need to learn a bit about how cProfile, Yappi and line_profiler work.
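For reference, this is what a plain cProfile run looks like in the ordinary case, outside the debugger scenario of the demo (a generic usage sketch, not the PyCharm run configuration):

```python
import cProfile
import pstats

def main():
    total = 0
    for i in range(1000000):
        total += 1
    return total

cProfile.run("main()", "out.prof")              # or: python -m cProfile -o out.prof script.py
stats = pstats.Stats("out.prof")
stats.sort_stats("cumulative").print_stats(10)  # top 10 entries by cumulative time
```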
What does deterministic profiling means? There are actually two major types of profilers. Tracing profilers and or deterministic profilers and sampling profilers, also called statistical profilers. Tracing profilers, they trace the events of the running program. And then can be a function call or execution of a line. That is the same as we had with the trace function in our debugger. The disadvantage of such profilers is that as they trace all the events, they add significant overhead to the execution. As for the debugging, Python provides an API for the profiling. The function responsible for that is called set profile. It is almost the same as set trace with the only difference that the function that we pass to their profile function is called, isn't called for every line. It's called only for function calls. All these profilers use a set profile or set trace function to set up the profiling. And that's why they profile on the user code. And our debugger, which also uses a set trace, turns out to be out of the scope of set profile. So all these profilers aren't applicable in our case. So is our debugger unprofileable? Actually, there is another type of profilers. It's called a sample or statistical profilers. Such profilers operate by sampling. A sampling profiler captures the target performance call stack at regular intervals. Sample profilers are typically less specific and have, and sometimes not very accurate, but they allow to run the program at its full speed. So they have less overhead, which in some cases make them actually much more accurate than tracing profilers. Finding a statistical profiler for Python is not that easy as a tracing profiler, as there is no obvious choice. But if you search enough time, we'll find several statistical Python profilers as well. That are start-proof, plop, intlvitune amplifier, and VM-proof. Let's have a closer look at them to choose the one that we'll use to profile our debugger. Start-proof is a sampling profiler written in pure Python. It's open source. It doesn't work, unfortunately, on Windows, on Linux, on Mac, and Linux. It works, but it's quite minimal. And last time it was updated was long ago. Plop, or Python low overhead profiler, is written in pure Python. So actually, it's funny, but it's not that low overhead is it could be. And it doesn't work on Windows neither. And its main page on GitHub says that it's a work in progress, and it's pretty rough around the ages. So not our choice. Intlvitune amplifier. It is very accurate, has low overhead, but it is proprietary and not open source. You need to buy license to use it, which may be not the worst thing, but it isn't as suitable in my case as it doesn't work on Mac OS X. And VM-proof. VM-proof is a lightweight statistical profiler that works for Python 2.7, Python 3, and even PyPy. This profiler was developed by PyPy team and presented a year ago at EuroPython 2015. And since that has been developed and actively enriched, it's stable state. It is written in C, so it has a really low overhead. It's open source and free. And actually, it's very great, it's open source, because it allowed me, for example, to add line profile and feature to it during preparation for this talk a week ago, which would be impossible if it weren't open source. So it seems that it's a profiler of our choice. Let's try to use VM-proof to profile our debugger. And we do that again in PyCharm. So we'll use another run configuration for that, the same source code. 
And we press profile button, we continue, we wait until the main task finishes. Yes, and after it finishes, we see that we have here a call tree. Actually, that is a nice feature of a sampling profiler that provides you with a call tree, where you can see actually how your program was executed with timings. And we see here that the most of the time was taken by our trace function, that is, the trace function for our debugger. So that is the bottleneck. Our trace function itself is a bottleneck. Everything else, not threads, not ION, it's a trace function. So we found bottleneck. What should we do next? To make our program faster, we need to optimize it. And optimization can occur at a number of levels. Typically, the higher levels have greater impact. The optimization can proceed with refinement from higher to lower. At the highest level, the design may be optimized to make best use of the available resources and expected use. The architectural design of the system highly affects its performance. But in our case, we are a bit limited with our design decisions, as we need to comply the set trace API contract. So this optimization level isn't available for us. Given an overall design, a good choice of efficient algorithms and data structures and efficient implementations of these algorithms and data structures come next. Let's see whether we can make an algorithmic optimization. To find the way to optimize our debugger algorithmically, let's ask ourselves a question. Why does debug without breakpoints work so much faster than with breakpoints in the executed file? If we look into the code, we will find out that in case there is no breakpoints in the current file, the trace function returns none. While if there are any, it returns itself. In the middle of this function, we get the breakpoint of the file. If there is none, then we just return none. If we refer to the documentation again, we see in the last sentence that local trace function should return a reference to itself, or another function for further tracing that scope. Or none to turn off tracing that scope. If we don't have breakpoints for the file, we turn off the tracing for the scope altogether. That's why it works very fast. And why don't we do the same for functions, not for file? So we can add a little change. We store the name of the function where the breakpoint is placed. And then if we don't have breakpoints for a function, there is no need to trace it. We just return none. If we measure the performance of this optimization, we see that our function started to work 110 milliseconds instead of 9 seconds, which is a big deal. Beyond general algorithms and their implementation, concrete source-core level choices can make significant difference. So our next optimization will be on the source level. But to make such an optimization, effectively, we need to go to the source lines level. For that line profiling can be useful. But line profiler won't help us in that case, as it is implemented by trace function. Instead, we use a special mode of VM proof profiler, which was introduced there recently. And it enables capturing line statistic from stack traces. Let's use it and see how it works. We will again run it in PyCharm. We will use another run configuration for that with the line profiler mode enabled. And we use the same source. And we press profile button. And we continue. So after it finishes, we see our trace dispatch function. And now what we can do is go to source. 
And in the source, we see a heat map, which shows us which line took the most of the time. And it's very strange, but the most of the time was taken by this particular line. It was 20%. And 330 hits from nearly 1,500. Actually, what that line does is that it checks whether we need to trace this particular thread or not. That's it. So if we see that those two lines in the beginning, they are not related at all to this line. So what we can do is to move this line in the beginning of this function. Let's do that. So we'll just put it here. And also, if we're thinking about how to optimize this source, we can remember that getAtter is not the optimal way to check whether an object has an attribute. Because getAtter makes a lot, a lot different things. So what we can, how we can rewrite this is we can write it. Oh, no. It's not very convenient to write it. Okay, I won't type because my setup doesn't allow me to do. So we rewrite it this way. So we just check whether this attribute which is used as a mark is in the dict of the object. And after we check the performance of this, we'll see that this source optimization actually gave us one second. There are several low-level optimizations which aren't available for Python. Being an interpreter, Python doesn't have build, compile, and assembly phases. Runtime optimization is possible in Python because runtime optimization is, for example, GIT, just in time optimization, but it's available now only for PyPy and not for SIDEM. So what to do? Did the optimization reach its limit? Actually, if all high-level optimizations are already done and Python doesn't permit us to go deeper, we need to go beyond Python. Maybe we should rewrite everything in C to improve the performance. But in that case, we will lose the compatibility with Python implementations other than C Python. For example, Python, Python, Python would become incompatible. And having two implementations of the debugger, one in Python and one in C, will make adding new features a lot more harder. If we could just leave our Python code as it is, but still optimize it a bit on lower level. So solution exists. It's called SIDEM. SIDEM is a static compiler for Python which gives the combined power of Python and C. That is an example of a program written in SIDEM. It looks exactly like normal Python code except that declaration of variables in the second and third line, these declarations have type information which allows SIDEM compiler to generate more efficient code. So this basically provides us with another level of possible optimization inaccessible before, namely compile optimization. Let's add SIDEM type information to our trace function implementation. So after we compile our trace function with SIDEM as a native extension and measure its performance, we'll learn that it's made our debugger more than twice as fast, four seconds instead of nine. So now we can compare all three optimizations combined with the baseline, our initial version of debugger, and with the PDB, our goal. And we see that we have reached the goal and actually done even better. Yay! Happiness. But to better our happiness, I will say that after we compiled our debugger with SIDEM, it became a native code which can be profiled with VMPROP well anymore. So it is unprofileable again, ironically. But there are still ways to profile it which will leave out of the scope of this talk today. And the issue, we managed to double the performance for the sample code from the issue ticket. And we made it better than PDB. 
But still in this particular case, it works slower than run. And maybe it is possible to make it even to work even more faster given the constraints of the set trace API and so on. But still maybe there are ways to optimize it. So we'll leave that issue open for a while. Conclusion. Use profilers to find bottlenecks in your code. There are different profilers. Each has its own advantage. Learn about them. Start to optimize things from the higher level to lower. And to optimize Python all over level, use SIDEM. So that's all for today. Thank you for listening. There are links for VMPROP profiler and debugger if you are interested in looking into the code. Actually, this feature of LAN profiling was added to VMPROP recently. So it's not available in PyCharm yet. But it will be available via a plugin. I will publish it on this week, I hope. So thank you very much. Thank you very much Dimitri for this great talk. So the floor is open for questions. My biggest issue is memory profile. Can you help me do that? Actually, in this particular case, memory profile wasn't an issue. If you are interested in memory profile, I can recommend to look at the VMPROP because it supports memory profiling. The only thing it doesn't support yet is profiling of the native memory allocations. But that's actually quite a hard problem in Python. So if you have a pure Python code, VMPROP can profile your memory. And actually, in Python 3.5, there is an API for memory profiling. I don't remember how it's called. I think it's called memory profiling. So you can look at it also. Any questions? Hi, I'm Ekhovalski and I wanted to ask maybe a new question, but isn't writing the code in a site on somehow also rendering it incompatible with other Python implementations? Yes, that's a great question, by the way. Yes, it does. If you just add a CDF into your Python source, it won't be compatible anymore. But what you can do and what we did in PyCharm Debugger is we had these site and optimizations optional. So the only change that you need to make in your Python source to be it site and compilable is to add these CDF definitions in the beginning. So we used a little template language. So in our source, these CDF definitions are commented out. So the source is running as a normal Python source. But to build Python extension, we uncomment these lines and the source became site and compilable. I can show you, actually, it's better to see than to say. So here we have like this is a custom template, small language, and it says if it is Python, then we have this header. If it's not Python, then it's normal Python. So actually this source works for all Python implementations. And if we need to compile that, we do it with a setup.py where we uncomment this, in case of this site. Any more questions? Well, if not, please join me in thanking Dimitri again.
Dmitry Trofimov - Profiling the unprofilable When a program is not fast enough, we call on the profiler to save us. But what happens when the program is hard to profile, like for instance the Python Debugger? In this talk we're going dive deep into Vmprof, a Python profiler, and see how it helps us find out why a debugger can be slow. Once we find the culprit, we'll use Cython to optimise things. ----- Profile is the main way to find slow parts of your application, and it's often the first approach to performance optimisation. While there are quite a few profilers, many of them have limitations. In this talk we're going to learn about the new statistical profiler for Python called Vmprof that is actively being developed by the PyPy team. We'll see how it is implemented and how to use it effectively. We will apply it to an open source project, the Pydev.Debugger, a popular debugger used in IDE's such as Pydev and PyCharm, and with the help of Cython which we'll also dig into, we'll work on optimising the issues we find. Whether it's a Python debugger, a Web Application or any other kind of Python development you're doing, you'll learn how to effectively profile and resolve many performance issues.
10.5446/21126 (DOI)
Okay, good morning. Thank you for coming. Please welcome Domen Kožar. Hi everyone. Welcome to EuroPython. I'm really excited to be here for yet another year. Just before I start my talk, I'd like to say a little bit about myself so you'll better understand the context of it. I've been interested in software distributions since I was basically a student. I was using Gentoo back then, developing, as a Google Summer of Code project, a tool to package Python software automatically for the Gentoo platform, and so on. And in the last three years I've been working on NixOS. It's a Linux distribution; you've probably heard of it. I'm tackling the problem of how to distribute all those packages to people and make them easy to use, and it turns out it's not easy. So I'll talk about how Haskell does it, how that compares to Python, what we can learn, and which things we already know but just can't get to, because it's complicated, because of our legacy. Currently I'm working for a company called Snabb; we're doing open source networking software and I'm an infrastructure engineer, so I'm setting up the whole pipeline for testing and benchmarking it. So, mypy, right: we've got types in Python, so clearly we are improving Python even though it's more than 25 years old, and Haskell is definitely an inspiration here. So clearly there are things to improve and to learn from. Let's start with how Haskell does packaging. Their tool is called Cabal, and you would have a file like this; it's a special kind of syntax. At the top you'll see some metadata about the package, and at the bottom you can say: okay, my software is a library, but there is also an executable, and it has these dependencies, it lives in this source directory, and so on. One thing you will notice, compared to Python, is that this is just a file that you can parse, whereas in Python we have a script that we have to run for it to actually do something. I'll dive into why that's a big difference a bit later, and how that affects pretty much everyone. So, if you think about the API: in Haskell's case you parse this and get the metadata back; in Python the API is the setup() function, which does everything, like, literally everything. The Cabal format is more approachable, and we'll see that a bit later. One thing, if you were careful enough, is that you'll have noticed the build-type line in that file. If it specifies Simple, that means you can parse that file and you have all the information you need to install that package in Haskell. But you can also say build-type Make or build-type Custom; in the case of Make it will run the makefiles and skip the Haskell build process, and in the case of Custom it will run a Haskell program with specific hooks where you can specify code. So you have the power to go from very simple to overriding everything. Fortunately, the Custom method is not used much, because it's fairly poorly documented, but that's also a good thing, because people fall back to Simple. In Python we have PEP 518, which I think is not accepted yet, but it talks about basically how to hijack the setuptools build process, so you can define your own build process. This is in progress, and you'll be able to avoid touching the setuptools machinery at all and do whatever you want. You'll have the freedom, for example, to write a makefile backend for Python packaging, and of course this will be integrated into the PEPs and all the tools, which is really nice, because finally we'll be able to move forward from the legacy that we've been stuck with.
You'll have the freedom to for example write a make file back end for Python packaging and of course this will be integrated into the PEP and so on and all the tools which is really nice because finally we'll be able to go forward from the legacy that we've been stuck. So just a little bit about advanced features in the Cabal. For example here in Haskell you can say okay I want to have this flag that you can toggle and for example if we have a flag debug we can describe it, provide a default and then throughout the file we can write conditionals like if this flag is enabled then do this and this option is configured and so on. So it's like a very simple language with just if sentences and nothing more. And this way this gives you the flexibility of saying for example if you have a library do we want HTTPS support or not but there are DAO sites also in Haskell for example at runtime once a package is compiled there is no way to know which flags were used. So you just don't know that. And also for example you can say if HTTPS flag is enabled then add these dependencies but it also works the other way around if for some reason those dependencies are in the environment that flag will be enabled by default. So there is some magic and they also have problems and one thing you learn in packaging is that features are really problematic once you start introducing them you have to support them and these kind of things are really really painful on the long run. And in Python we have the PEP 508 which is environment markers. So for example you have a dependency you can say this dependency is only on Python 3 and Windows for example and so on. This is already supported in PEP but not many people are using this because they don't know about it. And the idea is that you don't write in Python if imperative code is saying if we are on Windows you just say this dependency and the marker is Windows and you are done. And this gives everyone else the possibility to also get this information to parse this marker and to do something with that information. And I'll talk about later what we are doing with that. So Hackets is the Haskell Python packaging index. You publish your packages there and people can download them. So just as an example of a feature where it's really painful to support on the long run in Hackets you can edit the cabal files in place through the website. So that means if you release version 0.1 for example somebody can go and edit that cabal file and remove a dependency. And then it's not really 0.1 anymore. It's a whole new thing. It's slightly modified but it's still not the same thing. So in that case the Hackets will add this revision to a line to cabal file. And when you start to think about, okay now I have this local process where I release author and then I can also edit it online. But then what happens if I bump this revision and push it to the Hackets and so on. So there is a lot of stateful things going on suddenly and while this might be a good idea and maybe some like 1% of people want it for everyone else using the Hackets to download packages and to figure out the state this is really, really problematic. Especially if you want to have reproducible builds once you edit that file the hash changes of your tabal. So all the people that say, okay download this file and this is the hash they will suddenly get a mismatch. So we really don't want to enforce a culture where you just say okay it's a new hash whatever because then there is really no point. 
So these kinds of features are present in Haskell, and they are also present in Python, and they give us headaches every day. The API on Hackage is that a cabal file can have a revision and you can fetch those revisions, but you basically end up with two version numbers — first a version and then a revision — and it becomes a pain handling those. Haskell is one year older than Python, and it has also had its own path of improving the packaging ecosystem. Until about two years ago they had this problem: in your cabal file you had to specify the dependencies, and we all know that not all software packages work well together. In the case of Haskell, because there are types, you would get a new version of a package, the types would change, and suddenly your usage of that package wouldn't work, so your code wouldn't compile. This was the biggest problem they had; it's called Cabal hell. When a package got a new version and things wouldn't compile, you would start putting in version constraints and so on, and every developer would do this for himself or herself — a big waste of time trying to figure out which packages really compile together. I'll talk about how Haskell solved this, but just as an interesting aside: Elm, which is another functional language, solved it by saying that in your dependencies you always have to specify the bounds of the major version. So if you say "I depend on package HTTP", it has to be, say, between version 5 and 6. And if you upload a new release and the API changed, it won't allow the upload unless you bump the major version. So the package manager is basically enforcing semantic versioning: it forces you not to change the types, the signatures, unless you bump the major version. And that's really nice. We cannot do that in Python, unfortunately, because there's no way to really check whether an API changed — well, of course we could parse the APIs and so on, but that's a grey area; hopefully something we will be able to do one day. So how did Haskell solve it? The solution was released in 2015, so just one year ago, and it's called Stackage. Stackage is a stable source of Haskell packages: they guarantee that packages build consistently and pass tests before generating nightly and long-term-support snapshots. What does that mean? They built a site where, as a maintainer, you can log in, provide some information and say "okay, I'm the maintainer of these packages on Hackage"; then they pick the dependency tree of your package, build it, check whether all the tests pass, and then they say "okay, these versions compile together", and they provide an API for that, so you can get those versions. If you think about it, in Python we have requirements.txt, but everyone has their own set of versions. In Haskell they pretty much crowdsource that: there is a website where all those versions are tested and compiled, and people use that as a community effort, not as something you commit to your repository and hope for the best. And if you want, for example, backwards compatibility, you depend on Stackage LTS 6, and then all the minor versions — 6.7, 6.8 — guarantee you that the API didn't change, but they still ship security updates and so on.
And when you're ready — usually a new LTS means a new GHC, which is their main compiler — then you go and fix the compiler errors and move to the next version. I think that's very interesting, because they're doing all the work together in one place instead of everyone in their own garden. I'm not really sure we could do something exactly like this in Python, because it's way more complicated than just compiling a package and saying it works, but I still think it would be worth the effort to at least have the major software that we use in Python have these versions managed by the community, instead of this work being done by each individual or company. So yeah, our solution is requirements.txt. Together with Stackage they also released a tool called Stack, which is like a wrapper around Cabal, so it can do more things than Cabal alone. You specify a configuration file like this, and you say: okay, I'm going to use these flags that will be passed to Cabal when compiling; I'm going to use these packages — the package is in the current directory, there's the cabal file, and that's the one we'll use to build this project — and you can have multiple of those. If you think about how Python does that, you have to run something like "pip install -e .". That's imperative: you actually have to run it, and if you develop on two packages you have to run it for both of them. In this case it's declarative: you open that file and you know which packages are being added; there are no imperative steps, you just say "stack build" and that executes the whole thing. So it's way more declarative. And at the bottom you see the resolver. This is where you get the big set of pinned versions: you say LTS 6.7 and there you go, you have most of the packages pinned down and you're sure that those work. There's also a field called extra dependencies, for the dependencies that are not in the LTS — not everything is pinned down, it's a community effort, so if people don't do it, it's not there. All the packages you have that are not part of the LTS you can specify there, and Stack will complain if you don't. It has a bunch of simple commands: "stack setup" is something like a virtual environment for us — it downloads a compiler and sets it up for you based on the resolver that you're using — and "stack init" will generate the files; it's like a mini templating system for starting Haskell packages. So that's what Stack does, and the community was really, really happy when this happened; a lot of problems went away. So, now that we have Hackage with all the packages and Stackage as a set of pinned versions, my job, and what I'm doing, is: okay, how do we distribute all this software to users so that they get it seamlessly and it works on whatever platform? And we're doing this with Nix. It's a functional language, based on the PhD thesis by Eelco Dolstra. It's a very nice and readable thesis, and I recommend it to anyone who cares about packaging and how functional language concepts can change the thinking dramatically and improve a lot of the things we have problems with today. So for Haskell this is, roughly, the stack that we have.
And Nixpkgs is then a collection of Nix expressions that specify how some piece of software should be built, similar to APT or other distribution package sets, except that we're not tied to a Linux distribution: we support Darwin and Linux. Why would you need this layer on top of the upstream Haskell packages or PyPI? Because we take care of system dependencies, we have a build system that compiles these packages and provides binaries for you, and we have a really powerful API, which you'll see later, so that you can actually go in and change those packages and tweak them the way you want: apply some patches, pin versions, or whatever you want to do. So we're not an upstream where you either take what we have or get nothing — you have the power to change it. And most importantly, in Nixpkgs we have all the Haskell packages. We don't compile all of them, because that's a lot of compute power and disk space, so we take only one GHC version — the latest stable one — and for that compiler we compile all the packages, or most of them. But theoretically we could distribute binaries for everything. The user can then say: okay, I have this project, I have these packages, I want binaries — and the Nix package manager will download them, and there you go, you didn't compile anything except your own package. And that's really nice, especially because you can share it between Darwin and Linux. Okay, so how does that work for Haskell? How do we get that done, and why is it so hard for Python to accomplish the same? This is the infrastructure that we have; let me explain what's really going on here. In the upper left corner you see Hackage, the API that has all the packages. Then there is a script that downloads all of them, calculates the SHA hashes and everything, and commits that into a Git repository called all-cabal-hashes, which contains all the cabal files. So you can go through all of them, parse them, generate dependency trees, and so on — whatever you want to do. Then those cabal files are taken and built into Stackage Nightly, which gives you a view of what currently builds and what doesn't; that's a continuous process, of course. And based on Stackage Nightly, when things look okay, they make the LTS Haskell snapshot, which you've seen before — "okay, these versions all compile together now, let's take them". So this is Stackage and the upstream that Haskell provides. Then we have hackage2nix, which parses the all-cabal-hashes repository and the Stackage repository and generates hackage-packages.nix and configuration-lts.nix. In hackage-packages.nix every version of every package is specified, together with how you should build it; this is all generated from the cabal files, a one-to-one mapping. Some features in Cabal we don't support, some we do — there is room for improvement, but in general it works. And the configuration-lts file basically just says: okay, based on the LTS version and the long list of version dependencies, pick these versions as the default ones when you use the Haskell package set. So it's basically pinning in Nix — take these versions — because hackage-packages.nix alone will always use the latest version, which, as I've said before, does not always mean that things will work.
So then there are two more files, configuration-common.nix and configuration-ghc-x.y.nix. Those are the files that have to be manually crafted and maintained. In there, if the cabal file for example doesn't specify its system dependencies, we override it and say: okay, for package HTTP, also take this system dependency, and so on. So basically everything that's not in the upstream cabal file we override there. And in the configuration-ghc files we do the same per GHC version: some GHC versions may need different flags, or tests disabled because they don't work, and so on. Those are the two files that we maintain; everything else is upstream, provided by the Haskell community. Then you have cabal2nix in the middle, and this is what the user gets. When you have your project with its cabal file, you run cabal2nix and it automatically generates a Nix expression out of it, specifying all the dependencies. And in there you can say "I want a specific LTS version" or "I want the latest packages" or whatever. So as a user, you just run cabal2nix on the cabal file and you get basically the whole set of dependencies that you know are going to work. And the generated Nix file has this function called packageOverrides, where you can basically override anything from upstream: take this package but in a different version, take this package but apply this patch — whatever you want. And then you install this software and there you go: you have a binary-distributed Haskell pipeline. All right, I hope that was not too fast and that it's clear enough. So, this is probably the hardest slide, but I would really like to say a few words about the override infrastructure in Nix and how these files all work together; it all fits on one slide, it's just not that easy to explain. Basically what we want to do is some kind of inheritance: we have different files and we want these files to override each other — we want this powerful overriding mechanism. At the top you see a function called fix, which is a fixed point; that's how you do recursion in a functional language. It's basically a recursive function that calls itself: it takes its output and feeds it back into its input, and because the language is lazy, it only does that when you reference something. For example, in the middle I define something you would call a dictionary; in Nix it's called an attribute set, but it's pretty much the same. You can say: okay, I have an attribute foo with the value "foo" and bar with "bar", but foobar is actually self.foo plus self.bar. But that self... that self is really just the input of this function — it's a lambda function, it gets self as a parameter — and at the same time that self is the output of the function itself. So when foobar references self.foo, self is the same attribute set, and the lookup gives foo back. It's just recursion and a function, nothing really fancy. And when you call the fix point of this function and you ask for foobar, you get the value "foobar" back; you basically just evaluate it twice. This is how we handle dependencies and how you can reference different things. Okay, so now that we have that, we want a little bit more flexibility, so we define a function called extend. I won't go into how it works, only how it's used: if you look at the override, that's the API you get, and this override function accepts two things.
Self and super. Self is the input and super is the output of the layer below, so we have the power to look at the previous configuration file and reference either its inputs or its outputs — you have both. In this case I say: okay, take foo from the output, super.foo, and reverse it. So if I then call fix on extend d with this override — that means: extend the dictionary d and override it with this function — you will see that the foobar value is different, because we have reversed foo. That gives us the power to override the dictionary from the top, either through inputs or through outputs, and if you apply it twice — it's not shown here — you get the original foobar back. So this gives you all the power to override these files. Okay, so how do we use that? This is all you need to combine all these files: first there is a fix point, which takes care of the recursion, and then I layer the Haskell packages, the common configuration file you have seen before, the compiler-specific config, the package-set config, and at the bottom all of the user overrides, where you can hook in and change everything about how upstream is built. In Python, currently, we manually edit files. Why? Because of the problem that we have a set of Python scripts, and you have to run all of those scripts to actually figure out what's going on. Someone would need to take the whole Python Package Index and generate, for every package, a JSON file or something with all this information, which we could then use to generate an automated version of this. And we would also need to maintain the pinned requirements globally for the Python Package Index. These are the two big projects that one would need to tackle in order to have the same infrastructure; then we would be able to build basically the whole Python Package Index and distribute it to people. Well, the first problem is kind of being solved — the community is trying to get there — but we still don't have a way to do it today. The infrastructure is improving: we got wheels, we're getting a new Python Package Index called Warehouse, which is going to be tested and easily changeable, and so on. Everything around it is changing, but this is still not doable today. And with the build-system PEP that I talked about before, we'll be able to have different tools, not just setuptools, to build Python packages, and hopefully one day we'll have a standard one that is statically defined instead of a script that you have to run. As for the second problem, crowdsourcing the versions, I don't know if anyone is currently solving that, but it's definitely something that we'll have to solve ourselves, or someone will have to do it for us. So Python is actually doing quite well, in the sense that all of these things are being worked on. But one thing that's really missing, if you think about it, is that it's still not declarative enough. There are so many files you have to touch: setup.py, setup.cfg, requirements, MANIFEST.in, now pyproject.toml is coming, tox.ini — a lot of different things you have to set up. In Haskell there are just two files, the cabal file and the stack file. It's really hard to get rid of these, because this is our legacy, but it's a lot of information people have to know just to use it. This is improving, but it's still an ongoing process. All right, so this talk was based on Peter Simons' talk, Inside of the Nixpkgs Haskell Infrastructure.
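Since the fix/extend mechanism is easier to grasp with something runnable, here is a rough Python emulation of the idea (my own sketch of the concept, not the actual Nix code; Nix gets the laziness for free, so in Python the values have to be wrapped in zero-argument lambdas):

```python
def fix(f):
    """Tie the knot: an 'attribute set' is a function from self to a dict of
    thunks; looking something up on self lazily re-enters f, which mimics
    Nix's `fix = f: let self = f self; in self`."""
    class Self:
        def __getitem__(self, key):
            return f(self)[key]()          # force the thunk on demand
    return {k: thunk() for k, thunk in f(Self()).items()}


def extend(attrs, override):
    """Layer an override on top of attrs. The override sees both the final
    result (self) and the layer below (super), like the talk's self/super."""
    def layered(self):
        super_ = attrs(self)
        merged = dict(super_)
        merged.update(override(self, super_))
        return merged
    return layered


# The example from the talk: foobar is defined in terms of the *final* foo.
d = lambda self: {
    "foo":    lambda: "foo",
    "bar":    lambda: "bar",
    "foobar": lambda: self["foo"] + self["bar"],
}

print(fix(d)["foobar"])                                  # -> foobar

reverse_foo = lambda self, super_: {"foo": lambda: super_["foo"]()[::-1]}
print(fix(extend(d, reverse_foo))["foobar"])             # -> oofbar
```

The second result shows the point made in the talk: because foobar goes through self, overriding foo at the top automatically changes everything defined in terms of it, which is how the Haskell package set, the common config, the GHC config and the user's packageOverrides are stacked on top of each other.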
If you want to see that talk, it goes into the details of how it all works. I hope you've got a picture of the current implementations. And at the same time I would like to thank the Python Packaging Authority and everyone who's working on improving the ecosystem. It's really hard to have 25 years of legacy and just replace all of it and say "okay, we have this new thing, it's going to work out". It's going slowly, but there is progress. So thank you. APPLAUSE So we have time for questions, right? Thank you very much, everyone. Does someone want to ask a question now? No? Thank you. Any questions? Okay, thank you for coming. APPLAUSE
Domen Kožar - What Python can learn from Haskell packaging The Haskell community has made lots of small important improvements to packaging in 2015. What can the Python community learn from it, and how are we different? ----- The Haskell community has been living in "Cabal hell" for decades, but the Stack tool and the Nix language have been a great game changer for Haskell in 2015. Python packaging has evolved since the very beginning of distutils in 1999. We'll take a look at what the Haskell community has been doing in their playground and what they've done better or worse. The talk is inspired by Peter Simons' talk given at the Nix conference: [Peter Simons: Inside of the Nixpkgs Haskell Infrastructure] Outline: - Cabal (packaging) interesting features overview - Cabal file specification overview - Interesting Cabal features not seen in Python packaging - Lack of features (introduction into next section) - Cabal hell - Quick overview of Haskell community frustration over Cabal tooling - Stack tool overview - What problem Stack solves - How Stack works - Comparing Stack to pip requirements - Using Nix language to automate packaging - how packaging is automated for Haskell - how it could be done for Python
10.5446/21128 (DOI)
So welcome to the next talk of this session. It will be about Python in gravitational wave research communities. And ladies and gentlemen, please welcome our speaker, Elena Cuoco. Thanks. Good morning everybody. Thanks for being here at my talk on Python in gravitational wave communities. Before starting, something about me. I am a physicist working as a data scientist at the European Gravitational Observatory in Italy. I'm a member of the LIGO-Virgo collaboration, and I'm also the scientific coordinator of the European project GraWIToN, which has the aim of training 14 PhD students in Europe. I am also a machine learning enthusiast, so in my free time I participate in Kaggle competitions. And I am also passionate about science outreach; this is me dancing with Michael League during an outreach event at the Virgo site. Why gravitational waves? You have heard that this year we gave the announcement of the first detection of gravitational waves. A new era has just started. In September we made the first detection of gravitational waves; in December we made the second one. So I'm here to try to explain why gravitational waves and this event were so important. Spoiler alert: sorry, but this is a spoiler of tomorrow's keynote talk. Some warnings about my talk: in 45 minutes I will try to explain everything about gravitational waves, which is almost impossible, so if you have questions, I'm here during these days, and also today. This talk is meant for beginners, but I cannot avoid introducing some technical details while explaining things. Why is this at EuroPython? Because we used Python, too, to achieve these results: in the everyday working activity in our labs we use Python in the control room, doing signal processing, controlling our systems. I'll try to explain what gravitational waves are and how we detect them, and I'll try to point to all the Python usage we made in Virgo and LIGO — and the tutorials are not the least of the Python usage. So let's start. What is the challenge, first of all? What are gravitational waves and how did we discover them? In 1915 this guy, whom you might know, Albert Einstein, introduced the theory of general relativity. He said important things: he said that the geometry of space-time is linked to the content of mass and energy of the space-time. There is this strict relation, expressed in this beautiful formula that links the geometry to the mass and energy. Now, a little game, if you want to play with me. I need some volunteers; don't be afraid. You only have to hold the space-time. Please come here. So come — I have the space-time here — and I need another one. So keep it as flat as possible. This is the space-time, in some way, as you can think of it, and in the absence of any mass it is flat. But if you have a very massive body in the space-time, it becomes curved. This is what Einstein said: the presence of a massive body in the space-time curves the space-time itself. Now I will try this experiment: if there are also masses that move in your flat space-time — you can see it, maybe you cannot, but they can — there are little ripples created in the space-time that move through the space-time itself. These are the gravitational waves. Thanks. And that's what we have been trying to detect for many years. The research on gravitational wave detection started many years ago; Einstein wrote this article just 100 years ago.
The problem with gravitational waves is that they produce a very tiny effect on the space-time, so the challenge is to detect this small effect. But as a side effect, the fact that they interact so little with mass can help us understand much more about the universe itself, because they can bring information that otherwise we cannot access. Since they are so small, so tiny, we should think of astrophysical phenomena: that massive body I showed you should be very, very big, so we should think of stars, very massive stars. I show here the main sources of gravitational waves that we expect. There are the rotating neutron stars, the so-called pulsars, which, while rotating, if they are asymmetric around their axis, can produce gravitational waves. There are the violent phenomena called supernovae: the implosion of a big star, during which very intense gravitational waves can be produced. Then there is the event that we detected, black holes colliding — the compact binary coalescence — and I will concentrate my talk on this phenomenon. And then we also talk about the existence of a gravitational wave background, the one that is a remnant of the Big Bang. This is a simulation of the phenomenon that we detected: two black holes rotating one around the other; while they rotate they become closer and closer because they are losing energy, and at some point the gravitational attraction is so intense that they collide. And that was the event that we detected in September. So why were we so confident that gravitational waves exist? Because we have been looking for them for many years, and because in 1993 these two guys won the Nobel Prize for proving, in an indirect way, the existence of gravitational waves. They observed a binary system for many years, measured the energy lost by this binary system, and estimated the quantity of energy that could be lost as gravitational waves. These red points are their measurements, while the blue line is the prediction, and as you can see the fit is almost perfect. So we know they exist, and we try to detect them. I will skip the first experiments and concentrate on the recent ones. How can we detect them? We can use the effect that they have on free masses as they pass through. This is a simulation, obviously: while a gravitational wave hits a body, it stretches it in one direction and squeezes it in the other. This is the schematic effect on a tennis ball. If we think of some test masses placed around a circle, when they are hit by a gravitational wave they start oscillating, and we can detect this small difference in length with respect to the length itself. This is what we call the strain of the gravitational wave. The problem with this strain is that it is very, very small: it is of the order of 10 to the minus 21. Just to let you understand which dimensions we are talking about: the diameter of a human hair is about 10 to the minus 5 meters, the diameter of an atom is 10 to the minus 10 meters, that of the nucleus is 10 to the minus 14 meters, and the diameter of a proton is 10 to the minus 15 meters. We are trying to detect a small displacement that is about one thousandth of the diameter of a proton. So this was our challenge. And now, which tool did we use to detect this small displacement? We used a Michelson interferometer. Think of this schematic arrangement of some test masses.
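A quick back-of-the-envelope check of the numbers just quoted (approximate values I'm filling in for illustration, not taken from the slides): a strain of about 10^-21 over a kilometre-scale arm gives exactly the "fraction of a proton" displacement mentioned above.

```python
# Rough order-of-magnitude check of the displacement to be measured.
h = 1e-21        # typical strain amplitude of the detected signal (approx.)
L = 3e3          # Virgo arm length in metres
dL = h * L       # change in arm length caused by the passing wave

proton = 1e-15   # approximate proton diameter in metres
print(f"delta L ~ {dL:.1e} m, i.e. ~1/{proton / dL:.0f} of a proton diameter")
# -> delta L ~ 3.0e-18 m, i.e. ~1/333 of a proton diameter
```

So "a thousandth of a proton" is the right ballpark, which is why the seismic, vacuum and thermal-noise precautions described next are needed.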
And we try to detect this small displacement using the oscillation of the test masses, taking advantage of the phenomenon of laser interference. I will explain it better using a video that one of our colleagues produced. This is a laser that is sent through a splitter: the first mirror it encounters is the beam splitter, which divides the laser in two directions, along the two perpendicular arms of our interferometer. The laser goes back and forth, and when a gravitational wave hits the interferometer, it starts moving the two test masses, so we see light, or don't see it, at the output, following the movement of these masses. That was the idea: we can detect this small movement by looking at the light that appears at the end of the interferometer. But this is a schematic view; now we add an extra mirror in both beams, in this configuration. The real experiment is much more complicated than the one I showed, obviously, to get better and better sensitivity. We want the laser to make many round trips in what we call a Fabry-Perot cavity, so the light becomes more intense and the path that the laser travels is longer, and the sensitivity at the detection bench will be higher. Then there are further mirrors that are part of the optical setup, which are used to clean the laser as much as possible and make the power inside the interferometer higher. More or less, this is how it works — this is the idea. The real world is that we have noise that is much higher than what we are looking for. Many other things can move these mirrors that are not gravitational waves: there is seismic noise, there is thermal noise, and there is noise due also to the air that the laser can meet along its path. So we should take care of reducing the noise as much as possible, and we can do this from the experimental point of view, when designing our detector, or when we analyze our data. I will show how we do this. The first care we took in reducing the noise was to reduce the seismic noise as much as possible. You see, our Earth is continuously moving, so all these optics would be shaken by the seismic noise, and we want the mirrors to be kept at rest as much as possible. So we use this instrument that we call a super attenuator: the mirror is hung from this chain, and all the optics, in Virgo but also in the LIGO detectors, are suspended in such a way that they can be considered at rest as much as possible. So we know that what we see is not seismic noise, at least above some frequency. One more thing we did is to put all the optics under vacuum, because we want the laser to travel through the cleanest possible path. So all the optics are put in tanks, and this is the real vacuum tube; here in Europe we have one of the biggest vacuum systems, and few vacuum experiments are much bigger than this. The other thing we have taken care of is the thermal noise. All the optical components can cause thermal noise, because the molecules that compose them move due to the temperature. So we take care of this by building very special mirrors. These are an example of the Advanced Virgo mirrors: they are made of a particular material, fused silica; they are very heavy and very big, because they have to be hit by the laser many times.
And with this we try to keep the thermal noise under control. Now I will show you how it looks in reality: this is Virgo. This is a video made by a colleague with a drone. Virgo is in Italy, close to Cascina. This is what we call the main building, with all the optics; the injection laser is here. These are the two arms of the interferometer, the three-kilometre arms; at the end of these tubes you find the two end mirrors, and the laser goes back and forth through the tubes and then recombines in the central building. Okay, so let's come to the main topic of this talk, which is Python in the gravitational wave communities. Why am I here talking about Python? Because, as I said, we use Python in many, many fields of our research. As I said, Virgo and LIGO are very complex instruments. This is a schematic view of the optical scheme I showed before: the two end mirrors, the input mirrors, the injection laser, and, as I said, there are a lot of noise sources to control that can make our sensitivity worse. At the end we come out with what we call a sensitivity curve, so we know how sensitive we are to gravitational waves by looking at this curve. If we want to detect an event, it should be in some way higher than our noise, so we can estimate our noise and predict how sensitive we are. One of the tools that we use, for example, is simulation: we have optical simulations to know how the optics behave, and this is written in Python — this is PyKat. I put links everywhere, so you can go there and have a look; this is the first use of Python. But, as I said, we want to control as much as possible everything in our interferometer: we have to keep the interferometer locked at its working point, so we don't use only what we call passive control of our optics, but also active control. There are many things used in the control room that are based on Python; this is a short list of what we use in LIGO and in Virgo. We have also started to write documentation for all these packages, and many of them are used daily in our control room — for example, the automation of the locking procedure was done using Python. But now let's come to the data: how can we extract a signal from our noise? Because this is the point. We built an instrument, we know that we can detect gravitational waves using this instrument, but how do we do it? What is, first of all, a gravitational wave detection? We have noise, we possibly have a signal, and we know that these are time series: hidden in the noise there is a signal that is much smaller than the noise itself, and we have to extract it. The astrophysical sources I showed at the beginning produce different signals. The rotating neutron stars produce what we call continuous waves: these are periodic signals, present continuously in our data with a given frequency; this signal should be there for the whole run in which we acquire data. Then there are other signals that we call transient signals. There are very short transients due to supernova events: a lot of energy released in very short times, of the order of milliseconds. And there are the coalescing binaries, which are also transient, because they start with this rotation one around the other and then collide; this can last from some milliseconds to some seconds, depending on the masses involved.
We also have what we call broadband signals, which can be due to the stochastic background: this is essentially a noise-like signal that lies somewhere below our instrumental noise, and it is very difficult to detect. In the ideal world we have noise and signal that are summed, and in this ideal world our noise is good: it is nice, it is Gaussian and it is stationary. In this ideal world there exists an optimal filter to detect the signal. So, a bit of formula — obviously I won't go through the derivation — but the idea is that with these characteristics we know what the best way to extract the signal is. If this is the data that comes out of our instrument, the noise plus a hypothetical signal, we try to match our data with a template: for example, for coalescing binaries, the theoretical waveform of the signal itself. So we try to match a template against our data, weighted by the power spectral density of the noise. This formula is derived exactly from the mathematics. What can we do with it? If we have the data and a template and we compute this match, we can say that we found a signal if this quantity is above a threshold. Maybe it is clearer with this simulation: this is the signal hidden in the noise, this is our template, and the template is moved along the data. When it encounters the real signal and matches the exact waveform, we see this peak: we detect a signal. If the match is perfect, or almost perfect, we know that we have a trigger in our data. Also this pipeline was, in some way, written in Python; this is the documentation from our LIGO colleagues, and all the code is on GitHub, so you can go there and have a look at the code itself. The idea of this mechanism for detecting the signal is to build a template bank. We know the kind of waveform we are looking for, but we don't know its parameters: the parameters are linked to the masses of the binary, to the position, to the fact that the stars may be spinning. So we have a very large parameter space that we have to span to find the exact template of our phenomenon, and we simulate the signals to produce these templates. Also this was done using Python, because we have a C library that we wrap in Python — it is called PyLAL — and we simulate the waveforms using this library. We estimate this important quantity, the signal-to-noise ratio; those of you doing signal processing know what it is. It is an estimate of how much higher your signal is with respect to your noise: you know how intense your signal is, given this quantity, which is the amplitude of the signal itself weighted by the power spectral density. Why am I introducing this? Because when we build our template bank, we say: okay, we build a template bank taking into account that we don't want to lose more than 3% of the signal-to-noise ratio. For example, for the detection of the event in September, we ended up with 250,000 waveforms, so you can imagine how many times this matched filtering was applied to the real data, and this is the parameter space that we can span using this number of waveforms. So until now I have talked about the signal, the instrument, and the way in which we can extract the signal in an ideal world. But the detector noise is not as ideal as we would like: it is non-stationary, which means that it is not the same as time passes — after some minutes it can change.
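Before moving on to the non-ideal, real-detector noise, here is a deliberately simplified sketch of the matched filter idea just described (my own toy code, not the LIGO/Virgo pipeline: PyCBC and friends add windowing, proper normalisation conventions, template banks and signal-consistency vetoes):

```python
import numpy as np

def matched_filter_snr(data, template, psd):
    """Toy frequency-domain matched filter.

    `data` and `template` are equal-length time series; `psd` is the noise
    power spectral density sampled at the rfft frequencies. Dividing by the
    PSD is the same noise-weighting ("whitening") used later in the talk.
    """
    data_f = np.fft.rfft(data)
    templ_f = np.fft.rfft(template)

    # Correlate the data with the template, down-weighting noisy frequencies.
    integrand = data_f * np.conj(templ_f) / psd
    snr_series = np.fft.irfft(integrand, n=len(data))

    # Normalise by the template's own noise-weighted power, so that a good
    # match gives a large, dimensionless peak in the output time series.
    sigma = np.sqrt(np.sum(np.abs(templ_f) ** 2 / psd))
    return np.abs(snr_series) / sigma
```

Sliding one template like this is cheap; the point of the 250,000-waveform bank mentioned above is that this has to be repeated for every template in the bank, for every stretch of data.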
It is not Gaussian, so the distribution of the data is not a perfect Gaussian distribution, and it can be contaminated by the presence of many spurious events. There are many things that can mimic a gravitational wave in our data: as I said, a supernova could produce a gravitational wave whose waveform we don't know exactly, so it is simply a glitch that we see in our data. So we should take care of cleaning our noise as much as possible before trying to detect something inside it. There are many packages, also relying on Python, which we use for this procedure: we use GWpy, PyLAL and pyNAP, and there are algorithms that we use to clean the data. This is important because what I showed is the example in which we know the waveform. But what happens if the noise is not as ideal as we want, and if we don't know anything about the signal itself? Then we should use what we call generic trigger generators: we look for transient signals in our data simply by finding excesses of power, which can be due to different sources of noise or to a signal. By the way, the first pipeline that triggered on the signal in September was one of these generic tools: it is called Coherent WaveBurst, and it is based on a time-frequency decomposition of the data, and this is how that signal looks there. These different pipelines are also important for noise characterization, because we use these generic pipelines to find the different glitches that are in our data. This is what is called a glitchgram: the glitches that are present in our data. Many of these you could think are signals, but obviously they are noise, and we should identify each of them to be sure that our signal is not one of these. And we are a network: we are not a single instrument detecting the signal, we are a network. There are the two LIGO detectors in the USA, Virgo in Italy, GEO in Germany; there is KAGRA, which is almost operating, and LIGO India was approved for the coming years. Why are we a network? Because a gravitational wave detector is not like a standard telescope: you cannot point your detector in some direction of the sky to look for a signal. If you want to know the position of your event, you need to do triangulation of the results: you consider where the interferometers are and the travel time of the gravitational wave between the different detectors, and in this way you can get information on the position of the source. This was, roughly, the error box of the position in the sky for the event we detected. To do this kind of sky localization we again use Python; here are some links where you can find the notebooks and the tutorials to apply this estimation of the position. Why are we a network? Because the more detectors there are, the better the precision of the estimation of the position will be: when Virgo and KAGRA are also operating, this big area can become this small area in the sky. That's why it is important to be a network. So, let's come to the event. The gravitational wave has been detected. For many years it seemed almost impossible, also for people working in this field, and I'm sure that when we got the alert, nobody believed it was a real event, because we were very surprised by the fact that it was so beautiful: as I will show you, it matches the prediction almost perfectly.
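Before looking at the event itself, a tiny numerical aside on the triangulation point above (all numbers are rough values I'm assuming for illustration): the arrival-time difference between two sites is at most the light travel time along their baseline, which is why timing from two detectors only constrains a ring on the sky and more detectors shrink the area.

```python
import numpy as np

c = 3.0e8                 # speed of light, m/s
baseline = 3.0e6          # Hanford-Livingston separation, roughly 3000 km

max_delay = baseline / c  # maximum possible arrival-time difference
print(f"max inter-site delay ~ {max_delay * 1e3:.0f} ms")   # ~10 ms

# A measured delay dt fixes only the angle theta between the baseline and
# the source direction, via cos(theta) = c * dt / baseline -- one angle,
# hence a ring of possible sky positions for a two-detector network.
dt = 6.9e-3               # illustrative delay, seconds
theta = np.degrees(np.arccos(c * dt / baseline))
print(f"source lies on a ring at ~{theta:.0f} degrees from the baseline")
```

With a third detector such as Virgo, a second independent delay intersects that ring and the error box shrinks dramatically, which is the point the speaker makes about the network.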
So, we have this guest star of 14 September, but also in December, during the first scientific run of LIGO, there was another event that was detected, again colliding black holes. This is the event. Three minutes after the data acquisition we had the alert: an email going around saying, okay, there is a strange event in our data, please have a better look at it. And then the procedure started that led us to give the announcement in February. These are the waveforms of the event seen in the two different detectors, in Hanford and Livingston; this is how it looks in the time domain. The continuous line that is superimposed is the prediction, the template that best matches the signal, and as you can see it is almost perfect. And I want to show you this — the famous chirp sound. You can hear... now there is no sound, wait. We call this kind of signal a chirp just because of this sound you can hear: there is a frequency that becomes higher and higher in time, which produces this beautiful sound for us. The one we detected in December was a bit different: again two black holes, but with smaller masses, and it was detected directly by a pipeline based on matched filtering. The waveform is always of the same kind: a frequency that changes in time, with this big peak at the end of the phenomenon. So, here are a few numbers about the detections. The first event has a very big signal-to-noise ratio — that is why it was so evident in our data — while the second event has a somewhat smaller signal-to-noise ratio. These are the distances: 1.3 billion light years and 1.44 billion light years. The solar masses, as you can see, are different: the first ones are very, very big — these are very compact black holes, 36 and 29 solar masses for the first event, and 14 and 7 for the second. There was also another event, during October, that was considered a candidate event, but the statistics were not good enough to let us claim another detection. So, coming to my personal experience working in this field: I am a signal processing researcher, so I am a data analyst, and I work mainly on noise characterization — I am one of the people who clean the data before the detection. In Virgo we developed a noise analysis package: it is a C++ library that we wrap in Python using SWIG, and now we have pyNAP, a generic noise analysis toolkit. I developed an event trigger generator based on wavelets that can be used to detect noise transients, and in the last period I have also been trying to use machine learning tools to classify the noise signals. This is the environment in which we work: Python, scikit-learn, NumPy, and so on. Just to show what we did: these are the typical outputs of our detector — these are the data that come out of the detector, and these are the same data after the cleaning, the so-called whitening — and you can see that there are these two peaks here, which in time-frequency appear in this way. So the idea is to look at this kind of waveform and classify the noise signals in some way, and we did this using machine learning techniques that separate the signals into different classes, by fitting the waveforms of the signals in different ways. So, as I said at the start, we can have a look at the data: LIGO produced the LIGO Open Science Center. You can go there — this is the link — you can download the LIGO data and you can play with it.
There are beautiful tutorials — now there are two beautiful tutorials, because they did the same for the second event — where some signal processing techniques are well described, and you can play and see how it works. Maybe we can try now; I prepared a short version of this, so, hoping it works, I downloaded the data to my PC. You can recognize many of the Python packages that maybe all of you use, plus some signal processing functions from SciPy that were used to prepare filters. The data are provided in HDF5 format, while in LIGO and Virgo we use a different format internally to save the data, what we call frame data. You can also load the simulated waveform. You can have a look at the data: this is how your data look — these are some seconds of data around the event, from the two different detectors, and somewhere in there the event is hidden. You can see that it is impossible to identify any event here. But you can use some signal processing technique, for example the so-called whitening. What is the whitening? This is the power spectral density of the data, which is similar to the sensitivity curve I showed at the beginning; we call this the noise. Our noise is not flat, it is full of features, full of lines that are due to many sources of noise: for example the 60 hertz power line in LIGO — and the 50 hertz one in Virgo, because in Italy the mains is at 50 hertz — and many lines due to thermal noise, the movement of the wires that suspend the mirrors. Some of these lines are well identified, and if we estimate this power spectral density and apply the whitening, which is the inverse procedure, we simply divide the data by this quantity. And then we plot the whitened data: this is our event. So, just by whitening your data, without doing anything strange, you can identify your event. These are the two strains from the different detectors, and in black there is the simulated matched waveform, the one that triggered. The same happens if you look at the time-frequency domain — I don't know how many of you know these terms — this is how it looks in a time-frequency plot. The signal was there, but you cannot see it without doing anything to your data; if you apply the whitening and produce the same plot again, here it is, your signal. I don't know if it is so evident also for you, but in yellow, this is the so-called chirp in your data. So, I'm almost done. The data are now also on Kaggle — I don't know if some of you play on this platform — there is some portion of these data and there are some scripts that you can use directly there, without downloading the data; if you want to create your own Python script, or use the scripting language that you prefer, you can play there. And that's it. So, we might have time for just one question, one short question, before lunch. We have one. The question, basically, is: what is a gravitational wave on a physical level? Sorry? What is a gravitational wave at the physical level? I think I answered that at the beginning of the talk... yes, at the physical level — I don't know if you missed the first part of the talk; I showed what gravitational waves are. It is a very tiny oscillation of the space-time. So your space-time is somehow moving. Even now I am producing gravitational waves, because I am a mass and I am moving.
And while I am moving, I am not symmetric — you also need the moving mass not to be symmetric — and this can perturb your space-time: it is no longer flat, and this produces a small oscillation of the space-time itself that propagates through all of space-time and can reach the Earth, or reach you. Okay, I have one question. Very quick: is it true that the first event was detected when they were basically still testing the... This is true. It was during what we call an engineering run. So it was not in operation? No, it was operating. Before the start of a science run there is usually a period of some days that we use to test that everything is working: we acquire data, but officially we are not in what we call science mode. But everything was running as it would in a science run, because the pipelines were under test, and we saw this. So basically you were expecting to see the first event in maybe one year, and you got it before you even started? No — to be honest, we expected that with the LIGO sensitivity it was probable to detect an event this year, but not as fast as we did. It was really unexpected for us. Okay, thank you very much again. Thank you.
Elena Cuoco - Python in Gravitational Waves Research Communities On February 11th 2016 the LIGO-Virgo collaboration gave the announcement of the discovery of Gravitational Waves, just 100 years after Einstein's paper on their prediction. A brief introduction to data analysis methods used in Gravitational Waves (GW) communities, and a Python notebook describing how to analyze the GW event detected on 14 September 2015. ----- On February 11th 2016 the LIGO-Virgo collaboration gave the announcement of the discovery of Gravitational Waves, just 100 years after Einstein's paper on their prediction. After an introduction on Gravitational Waves and on the Virgo interferometric detector, I will go through the data analysis methods used in Gravitational Waves (GW) communities, either for detector characterization and data conditioning or for the signal detection pipelines, showing the use we make of Python. As a practical example I will introduce a Python notebook describing the GW event detected on 14 September 2015, and I will show a few signal processing techniques.
10.5446/21129 (DOI)
Okay, so welcome back. Let's find out if monkey patching is a magic trick or a powerful tool. Welcome, Elizaveta. Hi, my name is Elizaveta Shashkova and today I want to tell you about monkey patching in Python. In this talk I'll try to give the answer to the question: is monkey patching just a magic trick, or is it a really powerful tool? Just a few words about me. I'm a software developer at the JetBrains company. I work on the PyCharm IDE and I develop the debugger in PyCharm, which is also used in PyDev, the Python IDE for Eclipse. Monkey patching is a dynamic modification of a class or a module at runtime. Here is a simple example which illustrates what monkey patching is. The standard module math has a square root function which returns the square root of a number, but if the number is negative, this function raises an exception. We want to change the behavior of this function. So what can we do? We save the original square root function to the attribute original of the module math and replace the sqrt attribute with our new function, a safe square root function, which doesn't throw an exception for negative numbers and returns the not-a-number constant, which is defined in the math module. So what happened here? We took the standard module and changed its behavior inside our program. That means that monkey patching allows us to change third-party code, without changing the source code, secretly. It also explains where the term monkey patching comes from. It is believed that it has come from the term guerrilla patch: a guerrilla is a member of a group of soldiers who don't belong to a regular army and who fight in a war secretly, and the term referred to changing code secretly, and possibly incompatibly with other such patches. It is believed that later this term was transformed to gorilla patch, which sounds similar, and later it was transformed to monkey patch. As you might already have noticed, all these terms have a rather negative meaning. But despite that, monkey patching exists in a number of other programming languages. For example, in Ruby you can redefine any method in any class, including the standard classes like String or Array. In Ruby it's called reopening the class. In this example, we replace the standard upcase method of String with the reverse method. As we can see, monkey patching in Ruby is almost a part of the language philosophy, because there is even a special, simple syntax for doing this. So what about Python? I believe that everybody here remembers the Zen of Python and one of the most important statements in it: explicit is better than implicit. Monkey patching doesn't satisfy this requirement, because instead of raising explicitness, monkey patching makes Python code less readable and more difficult to understand. And while in Ruby we can patch everything, in Python we can patch almost everything, because on the Python level we can't patch built-ins defined in C. There are some solutions for that case too, patching on the C level, but we won't discuss them today. So as you can see, monkey patching is a very interesting tool: it gives you a lot of opportunities, and it also imposes great demands on the person who uses it. Like every object in our galaxy, monkey patching has its own light and dark sides. Now, let's have a short survey. Please raise your hand if you're on the light side of monkey patching. Well, and now raise your hand if you're on the dark side of monkey patching. Okay.
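Here is the sqrt example just described, written out (a sketch reconstructing the slide from the description; the helper name safe_sqrt is my guess at what the speaker calls the "safe square root function"):

```python
import math

math.original = math.sqrt          # keep a reference to the original function

def safe_sqrt(x):
    # Return NaN for negative input instead of raising ValueError.
    if x < 0:
        return math.nan
    return math.original(x)

math.sqrt = safe_sqrt              # the monkey patch itself

print(math.sqrt(4))    # 2.0
print(math.sqrt(-4))   # nan -- no exception, behaviour changed "secretly"
```

Any other code that imports math afterwards sees the patched function, which is exactly the double-edged property the rest of the talk is about.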
I would like to start from the light side of monkey patching and consider how we can use it in real life. So, as I've already mentioned, we can change third-party code without changing the source code. For example, if you found a bug in some library and you know how to fix it, you can just apply a patch to this library and use the corrected version of the library inside your program. But remember that nowadays almost every library is open source, so the best decision is to fix this bug, create a pull request and share your fix with other users. The next important example of the usage of monkey patching is code instrumentation: you can use monkey patching in order to add performance measurements to your code without changing the code of your project. Also, sometimes tests need to invoke some functionality which depends on global settings, or which invokes code that cannot be easily tested, and a lot of test libraries allow you to replace parts of your system under test with mock objects. And the next important example is changing the standard modules. This point looks similar to the first one, but it's not about bug fixing: sometimes there is a need to change the standard libraries for your own purposes, and there are libraries that do it; later we'll consider such an example. It's time for examples. The first example is from the PyCharm internals. As I've already mentioned, I develop the debugger inside PyCharm, and it's very important for us to catch all the new processes created in the user's program, in order to add the debugger's tracing to these new processes. Monkey patching allows us to do it. Here we catch new processes created with the fork function from the module os. We define a very simple class, a process manager. We take the original fork function and call it in the method do_fork, the patched version of this function: it calls the original fork function and checks the result. If the result is zero, that means we are in the child process, so we call our debugger's tracing function, start tracing, and afterwards return the process ID, just like the original fork function does. How can we use our process manager? It's very simple. We import the module os, create an instance of our process manager class and pass the original os.fork function to it, and after that we replace the attribute fork of the module os with our method do_fork. From then on, every time os.fork is called, we add our debugger's tracing to every new process. The next example is about import hooks. Almost every script starts with an import statement, and every time we import some module we in fact call the built-in import function. Let's try to monkey-patch it too. In the previous example we patched the os.fork function, but what if we want to patch some module as soon as possible? As soon as possible means right after importing; that's why we need to create an import hook. This is the import hook manager class, and it looks very similar to the process manager class: it also saves the original import function and calls it inside the patched version of this function. But it also checks the name of the module, and if the name of the module is the string "os", that means we want to patch the object of that module: we again get the original attribute of the module, create an instance of our process manager, and replace the fork attribute of the module. And, like the standard import function does, we return the module object. How can we use our import hook manager? It's very simple.
We import builtins, the module where the import function is situated, create an instance of the import hook manager, and again replace the import attribute of this module with our new method. And now, every time we type "import os", it will be patched as soon as possible, right after importing. The third example on the light side is related to the gevent library. Gevent is a framework for scalable asynchronous input and output operations based on greenlets, lightweight coroutines. It implements its own event loop, so even single-threaded programs are not blocked by IO operations; that's why the gevent library is very useful. But there are blocking system calls in the standard library, including for example socket, and when you use such modules in your program you break the main idea of the gevent library, because your IO operations become blocking again. In this case you can replace statements like "import socket" with the modules from the gevent library, which are very similar to the standard modules but compatible with it: instead of "import socket" you just write "from gevent import socket". But if you already have an application which uses such modules, there is no need to modify the import statements in many, many places, because there is a special module inside gevent, gevent.monkey. It carefully replaces functions and classes in the standard modules, and after that they can be used together with the library. It allows existing applications and libraries, written against the standard modules, to become cooperative with no modifications. So monkey patching has many interesting and useful applications in real life, like changing third-party code, testing, code instrumentation. Monkey patching is a very powerful tool, but monkey patching is a very dangerous tool as well. As I've already mentioned at the beginning of my talk, monkey patching violates the philosophy of the Python language: changes made by monkey patching are not explicit. When you patch something, it can lead to unpredictable behavior of your program. Changes are not documented, and they may be very unexpected for people who use your code. Also, even if you document your monkey patching, sometimes you can meet people who decided to monkey-patch something too, and it will be very sad to realize that you both decided to monkey-patch the same object. The examples on the dark side of monkey patching are also very important. This example is related to the gevent library again, and I want to show you how monkey patching can lead to rather complicated and tricky solutions. As I've already mentioned, gevent can monkey-patch some standard modules in order to easily make Python applications asynchronous without a lot of code changes. In the docs, the creators of the gevent library mention that patching should be done as early as possible in the lifecycle of the program. But what happens if a user runs a program with gevent patching inside PyCharm's debugger? The debugger has its own event loop based on the standard modules, and after gevent monkey patching the user's program is based on the gevent event loop. These two event loops should be separated, because the debugger shouldn't affect the user's event loop — it shouldn't break the logic of the user's program. So the debugger should use the original versions of modules like threading or socket, instead of the patched versions. So that means that we have a problem.
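Here is a rough reconstruction of the two PyCharm-style examples described above (my own sketch from the description; the class names, and the stub start_tracing, are guesses rather than PyCharm's actual code, and os.fork only exists on POSIX):

```python
import builtins
import os

def start_tracing():
    # Stand-in for the debugger hook the speaker mentions; a real debugger
    # would attach its trace function to the freshly forked child here.
    pass

class ProcessManager:
    def __init__(self, original_fork):
        self.original_fork = original_fork

    def do_fork(self):
        pid = self.original_fork()
        if pid == 0:                  # zero means we are in the child process
            start_tracing()
        return pid

class ImportHookManager:
    def __init__(self, original_import):
        self.original_import = original_import

    def do_import(self, name, *args, **kwargs):
        module = self.original_import(name, *args, **kwargs)
        if name == "os" and hasattr(module, "fork"):
            # Patch os.fork as soon as os is imported (a real implementation
            # would also guard against patching it more than once).
            module.fork = ProcessManager(module.fork).do_fork
        return module                 # behave like the original __import__

builtins.__import__ = ImportHookManager(builtins.__import__).do_import
```

And the gevent counterpart the talk refers to is just two lines, using the library's documented entry point:

```python
from gevent import monkey
monkey.patch_all()   # must run before the standard modules are used

import socket        # now gevent's cooperative socket
```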
Somehow we should save the original versions of some standard modules and continue to use them simultaneously with the patched versions. Let's try to solve this problem. First, let's consider how importing in Python works. This is a very, very simplified version of the standard import function. First of all, the import function tries to find the name of the module in the sys.modules dictionary. This dictionary contains the objects of all imported modules, and the keys in this dictionary are the names of these modules, just strings. If the import function doesn't find the name of the module in this dictionary, it executes the module's file, creates a new module object, puts it into the sys.modules dictionary and returns it as a result. Let's create a module called saved_modules. Here we import the socket module, so after that it sits in the sys.modules dictionary, and the key "socket" is among the keys of this dictionary. After that, we pop the object with the key "socket" from this dictionary. What does that mean? The object is no longer inside the dictionary, but we still have a link to it. And after that, when we import the socket module again, the import function tries to find it inside the sys.modules dictionary. It can't find it, so it creates another object of this module and puts it into the sys.modules dictionary. And after that, we have two objects of our socket module: one of them sits in the sys.modules dictionary, and the other one is in our saved_modules module. So how can we use this in our program? We can get access to both of these objects, but when gevent tries to patch the module socket, it patches the module which sits in the sys.modules dictionary. gevent doesn't know anything about the other module, but we know about it. So that means we can use both of these modules, the patched version and the original one, together and simultaneously. As you can see, the solution to the problem was not very easy, because we tried to fix the problem after monkey patching, and we had to create some dirty hacks with importing. So monkey patching can lead to some complicated bugs and to complicated fixes for them. Today we considered both sides of monkey patching. Monkey patching has many different and interesting applications in real life, like changing third-party code, code instrumentation, testing, and working with standard modules. But monkey patching raises the implicitness of your code. It can lead to unpredictable behavior and surprising bugs in your program. Sometimes it may be incompatible with other such patches. I believe that sometimes you can use monkey patching in your program, but you should use it if and only if there are no other solutions for your problem, if and only if you are absolutely sure that it is necessary in your case. Monkey patching is a fascinating tool, but monkey patching is a very dangerous tool as well. As for me, I am on the light side of monkey patching. We use it inside PyCharm because it is a really powerful tool. Be brave, but also be careful. Thank you. APPLAUSE OK, so now I believe that we have some time for some questions. Anybody have some? I don't know if you are allowed to do this, but can you elaborate on some examples of how you use monkey patching in PyCharm? Sorry, could you repeat your question? Can you describe some instances where you use monkey patching in your work at PyCharm? Yes, some of these examples were from the PyCharm internals.
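The saved_modules trick can be written down roughly as follows. This is a reconstruction of the idea as described in the talk, not the exact PyCharm code.

```python
# saved_modules.py -- keep an unpatched copy of a standard module.
import sys

import socket as original_socket   # first import: the module lands in sys.modules

# Remove the entry from sys.modules, but keep our own reference to the object.
sys.modules.pop('socket')

# The next "import socket" cannot find the key in sys.modules, so Python
# executes the module again, creates a *second* module object and registers
# it there. gevent.monkey later patches only that second object.
import socket  # noqa: F401  (re-import creates the fresh, patchable copy)

# Now:
#   original_socket        -> untouched standard-library version (for the debugger)
#   sys.modules['socket']  -> the copy that user code and gevent will see
```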
So the example where we catch new processes, we use it inside the PyCharm debugger, and the import hooks, we use them inside PyCharm too. And the solution to protect ourselves from monkey patching is also used inside PyCharm. So all of these examples. OK, any other questions? Thank you. Thank you for your talk. You mentioned at the beginning using monkey patching for testing. I've seen mocking, like the mock module, in libraries. Do you use those, or do you use monkey patching directly? Because my worry is that if you monkey-patch in a test, then surely the next test — or maybe I'm misunderstanding — will have that monkey-patched object. Some testing libraries use monkey patching, but internally in the library, not in your own tests. Yes, yes. Where I mentioned testing on the light side of monkey patching, I meant how monkey patching is used inside testing libraries. There is no need — of course you can do it yourself — but there is no need to do it on your own, because there are a lot of powerful testing libraries which already do it for you. Any other questions? This may be a dumb question, but can we not solve the problem of monkey patching by not doing monkey patching — by creating new objects or classes that do the same thing as the monkey patch we want to apply? So the question is, can we avoid monkey patching? Yes, by creating new objects, so we would only use those instead of changing the ones that we want to change. So for example, in the os example, you create something like a patched copy of os and you use that instead of changing os itself? Yes, in this example we monkey-patched the module os, which was already used in the project, and we monkey-patched the function inside it. So my question is, can you not create a new os? Yes, you could create your own copy of os, and in that case you would patch that copy instead. So there are some possibilities to avoid monkey patching, as we did, but this approach doesn't protect us from the problem which was shown in the dark-side example. Yes, thank you. Do you have any other questions? Okay, if not, then enjoy the coffee break, and let's thank Elizaveta again. Thank you.
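To make the answer about test libraries concrete: tools such as unittest.mock apply the monkey patch and undo it around each test, so one test's patch doesn't leak into the next. A small, self-contained example, not from the talk:

```python
import os
import unittest
from unittest import mock


def data_root():
    # Function under test: it depends on a global setting (an env variable).
    return os.environ.get("DATA_ROOT", "/tmp")


class DataRootTest(unittest.TestCase):
    @mock.patch.dict(os.environ, {"DATA_ROOT": "/srv/data"})
    def test_reads_env(self):
        # Inside this test os.environ is temporarily patched...
        self.assertEqual(data_root(), "/srv/data")

    def test_patch_is_undone(self):
        # ...and in other tests the patch has already been removed again.
        self.assertNotEqual(os.environ.get("DATA_ROOT"), "/srv/data")


if __name__ == "__main__":
    unittest.main()
```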
Elizaveta Shashkova - Monkey-patching: a magic trick or a powerful tool? Monkey-patching is a dynamic modification of a class or a module at runtime. Python gives developers a great opportunity to use monkey-patching almost everywhere. But should developers do it? Is it a magic trick or a powerful tool? In this talk we will try to give answers to these questions and figure out the pros and cons of using monkey-patching. ----- First of all we will learn what monkey-patching is in Python and consider some basic examples of using it. Of course, monkey-patching may cause some problems in the code. We will consider bad ways to use it and try to learn about the different types of problems monkey-patching may lead to. Despite the bugs that may appear in a patched program, monkey-patching is used in real life rather often. There are reasons and motives to do it. We will consider examples of using monkey-patching in real projects like `gevent`, in some other libraries and in testing. Also we will learn some monkey-patch tricks that help to solve real-life problems in the Python debugger which is a part of PyCharm and PyDev. After that we will compare the use of monkey-patching in Python to its use in another dynamic language, Ruby. Are there any differences between them? Is our reasoning correct for Ruby? Finally we will conclude all our thoughts and examples and try to give an answer to the question in the title.
10.5446/21130 (DOI)
Welcome all. Here we have Erik telling us about building a reasonably popular website for the first time. Give him a clap. Thank you. Thanks. First of all, I can't get the screen configuration exactly right, so I'll do this without my notes. Please excuse me if it goes wrong. So I'm going to talk about building a reasonably popular web application for the first time, because I was lucky enough to be able to architect, build and design something that grew quite quickly, and I got to learn to deal with scale way quicker than I would have expected. I learned a lot during this time, and I'd like to share what we learned, so hopefully you can at least skip making the mistakes we did and make your own unique ones instead. So who am I? Why am I here speaking? I'm the co-founder and chief architect at a company called Hotjar. Hotjar, both the name of the company and our product, is a set of web analytics and feedback tools. Basically this means a lot of data ingestion. We are installed on almost 200,000 sites in the world right now, so there is a lot of data coming in. I'll give you some numbers later. My development career actually started a long, long time ago: at the age of six I wrote my first game. It wasn't that awesome, probably, in retrospect, but I thought it was. So I got hooked on programming, and I've been ever since. After that I transitioned between different tech stacks throughout the years, but I started with Python about seven years ago now, and it's the one I definitely like the most so far. So since I'm going to talk to you about something reasonably popular, reasonably big, it's only fair that I give you a definition of what I think is reasonably big. Hotjar right now: we process around 400,000 API requests every minute. Our CDN delivers about 10 terabytes of data to our users every day. We have roughly three terabytes of data in our primary data store — it's Postgres — another two terabytes in our Elasticsearch cluster, and somewhere between 35 and 40 terabytes on Amazon S3. So that's our definition of reasonably popular, reasonably big for today. We still use reasonably standard solutions, though. Our tech stack isn't anything out of the ordinary. As you can see here: nginx, Memcached, uWSGI, Python, Elasticsearch, Lua, Postgres, and Redis. It works amazingly well to just run a load of uWSGI workers, even at this scale, believe it or not. At some point we will of course start using all the fancy asyncio and uvloop things; they're probably going to be a great match for us. But for now, very plain, process-based uWSGI scales really well. So now that you have some context, let me start out with what we learned during the last two years. Log and monitor from day one. This is something we messed up a bit, because we only started logging and aggregating logs once we started having problems. At that point, though, we had so much log data coming in that we had to spend quite a lot of time cleaning things up before we could actually see through the noise. So start logging and aggregating your logs from day one. And keep your logs clean: act on the problems you see. Otherwise you're going to have a mess cleaning it up when you need to, and not managing your logs is a kind of debt as well. Have a way to profile your API calls. We ended up using SQLAlchemy as an ORM. It's great, and I love it like 95% of the time.
But every now and then you have this little innocent line of Python code that causes some really weird query, and having a way to profile both code and database queries is great. We have the concept that our super users — ourselves only — can append ?profile=1 to any API call in the query string. Instead of returning the normal results, that makes the endpoint return cProfile data and SQLAlchemy profiling data. Having an easy way to get profile data from a live API call in the live environment in just a few seconds is great. It makes you profile a lot more, and you get a much better understanding of your system as a whole. So I highly recommend having a way to just profile a query from the live environment. Sometimes it's the Python code that takes time, sometimes it's the database — but you'll be surprised how often it's actually the Python code: you make a little mistake in SQLAlchemy that's really heavy on processing. So it's a great thing to do. Know when things fail. At some point we had to add some cron jobs — I don't remember quite what for, but some background processing. And yeah, they failed at some point without us noticing, because it was a silent failure. They exited for some unknown reason; they didn't throw an exception or anything like that, because we were obviously monitoring for that. They just failed silently. So it's just as important to know when things are not happening as it is to know when bad things are happening. We solved this by adding the simple concept of job expectations and job results. A job expectation is something simple, like: I expect this job to run every hour. A job result is simply a log entry the job writes when it completes. Then we basically just have a status endpoint that's called by an external third-party service, which checks that all expectations are satisfied all the time. That way we know that jobs run, they run on time, and they run successfully. So always think about safeguarding against things that fail explicitly and things that fail silently — just as important, and easy to miss. And also use third-party systems to monitor your own systems, because your own monitoring may fail. Have a way to keep secrets. Hotjar, like everything else, started out as an experiment of sorts. We weren't too diligent about keeping external API keys out of source control. In hindsight, stupid of us, but you know how it is. Then as the development team grew, we realized, OK, maybe it's not the best idea that everyone has access to all third-party systems through APIs in live environments. So I'd recommend using something like Ansible Vault or similar from day one. It's going to pay off. Because we didn't, at the time when we had to start keeping secrets we had to change all the API keys, and that in itself was a mess. So have a way to keep secrets from day one.
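The talk doesn't show how the ?profile=1 switch described above is implemented, but a minimal Flask-style sketch could look roughly like this; the decorator name is invented, and the super-user restriction and SQLAlchemy timings mentioned in the talk are left out for brevity.

```python
import cProfile
import io
import pstats
from functools import wraps

from flask import jsonify, request


def profilable(view):
    """Run the view under cProfile when ?profile=1 is passed (sketch only)."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        if request.args.get("profile") != "1":
            return view(*args, **kwargs)
        profiler = cProfile.Profile()
        profiler.enable()
        try:
            view(*args, **kwargs)  # run the endpoint, discard its normal response
        finally:
            profiler.disable()
        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(30)
        # Return the profile instead of the normal result, as described in the talk.
        return jsonify({"profile": out.getvalue()})
    return wrapper
```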
This is an interesting one: everything needs a limit, even if it's really big. A good example here is that we have the concept of tags — you can basically tag a recording. We envisioned it being used for things like "in this recording the user visited the checkout page". However, some of our users used it slightly differently: they tagged each recording with unique user IDs coming from third-party systems like Google Analytics. That meant some users ended up with 400,000 different tags, which we showed in a nice little HTML select dropdown. 400,000 select dropdown options do not render well. Our interface broke terribly because we didn't have limits in it. Users are very creative, and if you give them a way to put limitless amounts of information into your system, they will. These limits go for the UI, for APIs, for lengths of fields, and obviously for databases as well. Never, ever allow unlimited. It's perfectly fine to allow really big, but unlimited is bad. If you give your users a way to put unlimited amounts of data into your system, they will, eventually. It took like a year, but then it happened. Another one here is slightly more interesting, I'd say, and much more surprising. Postgres, our data store, uses 32-bit ints as the default for the ID column. We eventually hit that limit on a table — we had two-something billion rows. That was kind of hectic to sort out once it had already happened, because I didn't even anticipate it; I had never worked with data at this scale before. But it happens. So when trying to design your schema, try to think ahead a year or two — I know it's hard, but try. Is there a possibility I could end up reaching data type limits if I use this type here? If you think you're even going to be close, choose a big data type. It's not expensive; it's just not the default, so you have to make a conscious choice. Think about how your data will grow, and if possible, put monitoring in place for this as well: when you're about to reach a limit — say halfway there — you want to know, so you have time to plan a migration. Don't get too attached to a framework. Right now we're using Flask and Flask-RESTful. It works really well and we're super happy with it. But at 400,000–500,000 requests per minute it's starting to have a significant overhead, because most of our requests are processed really quickly, so the framework matters. This of course depends on your use case, but for us it matters. So at some point we're probably going to have to transition to something else. A good piece of advice to minimize the pain of doing that is to use framework-agnostic libraries as much as possible. SQLAlchemy is a great example, because it has adapters for basically everything, and if it doesn't, it's easy to write one yourself. I don't have anything against using what I like to call thin wrappers, like Flask-SQLAlchemy, because it basically doesn't do that much — it's just a nice helper — and if you switched away from Flask, you could easily implement what Flask-SQLAlchemy does yourself. So thin wrappers are fine; otherwise I try to avoid framework-specific libraries. It's kind of like vendor lock-in — framework lock-in — and it limits your flexibility. Choose components which allow for language interoperability. We're definitely mainly a Python shop, but we have about 1% of our code base in Lua, actually, for performance reasons, running inside nginx. We made the mistake of using a queuing system called RQ initially — a great system, but Python only. And this caused some issues when we basically just wanted our Lua code to put some simple things in the queue; that ended up being a much bigger thing, because we couldn't put it there, since it was a Python-only queue. So when possible, choose components — libraries, servers, whatever — that allow for greater language interoperability. It makes it so easy, if you have a performance-critical part, to just take it out and write it in something else.
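To illustrate the two schema lessons discussed a little earlier — explicit limits and large-enough ID types — here is a minimal SQLAlchemy sketch. The model and column names are made up for the example; this is not Hotjar's actual schema.

```python
from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+

Base = declarative_base()


class Tag(Base):
    __tablename__ = "tag"

    # BigInteger instead of the default 32-bit Integer: a few billion rows
    # should not force an emergency migration later.
    id = Column(BigInteger, primary_key=True)
    recording_id = Column(BigInteger, nullable=False, index=True)

    # "Everything needs a limit": a bounded column instead of unlimited text.
    name = Column(String(255), nullable=False)
```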
Plan for database downtime. In the beginning, all our database migrations — schema migrations — were simple, because we had basically no users and therefore almost no data. It gets harder, and at some point we couldn't just run our schema changes naively anymore, because they started taking significant amounts of time. Fair enough, there are some tricks you can do to alleviate some of that, but at some point you have to introduce downtime. However, here is a nice trick that helps a bit: try to decouple data ingestion from data processing as much as possible. A neat way to do it is to capture data from the user, put it in a queue, and process it later. That way you become much more resilient to database downtime — even if it's just for a minute because you need to take the database down to do a little change, having this queue as a buffer is great. It's not always possible to do this, obviously, but it's a great thing to do when you can. Have a way to share settings between back-end and front-end code. We introduced a couple of silly bugs a couple of times, simply because we were lazy: we copied things from back end to front end, then we changed one of them but not the other, and the front-end and back-end code didn't agree on values anymore. This is just silly, and there's a very simple solution. We ended up having a settings.json file which contains our shared settings. It's injected using nginx server-side includes, and that way Python can read the JSON and the front end can read the exact same JSON as well. Super simple. All our shared settings go there, and no more bugs of this kind. So shared settings are good; we used to duplicate things like error codes by copy-paste, and now shared settings mean that's not a problem anymore. Have a way to go into maintenance mode. What I mean by maintenance mode is basically a little page saying: we're currently down, sorry. It's not nice when you have to bring it up, but it's probably going to happen to every one of us at some point, and then having one is great insurance. We basically have a little switch to turn the maintenance page on and off. When building the maintenance page, be careful to give it as few external dependencies as possible, because you probably want to turn it on exactly when, say, your database has just crashed. So don't store the switch that turns it on in the database — because the database already crashed. That was our first version, and it did just that. Also, on our maintenance page we've put a communication tool where people can talk with our support crew. It's a really good idea, I think, to keep communications open with users even when bad things happen. Feature flags are a great way to test things out before releasing them to everyone. At this point in time we started getting really big, and we wouldn't want to release things we weren't too sure about to everyone. So we introduced feature flags. We have both server-side and client-side feature flags. Basically you say: this part of the UI requires this feature, and this part of the API requires this feature. That way we can do gradual rollouts, we can do beta testing with a limited group of people, and we can also do things like enabling features depending on which type of plan the user is on — saying, if you're on the pro plan, you get this feature. So they are very versatile tools to have. If you start thinking in terms of on-and-off feature switches, I highly recommend them — very simple to implement and a great thing to have in your toolbox.
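A server-side feature-flag check of the kind just described can be as small as the sketch below; the flag names, the plan mapping and the exception are invented for the example — the talk doesn't show Hotjar's actual implementation.

```python
from functools import wraps

# Which plans or groups get which features -- illustrative data only.
FEATURE_PLANS = {
    "form_analysis": {"pro", "business"},
    "new_dashboard": {"beta_testers"},
}


class FeatureDisabled(Exception):
    """Raised when a user hits an endpoint whose feature flag is off for them."""


def user_has_feature(user, feature):
    enabled_for = FEATURE_PLANS.get(feature, set())
    return user.plan in enabled_for or user.group in enabled_for


def requires_feature(feature):
    """Server-side flag: guard a view/handler behind a feature switch."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if not user_has_feature(user, feature):
                raise FeatureDisabled(feature)
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator


@requires_feature("form_analysis")
def form_analysis_report(user, form_id):
    ...
```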
Accept different quality of code for different parts of the system. This was personally kind of a hard one for me, because as a developer you get attached to what you created, and you want it to be super awesome everywhere. But that comes at a cost, because you run out of time. So, for example, we require all our user-facing code to be properly tested, performing well, all these things. However, imagine you have a back-office report for internal use: it's OK if it performs so-so; if it takes five seconds to generate, that's fine. Think about these things upfront before starting to build a new feature: how good does my documentation need to be here? How well does it need to perform? How well does it need to be tested? In an ideal world everything would be perfectly documented, tested, and performing awesomely, but when you need to prioritize, think about it upfront. It helps a lot. And these are basically the most noteworthy things we've learned — not unique things, but surprising things, most of them, I'd say. I'm sure we still have many new things to learn, but this is it for now. Thank you for listening. Who has any questions? OK. SQLAlchemy instead of Django, you said? Django, OK. Well, SQLAlchemy — we actually started out with a different ORM called Peewee. But for some of our very performance-critical things we didn't want to drop down to raw SQL everywhere, which is why we use an ORM, and we felt that SQLAlchemy allows you to drop down to a mid-level and still write really involved queries. And let me put it like this: for 90% of the projects you ever do, the Django ORM is awesome. But when you really need to do these weird performance optimizations and use very PostgreSQL-specific features and stuff, I found SQLAlchemy a bit better. But we could have done it with the Django ORM, absolutely. However, we had already decided on Flask because of simple benchmarking — Flask is quite a lot faster than Django, even if you strip out all the middlewares and whatnot — so we didn't really have a natural tie into Django, if you get what I mean. And then SQLAlchemy was a good choice, and I still think it is. Thanks. Any more questions? Thanks. Could you give a bit more detail on the implementation of your maintenance mode page? Yes, we're having to do that currently. Absolutely, it's a very simple thing. A thirty-second background on how our deployments work: we basically push things to a bucket and have the servers pull it and update themselves. So entering maintenance mode basically means we run the deployment script through Jenkins and check the maintenance mode box instead. Jenkins deploys, the servers pick it up; this takes about 20–30 seconds. What it basically does, during our build pipeline executing on Jenkins, is set conditionals in the nginx configs, and it's as simple as: if maintenance mode is on, show this page — a static HTML page. Any more? And if I were to add anything to these excellent guidelines — of course there are endless such guidelines — what has proven to be useful, especially for our company, is writing utilities for testing the server: just small clients. You can write unit tests, but unit tests use prepared environments, which are not very production-like. So if you can just quickly run your clients against production and test what fails, that is also good.
And I think making everything deployable with tools like Puppet, so you can easily just boot a new server and have it built very fast, is also useful — it's also linked to virtualization. Great. Cool. About the profiling: do you use anything else other than having this ability to see the live profiling? We do a lot of things; I just picked this one because I think I haven't seen it that much before. But we're heavy users of New Relic, and we use pg_stat_statements in Postgres. It's an awesome thing: a very small extension, it adds extremely little overhead — less than 1% in most cases — and it basically generalizes queries, so independently of query parameters it groups queries for you and gives you mean execution time, standard deviation, stuff like that. So if you really want to find slow queries: pg_stat_statements; for day-to-day monitoring: New Relic. And that's basically it for performance monitoring. How do you limit the profiling to only the staff users, I suppose? That's simple. You have to be logged in to the system as a normal user, but then we have a little super-user flag for certain users, which we put in the DB, and a Python decorator called requires_super_user — so only those users are allowed to do it. Sorry, people. Any more? Awesome. Enjoy your lunch. Thanks for coming.
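The requires_super_user decorator mentioned in that last answer is not shown in the talk; a plausible minimal version, assuming a Flask-Login style current_user with an is_super_user flag stored in the database, might look like this:

```python
from functools import wraps

from flask import abort
from flask_login import current_user


def requires_super_user(view):
    """Allow the wrapped endpoint (e.g. ?profile=1 handling) for staff only."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        # current_user is the normally logged-in user; is_super_user is the
        # extra flag kept in the DB for a handful of accounts (assumption).
        if not getattr(current_user, "is_super_user", False):
            abort(403)
        return view(*args, **kwargs)
    return wrapper
```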
Erik Näslund - Building a reasonably popular web application for the first time. These are the lessons learned when scaling a SaaS web application which grew much faster than any one of us could have ever expected. - Log and monitor from day one. - Things will fail, be sure you know when they do. - Choose components which allow language interoperability. - Horizontally scalable everything. - Plan for database downtime. - Have a way to share settings between backend and frontend. - Have a way to enter maintenance mode. - And more... ----- My name is Erik Näslund - I'm the co-founder and Head of Engineering at Hotjar. I'd love to share the lessons learned when scaling a SaaS web application which grew much faster than any one of us could have ever expected. Words like "big" and "popular" carry very little meaning, so let me define how big Hotjar is right now using some numbers. We onboard about 500 new users on a daily basis. We process around 250 000 API requests every minute. Our CDN delivers about 10 TB of data per day. We have roughly 3 TB of data in our primary data store (PostgreSQL), another 1 TB in our Elasticsearch cluster, and a LOT more on Amazon S3. These are the key things we wish we knew when we started. They would have made our life so much easier! - Log and monitor from day one. - Have a way to profile your API calls. - Things will fail, be sure you know when they do. - Have a way to keep secrets. - Everything needs a limit (even if it's really big). - Be wary of hitting data type limits. - Don't get too attached to a framework. - Choose components which allow language interoperability. - Horizontally scalable everything. - Plan for database downtime. - Feature flags are a great way to test things out before launching them to the public. - Have a way to share settings between back end and front end. - Have a way to enter maintenance mode. - Require different quality of code for different parts of your application.
10.5446/21134 (DOI)
Hi everyone. Hi. Welcome everyone. Before start, all the organizers and the European society would like to ask everyone to do to the problems we are facing today, like attacks and fear and all these things. I would like to ask you first to have a minute of silence and just to think about things that we as hackers could do to make things better. Thank you. Thank you. So now let's start. Unfortunately, today we set up the show and the stage. First, we prepared an opening, an opening for you, hopefully. Unfortunately, this morning, Oya, who was hosting the stage with me for the opening, was injured, so he cannot be here. So I hope you will forgive any glitches or any problems we have. Welcome. Wow. Was that a gravitational wave there? Anyway, you are welcome to the Euro Python 2016. We are very glad to have you all here and are looking forward to spend an amazing week together. Also, we want to welcome you to the magnificent city of Bilbao, the biggest city of the Basque country, which we are pretty sure you will enjoy as well as the conference. For this opening presentation, we have here the Epps chair. The Epps little flighter. Who could not be here because of an unfortunate accident? The Epps little flighter. The robot of the CIG. So, quick before we start giving you more useful information, a quick look at the attendees evolution. We started very long time ago and we don't have much data before 2008, but we can see we got bigger and bigger and bigger. We have a lot of information, but we don't have much data. We have a lot of information, but we don't have much data. We have a lot of people joining the community. Even more surprising, the evolution in Bilbao went from zero to 1,000 plus people. We are growing every year. Great job, guys. We started the European conference series a long time ago. We are an open community and society. Every year we try and invite more people, more volunteers, more people to help. The best thing to do is volunteer, help us get involved. Volunteering doesn't mean to volunteer just during the conference. We have a lot of things to do. Organizing a conference, that size is really hard. Especially trying to keep a really good quality for the whole conference and the whole experience during the year. So for that, we started the new workgroup concept a couple of years ago. It's working. All the conference was managed by workgroups that are slightly changing from year to year. The main members and the main knowledge remains in the workgroups. I really invite everyone here to join or be interested. Many times, five or ten minutes of your time helping with anything. Or like, per week would be a great help for us. Or eventually, a lot of folks here are skilled hackers and skilled programmers or web developers or whatever. Many times, we are, people find a bug, they file a bug on GitHub and it would take three minutes of their time. But we didn't have a lot of fears helping us with the code. So I really invite you guys to be more in-ghost, to be more involved in the community and helping us and to make a better event every year. We, another very important thing, actually, another big thing to all the volunteers yesterday, a lot of people show up and help us to get through the other bags and everything in a quick time. Another important thing, we just announced a call for interest request call. 
So every, everyone, every team that is interested in hosting the Euripiton next year can reply to this, this announcement and say, hey, I have this community, I'm involved in this community, I do this or that. And I think we'll be great hosts for Euripiton. Of course, you need to have a plan, you need to have an idea about venues, about budget, about how we can do this. And the, the, the, the proposal of this call for interest and start talking with people during the conference, during the conference so we can actually have more things set up. A call for, for paper will be, will follow up based on this and the interactions of this call for interest. And those are the timelines, more or less, those are not written in stone. So they might change. Cool. Next, I'll give you some basic advices to enjoy the conference. First things, first, the Wi-Fi. You see the password in SSID there, both Euripiton 2016. But please, refrain from configuring the network right now. I really put effort in this presentation and I'll love you to see it. We are very aware that this is in vital need and so we are glad to announce that we have improved the last year's antennas. So, if you are having any problem, put your Ketra cap on and go find another one. Also, we have cable for emergencies. It'll be available for speakers at the help and reception desks. But, just for emergencies, like real emergencies, zombie outbreak kind of emergencies. So, if you are moving on the free stuff, please check your bags. If you have any problem, you can ask at the reception desk right away, except if it's about the t-shirt size. I'm afraid you'll have to wait then. Also, be careful with the pinksys. We will not change the size of your t-shirt if you grow during the conference. I'm sure all out attendees are full of good intentions, but please remember the three laws of robotics. I mean, the code of conduct. So, be nice to each other, be professional, and don't spend. And if you find any issue, let us know so we can help. Fabio? Yeah, sorry. So, the main show is running at this level. All the talking rooms and the sponsored booths, food, especially in drinks. But we also have two rooms, one with floor zero. It's for the training track. And floor one, we have open spaces that will run during all the conference. So, a lot of lunch and coffee breaks will happen in this area, and hall one and hall two. And go around, grab your food, grab your drinks, talk with people, talk with sponsors. And no pinches this year. So, Polly? You'll regret that. So, schedule. We'll be probably updating that. We updated that last night because of last time speakers changes. We have five talking tracks doing events, but much more side events. We have two training tracks. They are free. You don't need to register to grow them. They are large rooms, so we hope to have enough room. But it's really hard to, if everyone would use the interface and give their preference, it would be easier. But it's really hard to find the right room for the right talks. You skipped. Go back. Lightning talks. So, the most important talks starting at six o'clock. There are no more important talks after six o'clock. Who has been to a Europe Python before? Oh, that's a lot of people. I was hoping to recycle jokes. So, who's been to Florence? And Berlin? Oh, there are fewer people. So, first of all, for our Spanish hosts. No. No. I do it on English. So, lightning talks are short talks of five minutes. No longer than that. 
If you try to announce a conference, you only have two minutes. If you try to skip on sponsoring a recruiting sessions and try to recruit on stage, you'll have exactly zero German minutes to do it. And lightning talk man and lightning talk man from Brexit will... divided kingdom will celebrate the show. So, if we go to the end of the five minute or two minute or zero minute, I will raise one arm and the audience is asked to applaud with two fingers. Test it. And if I raise both arms, yeah, you do a frenetic applause. And enjoy it. As Laura told me, that will be the last human-hosted lightning talks. Because... So, enjoy. And oh, there's a sign-up sheet for those who didn't understand the Spanish. I didn't. It's outside this room. Please sign up with your name. Please spell your name so that I can read it. And the title of your talk. If it's about a web framework, just put in web framework and I'll make up a title. So, just put down your name so I can read it. Thank you very much. Yeah, later, the matter of dragons is left there. The next 05 and 5 years will be only robots. So, we would like to really like to thank all the keynoters for this year. We have a lot of keynoters, really high quality, and we are really lucky to have all of them. Thank you. Side events and things happening. We have a high data track, Django Girls Workshop, Beginners Day, those two happened yesterday at the other venue. We have recruitment sessions happening tomorrow, poster sessions, panels, interactive sessions. There is a local track happening on Wednesday. We have sprints over the weekend. As I said, we have open space during the conference, and you can see we have a board outside to sign. So, if you want to talk about anything you want, just sign there. We have help desks happening in the Maker area, and we have the sponsors' events. Go visit their booth, talk with them, they're really nice. We had a lot of good experience this year talking with sponsors, and they really make this happen. All the show would not happen without sponsors, or at least would not enjoy the very good pinches last year, or the food this year without them. So, we have, if you're new to your Python, we have panels, as I said, interactive sessions. You can talk about your work and reach each other. And we have a new Maker area, the opposite side of this hall, right next to the Euro-Python big writing. Go socialize, we have a lot of tables, socialize with others, don't be shy to speak with everyone. One year is here to talk with others. Talking is really the best part of experience and the confidence, I think. We are recording, so don't, if you miss something or anything, you will be able to see things. We will not record the trainings, and it will take some time, and we will really try to publish the videos as soon as possible. No promises, we would like to do it the next day. It's probably going to happen, but in a few days or a couple of weeks, it should be online. In this part of the presentation, I'll show you the tools needed for the advanced mode. The Euro-Python is full of wonderful people, so the social event is not to be missed. You can have your ticket at reception desk today in Chamorro if you ask nicely and give 20 euros. There are not many left, so hurry up. Of course, face to face is not the only social exchange we encourage. We will use the Hashtag's Euro-Python or AP 2016, and we invite you to use the Telegram channel and mobile apps activities stream. 
Speaking of which, you should definitely download as it will provide you with lots of information, even offline. It is available for Android and iOS, and you can download it searching for Euro-Python in the stores. The open spaces. Open spaces. We are running open spaces all the week. The C-room is dedicated to that. Open spaces are basically talks that are last-minute things or discussion panels or anything that you want to set up to talk with others that you either didn't have time to propose or you just happened to last week, or you just met someone and you would like to have more talking about those kind of things, or you just want to get together with other folks. So, as I said, we will put a board outside. You can just go sign and put the topic of your interest and create a session anytime and then show at the right place at the right moment. Volunteers. Again, thanks to all volunteers. Anyone can volunteer. You can be a session chair, meaning that you will basically be in a room helping to present the speakers and track time and make sure everything is working fine. You can find volunteers and organizers that are around and ask for help. The organizers are with the green shirts. Volunteers are wearing the red shirts. We have a lounge area with two football tables and some arcade games. Those are the cost is 50 cents and we will be donating this money to a local charity organization. So, we have a few events this year. First, we are very sorry to not have our friend from the authorized commission. So, the first one is you can find this device around and if you find it, you should tweet with the hashtag then we may see you. We will announce the winner on Thursday and we will have a free ticket to a cider house on Thursday. We have a Pokemon Go contest. Yeah, the conference center is a gym. Honestly, I don't know what it means. But whatever it means, the owner of the gym will be the winner on Wednesday at 5 o'clock. You can ask more information later in this afternoon on the desk if you are interested in all this. We have also a Euro-Python photo contest. Every day, we will be giving a social event ticket as a prize. Basically, you should take photos of what you think represents the Euro-Python conference and the Euro-Python community. It doesn't mean that the conference center is really beautiful and nice, but also pictures of people and other things that you think are the soul of the conference. Take those pictures and tweet it. And of course, remember the hashtags when you tweet. Turn off your mobile phones during the lightning talks and the whole conference. Reminders. Before you talk, remind our speakers, check that your presentation is okay. Go check your laptop with them. On your room before you are called. Be sure everything works okay and your slides are good. After you talk, it will be really nice for your attendees to have your material. Please leave the adapters in the room. It's always hard to go and find the right person. It's easy to run out of your spare parts. Don't take your own follow-up. Yes, exactly. That's very important too. No food and drinks. Bringing them from outside of the venue. We have a vegetarian area during lunch. It's really hard in some places like Bilbao to have enough attention to vegetarians. We manage to do that in a way that we think we hope is okay. It's hard to be sure that all the food is okay for vegetarians. We have an area that is only for that. If you're not vegetarian, be sure that you're not eating there. That's that. 
Enjoy the conference. I hope you have a great time. Thank you.
Fabio Pliger/Endor - Welcome Welcome to EuroPython 2016
10.5446/21136 (DOI)
And I can disconnect. Spoiler alarm. Another spoiler alarm. That looks good, huh? Cool. You want to start? So, welcome everyone to the closing session. Again, thanks to the sponsors for hosting, for having us this year, and helping with the organization. Definitely, we couldn't afford this venue. We couldn't afford the food, the pincers. So, thank you very much. Yeah, and the last thing about sponsors, we're really happy as organizers to see how much you and we're interacting with the sponsors, and there was really good interactions and conversations. So, that's really nice. It's not just about the money. Closing session. Yes, here we gave a lot of numbers. This year we will be more short. We basically have more or less the same number of tickets solo as last year. We have more than 70 light bulbs in the makers area. A lot of lumen. A lot of consumption. Everything in a green way. So, good. Okay, I'm going to take this one. So, we had a few CAC cases this year. We had six cases in total. We solved all of them, and in one case we had to send the attendee home. So, for next year, of course, we're going to try to lower the numbers again. We'd like to ask everyone to pay attention to how they interact with the others. Want to continue? So, this year we would really like to focus more on the people helping, and people really running the conference. That's because of them that we are here enjoying all the good talks. So, we will invite every team to the stage to come here and get some applause they deserve. So, first, the on-site team, please, can come to the stage. Thank you. And of course, there is OER as well. The conference administration team. Finance work group. Sponsor's team. Okay, new one. Communications work group. Support work group. I thought it was finished. I notice it's quite a small team for support. They have a lot of work. Financial aid work group. Marketing and design. There's also meeting there. Program work group. Web work group. This is class OER. Media work group. For Anthony, we have to wait to clap because if the videos don't go live, it's his fault. Cut-off conduct work group. Now we would like to thank and come to the stage, the Django girls and beginners, the organizers. Follow us there. Okay, so now... Okay, so we have 51 volunteers. Anyone in the room, can you please come to the stage and get your applause. All the volunteers. Yeah, volunteers. Yeah. Oh, yeah. A big applause for OER. OER is our boss. He couldn't be here the whole week. Coming to open the venue, he broke his leg and he had a lung surgery. He was very happy about the conference and everybody enjoyed their time. If you miss him, you can come to the... Come to this. Thank you. You couldn't stay or whatever. You are really part of the show, so you can stay. The APS used to be just for the Python conferences. Now we made a lot of heavy lifting in the last years to build a structure to raise some assets and some money to try and help with... We have a lot of people outside of the European Python conference. We will try hard to make this happen for now on, to support the Python community in Europe. You should really apply as a member to the website if you want to help or you just support or you want to know more. This is my last year as chairman. I am retiring like Obama or whatever. I would just like to thank everyone for all the help, all the things and all the good and bad feedbacks because they helped us to try and build a better conference. Those years have been quite hard. It is really... 
Every time the Python week and all the feedback and interacting with everyone and see how it is important for the European community, the Python in Europe community, it pays back. I am happy to leave, not really leave, just stepping back from that. I will keep happening to make place... I am stepping back just to make place for younger and people with more time and energy. I am getting old. I am getting old. So thank you. I would like to thank the previous board. Most of the board is still the new board. I would like to call the new board and thank everyone and welcome the new chair, Mark Andre. Thank you. Thank you Fabio for passing on the torch. I am going to be the chair of the EPS for this year and I hope to do a good job. I am hoping that EPS will be more open and that is why we opened the mission of the EPS, to be something that is available for everyone in the Python community in Europe. It is not just for organizing conferences. the EPS, we want to do something as sort of like a European PSF thing. And we hope to make this happen. As you can see, we have more board members now. We have two more because we want to spread the load a bit, the workload. Everyone on this board will be doing a lot of work. So organizing this conference was a lot of work as it has been in the last years. And it's a lot of work. We have a lot of work to do. We have a lot of work to do in the last years. And it's not going to change. So we definitely need more help from you. So how can you help? It's actually very easy. You just sign up for work group. You just saw the various work groups that we have. If you think that you could help in one of these work groups, just write an email to the board. And then we will get you involved in the conference. Most of the work is before the conference at least is remote. So it's easily possible to, for example, like we had in this year, to work from Brazil to organize this conference. And we definitely need more help. Of course, you can also then help as onsite volunteer, as the 51 volunteers have done here at the conference. But we also need help before the conference, of course. Then I have another message. We have these. And we have lots of those. We need to get rid of those. So I'd like to invite you to take one and get nine free, at least, at the conference desk so we don't have to take them home again because they're quite heavy. And the idea behind this is that you take this in your community and then you spread the word about Python so you can be something like an ambassador for Python to help spread the word in companies or maybe introduce other people who don't know Python yet to how great Python is. This is a good thing. So for next year, you're probably interested in what the current plan is. For next year, we have started a call for interest, which means that we are asking people to tell us whether they're interested in running the conference. After that, we'll have a CFP, the official one, more formal and everything. Up until now, the CFP runs until next Friday. Up until now, we've had one proposal, CFI proposal from Italy. It's going to be not in Florence again, it's going to be somewhere in the plan in that proposal to have it somewhere in the Milano area where it's not clear yet. We still have to look at the venues. Of course, there may be other teams that submit proposals until next Friday, so we can't really say definitely yet whether it's going to be there or not. But given the plans, chances are rather high, I'd say. It's going to be in Italy. 
So you can make plans for going to Italy next year. Great. And another call, we need help with the chair down of the conference. So we need to help with taking down all the tables in the exhibit hall. We need to put all the TVs back into their boxes. We need to do something about the way too many bags that we have. We need to take down the whiteboards, et cetera. We're going to manage all that from the conference desk. So if you want to help, please come to the conference desk and then we can then sort out who will help in which area so that we can do everything quickly. Right. And so we would like to issue a safe trip home regardless of where you're coming from. And of course we would like to have your safe trip to Europe in 2017 next year. So we would really like to see you again at the conference. We've heard this at Naomi's keynote. Come for the language stay for the community. Of course, you and Bilbao, right? So it should read like this. We loved having you. And we hope that you enjoyed the event. Thank you. Thank you. Thank you. Thank you. Thank you.
Fabio Pliger/Marc-André Lemburg - Closing Session Closing Session
10.5446/21140 (DOI)
Hi, my name is Fernando. I'm Katya. I'm Paola. We are from Brazil. We are sharing experiences from a lot of countries, Namibia, Paicon UK, Paicon Uruguay, Paicon Japan, Paicon Italy, Europaiton, Paicon Montreal. Experiences about inclusion, about diversity. First of all, diversity is a statement of Paicon Software Foundation. There are always problems, inclusion problems at our conference. At Paicon, three, four years ago, three still Pinterest developers had problems. At SciPy, we also had problems. And we have always, as is necessary, to have efforts to address these problems. Guido, our BFL, always are wearing this Python is for girls t-shirts that's years at Europaiton Keynotes, and is sending a sub-leminar message for us that inclusion is important for us as a community. Definitively, Guido is not a pop star. Guido is always giving attention to everyone at Paicon. Last year, at my breakfast at Paicon Montreal, I took this photo of Guido listening with full attention, a pulveric song with at least 20 minutes, and a lot of, in my opinion, a lot of speakers in Python community at following the Guido way of life at the corridors. Some numbers. In this year, we have 40% of talk by women at Paicon. And in this week at DjangoCon, we have also 40% of talk by women in the blinded, select, double blinded, select process. Special mission to Jessica, efforts to sending a lot of emails to potential speakers. The last week in the FISNI, FISNI is an international forum for the FISNI software, have one track with only women, Pylates Track. In the last year, the FISNI have more than 10,000 people in the conference. This little Pylate girl, she's 12 years old, and she started coding using Scratch when she was 5, and Python with 8. She gave a talk this year at the forum in Brazil, and she was teaching how to code in Python, and she also gave a talk last year at the same forum in Brazil. We have a problem in this track because the speakers are almost a majority of students. In the Python community, have a crowdfunding campaign to cause the travel to the Pylates speakers. When I tweet the campaign, the two first to help are Manuel and Joanna from Argentina. Manuel and Joanna have a project named Argentina in Python. It's a project that are doing jungle girl workshops in many countries in South America. They are travelers, and Manuel and Joanna is a very happy couple, because there is no better recipe to be happy than go through your life doing good to the others. Two years ago, I was at Python UK, and last year at Python Japan. In my opinion, the most fun part of Python UK and Python Japan is a track for kids. This tweet told my 11-year-old son he can go on his Xbox after dinner, and we spent the last past two hours coding in Python on his hospital pie. No one has a Minecraft workshop at Python Japan. No one wants to leave even for lunch. There are kids talks at Python UK last year. Sometimes we are too serious at the conference, and participating in a workshop for children helps not to give so much importance to talks, and have more empathy to with the others. At the first day of Python Namibia, at the end of the jungle girls workshop, we arrived a bit earlier to the dinner party, and the Namibian people are very friendly. The Namibian people gave us a show of thanks and songs. The Namibian people are very friendly. They are very friendly. They are very friendly. They are very friendly. They are very friendly. It was a show. 
There are some interesting things like going to the visit people and preparing some dinner to the elder. Last year and this year, there are half percentage of attendee were women, and these lighting talks show us that are mixed races. Python Japan theme this year is about inclusions, and how community are happy with this. I will talk about some personal stories. I invited Aisha to share this story, but unfortunately she is at this moment at Python Fili, at Djongokong. Aisha is organized of Django girls. Django girls have some different problems, because some girls do not have personal computers, but it is a very successful workshop, a splitting floor. Aisha gave yesterday talk about this experience at Django Cone. She is co-organizer of first Python Nigeria. The last year in October, I went to my first conference in Python Brazil. I met Django girls and I come back to Rio. After three months later, I organized the first Django girls in Rio, Geneto. We have 402 inscriptions and 30 attendees. After that, we don't have Django girls in Europe. It is a big conference, and I have come to organize it from Brazil. I worked as a Java developer for five years. Eight months ago, I first met Django girls in my city. Then I went to Rio as a coach at Django girls Rio. This is a picture from the Django girls in Rio. After that, I got a scholarship to attend Django Cone in Budapest in March. This year, I got a job as a Django developer in Czech Republic. I moved to Rio and I am working with Django now. This year, it is my first Euro Python. I am with my friend, organizing the Django girls in Bernal in Czech, at PyCon CZ, to be in October this year. This is Carlos, he is retired and he participated in Django girls at the last Python conference in Brazil. We will show you a video. I was depressed for two years, I was very heavy. I started suffering because of the disease until I decided to react. The reaction was to go through the internet and try to find something useful to do. What I found useful to do was to study, to feed myself in some way, and I was eating until Alzheimer's. Through this search, I started to study something, other things, I studied Django. All I found was funny, but on the internet, I even got to Python. Through this conversation, this interaction with them, I am learning Python. Python is a very interesting and catchy programming science. Through Python, I met Raspberries, Arduino, today I study Arduino, Raspberries. I came here to personally give this testimony to Henrique Basso, to meet Fernando Massanura, who is here. Through the good will of these people, who propose to teach graciously, and encourage other people to study, to run after me, I found myself again with the age I have, already an opponent for immaturity, to have a perspective of life and to have a very serious reason to continue to live, to study and to progress. Thank you. Also, the Pilates of Sao Paulo, all the co-founders are from the other fields. They are not programmers. In free events, there are a lot of no-shows. But in only one event, there are zero no-shows. Last year, at Pilates events, and the Valentines days, Pilates decided to make a workshop for couples. There are some links. How to implement your code of conduct is a blog post for Ola Sedenka. It is a post about a talk, given by Baptiste and Ola yesterday. It's a very interesting link. And the last one, last obvious conference checklist, I think is tangible and concrete. 
But import community is Nepal: this picture is of a free software community in Nepal that submitted a project to the Python Software Foundation to teach Python. Import community is Uganda, the African Python community in Uganda. This picture is from the last PyCon Iran. But most important, import community is you, because you are a part of this community, so make things happen. Thank you. Thanks. Questions? Thanks for the awesome talk. I've got two questions. The first is: let's say I am willing to participate as a coach. Where can I go? Is there a mailing list or something where one can offer to help organize some event? The other one is a bit critical. Many of the countries you are showing here are former colonies. I'm now living in one of the poorest countries of South America. I do believe in code literacy, but my main fear is that through community we are training the next generation of colonialism, because the best programmers go to work for Facebook, and that means taking the resources away from the countries we are working in. So my concrete question is: how do you feel, each of you, that learning to code actually changes people's everyday lives? Okay, let me start with the coaching one. In Namibia, we had a lot of very different attendees. Some girls came from social science; there were feminist activists who wanted to make a blog to spread their social activities around Namibia. I think there is a great impact. I saw Aisha's slides yesterday: there are three stories about the three Django Girls organizers in Nigeria, and the impact of those three girls in Nigeria is very big, because these are social areas with great impact. PyCon Namibia was in January, so we have had six months. In six months there have been six Django Girls events in Nigeria, with, I think, 180 attendees and a lot of people interested in participating in the community. And the first Python meetup with men and women in Nigeria happened because of the work of the Django Girls organizers. So there are a lot of things happening around inclusion. Thank you very much for your talk. It's a lot of fun and really enjoyable to go to places like Namibia, but the end result also has to make sense from an economic or business perspective, so that the software developers in those countries become significant enough to bring money into those countries and develop the economies and skills there. Do you have any idea how we can best achieve that, or what the best thing to do is to advance it, so that as well as all this really important community work, there are actual real economic advantages to be gained? Good question. There are some companies in our cities that are moving from Java to Python because of the development of the Python communities. For example, in my city I created an online course, and we have some companies that decided to move their entire stack from Java to Python because of the community, and they are now earning more money and have more projects because of the ecosystem created by the Python community. Good question. And they work better. More questions? Any appreciation? Okay, sorry. I'd like to ask another question, and we were talking about this.
Why do you think it is that in places like Namibia and Nigeria, which are quite patriarchal cultures in many ways, these conferences are so successful? At PyCon Namibia we had about 50% women, while in Europe we struggle to do that. Why are they so successful and we're not? I really don't know. We were discussing this at dinner two days ago, because South Africa, a country very nearby, is very different; in South Africa there are some racial issues. My opinion is that it is the friendly nature of the Namibian people. I didn't quite understand: this 50% that you have mentioned in several places, does it happen by chance, or is there an active policy of the organization to make it happen? Of the attendees of the conference, 50% are women. Yeah, but my question is: when you organize the event, do you say "we have 50 places for women and 50 for men", or does it just happen? Do you mind if I answer that, or does anybody else have a question before I do? If someone has a question... So, about the 50%: I can only talk about Namibia, and Fernando can talk about more places because he has more experience, but in Namibia about 50% of the students studying computer science are female. It's not the case in South Africa, for example, but in Namibia 50% of IT students are women, which is very different from other countries. And even if you make huge efforts to address the imbalance, they're not always successful. It takes a huge effort to make a small effect, and in Namibia we made efforts, but I've never seen a conference like that. There's probably somebody else who wants to ask. So thanks.
Fernando Masanori Ashikaga/Paola Katherine Pacheco/Kátia Nakamura - import community One of the biggest differences, in the Python community, is its effort to improve diversity. The authors will share experiences on diversity obtained from ten different countries: Namibia, UK, Japan, Brazil, Italy, Argentina, Uruguay, Germany, Canada and Spain. There are other reports, that also we would like to share, which are only beautiful stories of how Python reaches the most distant people and places you may never have imagined. ----- One of the biggest differences, in the Python community, in relation to other communities, is its effort to improve diversity. There is even a Diversity Statement at PSF: "We have created this diversity statement because we believe that a diverse Python community is stronger and more vibrant. A diverse community where people treat each other with respect has more potential contributors and more sources for ideas." The authors will share experiences on diversity obtained from ten different countries: Namibia, UK, Japan, Brazil, Italy, Argentina, Uruguay, Germany, Canada and Spain. There are other reports that we also would like to share, which are only beautiful stories of how Python reaches the most distant people and places you may never have imagined.
10.5446/21143 (DOI)
Welcome all to "How to make cerveza with Python". Here I have Chesco Igual. Give him some claps. Well, thank you for being here. Welcome all. My name is Chesco Igual. I am a backend developer at Elements Interactive; it's a company from the Netherlands. I'm going to talk about how to brew beer with Python, more or less. There are some things that I would like you to leave this room with, and some of them are: I would like you to know how to build an IoT backend; which technologies, protocols and tools are out there, with some comparisons so you get to know what exists and which option may fit you best if you have to do a project like this; some backend considerations that people will not tell you about, but that you as a backend developer need to make sure are taken into account; and you will see a full running architecture for an IoT backend. And of course, you will learn how to brew beer. Actually, you won't. Sorry. If you want to leave the room, you can skip the next 30 minutes. First I want to talk to you about MiniBrew, which is the project for which we developed this platform. MiniBrew is an amazing machine: you can actually brew beer with it. It will guide you, starting from getting to know new recipes through a mobile app. You can choose the recipe you want to brew, it will be sent to your home, and you just pour the ingredients into the machine; the machine will take care of everything from the first to the last part of the process. It will also teach you how to brew beer, even if you have no idea at all, so you start with no knowledge and after several brews you will be a total expert. This machine comes with a mobile app. The mobile app shows real-time data of what the machine is doing at this moment: the current temperature of the ingredients inside the machine, the current step, the target temperature (in case you're in a very cold or very warm place and it takes longer to reach that temperature), and so on. It will also tell you if there is any error in your machine, for instance if it loses connectivity. And it allows you to start and stop brewing sessions. So you actually only interact with the machine through the mobile app. This is not everything the project has, of course, because otherwise I wouldn't be here giving a talk. Actually, there is Python in the middle. Well, not only Python, also some other things. So I will focus the presentation on explaining what you have to put in the middle of those two things for it to work seamlessly. Let's get technical. First I want to share the project requirements that you will probably have when you get a customer or want to build something like this. We will need to deliver real-time data: from the mobile phone, we want to know exactly what the machine is doing in real time, or close to real time, but quite close. We will need security in the communications: we're actually sending sensitive information, because the recipe will sometimes be copyrighted and we cannot let anyone see it. We will need obfuscation, so that in case the channel somehow got compromised, nobody would be able to see what is actually going through it. Authentication.
If we get somebody knocking at our API or our platform saying it's a device, is it actually a device? And if one device gets compromised, can we disable it? Two-way communication, of course, because we not only get real-time data from the MiniBrews, we also send actions to them, so we need communication in both directions. Resiliency: this system has to be resistant, and in case it goes down, it needs to come back fast, because people expect responsiveness from their devices. And it also has to be lightweight. In IoT we usually have, and this is the case here as well, really constrained environments, and this makes us have to mind every kilobyte that we use in our project, especially on the device. We also have to take into account that the bigger the packets of information we send, the slower it will be. So those are two considerations about lightweightness. And some other project requirements that we may get: the last known status. What happens if a machine suddenly goes offline? We need to know what it was doing right before; that will help us debug it. Debugging of the machine, of course: the production staff need to be able to take a MiniBrew that is not in a correct state and debug it, through different channels of communication used only for debugging purposes. We will need an admin site to be able to set up the users, the MiniBrews, their keys, everything. A mobile app API; I don't need to explain that, if there is an app. Rainbows, et cetera: by this I mean that this is a project that is going on, that is growing, so we have to take into consideration that we may get new requirements that we didn't have when we were given the first round of requirements. Actually this happens in every project, right? But in this case, being a startup, it's more likely to happen. Then, the other thing I mentioned before: as a backend developer, you need to take into consideration some things that will not always come as requirements. For instance, scalability, which is an overused word; it's in every job offering, every company claims it, every project is scalable. But in IoT, on this project, we really have to take it into account when building the architecture. Proven technologies: by this I mean that we cannot use a technology that is hot right now and is going to be obsolete in one or two years. We cannot afford that. A small tech stack: this is important for several reasons. It's going to be easier to maintain for the people who come after us, it's going to have fewer errors, and the integration tests are going to be easier; we're going to have a much easier life. Error tracking: if something goes wrong, we want to be able to debug it. This is what we usually do in Python, but before entering a project you have to take it into account. Reduced data transfer: reducing data transfer does two things, it makes messages faster and it makes the bills from our cloud provider cheaper. And documentation, the most important one, because we all love documentation and we all love to write documentation. So this is something that we have to take into account. Actually, nobody writes documentation. So this is the face you may make when you see this list of requirements, and you never tackle a project like this. So let's do something, and let's go step by step.
Let's go step by step, and let's start with the communications protocol. This is a decision that, depending on how we make it now, will affect the possibilities we have later for technologies and software. So we're going to analyze very quickly the protocols that are out there. Some of them are already out of scope; we are not even considering them, because of throughput, but I will mention them because you could probably build IoT with them: HTTP, XMPP, DDS, AMQP. AMQP is good for servers, but not so much for IoT. Then we have MQTT. MQTT is one of the big players among protocols for communicating with IoT devices. It was developed at IBM and then given to the Eclipse Foundation as open source. MQTT is especially tailored for constrained environments and has some good things from AMQP. For instance, it has quality of service: you can decide whether you want to deliver the message once and not care what happens, or deliver the message once and only once, with a range of quality levels in between. That's a very nice feature. It also allows routing based on topics, which allows spreading messages to several receivers. So this is good. Then there's CoAP. CoAP was developed by the Constrained RESTful Environments (CoRE) working group at the IETF. It's also one of the big players among IoT protocols. CoAP is slightly different: it uses more of a client-server configuration and uses HTTP verbs, even though it's not HTTP, so it doesn't have its disadvantages. Both are targeted at very constrained environments, but even though they are very similar in that respect, they are very different in other things. For instance, MQTT uses a broker in the middle and everyone connects to it as a client, so you can do pub/sub and all that kind of stuff, while CoAP uses client-server in both directions. That's a big difference already. Another thing is that MQTT uses long-lived TCP connections while CoAP uses UDP. That's another big difference. For these reasons, and you could probably do it with either, we chose MQTT. So now that we have our protocol, it's time to look for a solution. Out there you will find a lot of companies that claim they have a backend ready for IoT so you don't have to program anything, and that's kind of true, more or less. We're going to analyze one of them, because there's no time for more: one of these comprehensive backend solutions, to see if it fits our requirements. I'm going to analyze AWS IoT from Amazon and see if it's actually a good choice for our case. I'm going to go very, very fast through the architecture, because Amazon declined to sponsor my talk, so I will go very, very fast. There are some things that we like from Amazon. For instance: authentication, authorization, the registry. This was one of the requirements we had; we needed to authenticate the devices, the machines, one by one, individually, in case one was compromised. Amazon provides that. MQTT, of course: here's the broker. Device shadows: this was also a requirement for us, we needed to know the last known status; here it is, provided out of the box. And an API: we also needed an API for our apps; here it is, out of the box as well.
The small disadvantage, being picky, is that you're going to have to use a very generic API, and maybe it won't be as lean as if you built the API yourself for only your purposes. It's documented, because you were probably wondering about that. And it's ready for us out of the box. Some things we don't like: the SDK. This is a really small thing, but the SDK used a couple of hundred kilobytes, and that was actually too much for our constrained device, so we couldn't really use the SDK. In another project this probably wouldn't be a problem, but in our case it was. Then there is the rules engine. This is not a problem in itself; it's good, but it's good if you're using other Amazon products. Amazon makes it really, really easy to connect things inside Amazon: you can send information, set rules, notifications, everything, but only if you're inside Amazon. If you're outside, you need to do quite a bit more work to connect it to the external part of your platform. Now, the thing that we didn't like too much is that the applications have to connect using an API, and an API doesn't sound really real-time, right? Having to do an HTTPS call every time we need the latest tick of information doesn't sound too good either. So let's analyze the project requirements as we had them, and see what Amazon achieves out of the box. We get security: it's a secure channel. We get authentication. We get two-way communication. We get good resiliency; Amazon claims they are very good at that, and I think they are. We get the last known status. We get an API. We get as much scalability as your wallet can afford. We have proven technologies. Well, proven technologies: AWS IoT has not been out there long enough to be called a proven technology, but there's a full team working on it, so we can be quite sure that support is not going to be dropped in one or two years. That's why we tick proven technology: if they have bugs, they'll fix them. At least we expect so. And documentation, the most important. Some things Amazon doesn't give us: we're not happy with the way they give us the option of real-time data. We don't have obfuscation, because the data we're sending is JSON; that's the format used. Lightweight: no, for the small reason I mentioned before, that we couldn't put the SDK on our device, but also because we're using JSON, so we're actually sending more data than we should. I'll come back to that later. Rainbows, et cetera: I didn't mark it. Well, actually, I marked it red, because it will be hard for us to implement new requirements if they do not fit nicely into Amazon; then we'll have to do them ourselves, outside. Small tech stack: there are some things that we will not be able to do in that stack, like the admin site, debugging, all of that. So what happens? In the end we would have to have our own admin site and our own stack of technologies to run it, and we would end up with two big pieces of software: the one related to Amazon, and the other one. That's why it cannot be considered a really small stack. And then the reduced data transfer: if, one, everything has to go through the server, and two, we're using JSON for this, it's not going to count as reduced data transfer. The conclusion is that you can use it, but only if it really matches what you want to do.
Otherwise, let's check how to set up our own solution. The first thing is to get a broker that supports MQTT, and we have a lot of options; every week or two a new company pops up with an MQTT server. I'm just going to show a few: ActiveMQ, Mosquitto, EMQTT, VerneMQ, HiveMQ, CloudMQTT, RabbitMQ. There's not a lot of creativity in the names of these kinds of products. You could choose any one you wanted: some you pay for and they come already deployed, others you deploy yourself. Going very fast, we chose RabbitMQ for a few reasons: it's been a top player for many years; it has proven scalability, both vertical, of course, and horizontal; it can convert from MQTT to other protocols, which is a goodie (it's not something we require, but AMQP is something that servers like); there is no payment for use; and we were familiar with it, so that also counted. That's why we chose it, but if you chose any of the others it would probably be fine as well. Now, we get an extra bonus for running the broker ourselves, and that is that we can now do pub/sub: the MiniBrew can send the information to RabbitMQ, and all the devices that want to listen to the actual brewing session can be listening to Rabbit. So the real-time information doesn't need to pass through the server anymore. And what is Python doing while the devices get real-time data? Chillaxing. There's no need to do anything. Now what? Let's talk to that broker. We have a broker, right? So we'll use Python now, finally. There is a library from Eclipse. I promised myself I wouldn't show any code, because this talk is for beginners and also very short on time, but now that I've already shown it, let me go over it quickly. It's as easy as importing the MQTT module and setting a few callbacks: on connection, subscribe to a topic; on disconnection; on message, do something with that message. You connect to the broker and voilà, you've got it. It's that easy to get an MQTT client going in Python. So we have one thing fixed. Time to look at the API, in Python, of course. So, options. Here I could go through all of the options, and actually there would be a lot of fights if we started that discussion: just discussing this slide could take a full talk, and discussing each option would take a full talk as well. So I'm just going to go over it fast and tell you which one we chose. Bottle, Django, Tastypie, Flask, Falcon, Django REST Framework, and there are even more. We chose Django REST Framework because, of course, we were familiar with it; we already had part of the API built with it; and it's scalable, there is loads of documentation and external plugins and apps, and everything you may need: you can cache everything, so it's really good. And with the latest version, which I only got to know a few days ago, there are some goodies like automatic documentation and things you should really check out. So that's it. Oh, sorry. Okay, so how are we doing now, after these choices? We have real-time data. We have security, because through the broker you can also connect with SSL. Two-way communication, thanks to MQTT. And we reduce the data transfer by skipping the trip through the API. Another thing we get is the last known status, because we programmed it ourselves; now it's our own API.
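The Eclipse library referred to here is paho-mqtt. As a rough sketch of the callback pattern described above (the broker host and topic name are made-up placeholders, not MiniBrew's real ones):

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # (Re-)subscribe once the connection is up, so a reconnect restores the subscription.
        client.subscribe("minibrew/+/status")

    def on_message(client, userdata, msg):
        # Do something with the payload, e.g. store the last known status.
        print(msg.topic, msg.payload)

    def on_disconnect(client, userdata, rc):
        print("disconnected, result code:", rc)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.on_disconnect = on_disconnect
    client.connect("broker.example.com", 1883)
    client.loop_forever()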
We have debugging: we can use the admin site to help the staff debug the machines. We have the admin site, with Django, of course. And we have the API we mentioned. We also have error tracking, because we are familiar with tracking errors in Python, so we can assume it will be easier for us to debug Python than to debug anything else. Some of the things we already get by using those two are resiliency and scalability, although this depends, of course, on how you do your DevOps: how you deploy, whether you deploy in clusters, et cetera. But there is plenty of documentation on how to make these systems scalable, because they are very famous technologies. We get rainbows, et cetera, because now it's Python and we can do everything we want with it; we all know that, right? And proven technologies, of course; otherwise we wouldn't be at this conference. Small tech stack: now we have only these two pieces of software, which we really understand. And some things we still don't have. We don't have obfuscation: the messages are still being sent as JSON. We don't have authentication: anyone can connect to the broker and just start messing around there. And we cannot quite say it's lightweight: on the device it is, because we're just sending messages over MQTT, which is super light and doesn't need any special library, but it's not lightweight in the sense of the data, because we're actually sending more information than needed. And documentation: that's fine, there is plenty of documentation for these two things. So let's start solving the remaining parts. Authentication, in Python as well. There is a really nice plugin for RabbitMQ. It may sound a little bit tricky; let me explain it. It's a RabbitMQ authentication backend via HTTP. What does this mean? Now, when the device, the machine, wants to connect to the broker, instead of the broker deciding by itself whether it can connect or not, it will pass that information to Python, and Python will check the database and say whether that machine can actually connect. So we don't need to have all of that in the broker; it can be in our normal database. Nice. And remember, it's a long-lived TCP connection, so this happens just once. That's really cool. This works for devices as well. So we tick another one: there's authentication now. Let's go for the last one: obfuscated and lightweight messages. How are we going to do this? With Python? No, with Google. If you don't know protocol buffers yet, you should, if you use any kind of API. It's a protocol from Google that... well, explaining it could also take another ten minutes, another talk even, but I will try to go very fast. As you see in this example JSON that the machine could send to the server, or the server to the device, in each of these messages there is going to be a lot of repeated information: you send the ID key every time as a string, the timestamp key as a string, the data, action, sensor 1, sensor 2, and so on, repeated over and over and over. That consumes data and also makes things slower, so we don't want that. And it's also really easy to read, which is not great for obfuscation. How are we going to solve this with protocol buffers?
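A rough sketch of the HTTP-auth idea, assuming the rabbitmq_auth_backend_http plugin and a hypothetical Django model holding per-device credentials (the names below are illustrative, not MiniBrew's real code). The plugin calls our endpoint for each connection attempt and expects a plain "allow" or "deny" body:

    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt

    from .models import Device  # hypothetical model with per-device credentials


    @csrf_exempt
    def rabbitmq_user_auth(request):
        # RabbitMQ's HTTP auth backend sends the client's credentials as form
        # or query parameters, depending on how it is configured.
        params = request.POST or request.GET
        username = params.get("username", "")
        password = params.get("password", "")
        allowed = Device.objects.filter(
            serial=username, secret=password, is_active=True
        ).exists()
        return HttpResponse("allow" if allowed else "deny")

Similar endpoints can answer the vhost and resource checks; disabling a compromised device is then just a matter of flipping its is_active flag in the database.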
Protocol buffers: what it does is that the specification of how the message is built is shared between the parties that are communicating, but it is not transmitted over the wire. You have the specification on both ends, and you only send the data that is different every time. If anybody from Google were here they would probably shoot me, because it's not really exactly like this, but this is the idea, so you can see what is actually being transferred. Google also didn't want to sponsor my talk, I don't know why, so I'm not going to show you in detail how it works, but it's really easy: you just define which fields your message will have and whether they are required or optional, so you can change the requirements later. They are always backwards compatible, so you won't have a problem if you upgrade the message you send while somebody is still using an old spec. So how are we doing now? Both things checked, of course. We are happy with the solution now. This is the final architecture, very small and simple, but you can see we have the web servers; we have the three blocks we talked about: some endpoints for RabbitMQ, an MQTT listener and responder, and the Django API for the devices. We also have the Rabbit broker for devices, machines and web servers to communicate, and of course database, cache, and everything that is usually used in a web platform. Everything is scalable, because everything is in the cloud and all of those technologies are easy to scale. So this is it, this is already it. My boss is probably going to watch me on YouTube, so I want to tell you that I work at a really cool company called Elements Interactive. We work in Barcelona, in Spain, and in Almere, in the Netherlands. So if you want to do cool projects like this, be sure to join, or just contact me after the talk and I'll be happy to explain what we do and how we do it. Thank you very much for your attention. Any questions? What about unit testing? I hope you write them, right? Yeah, yeah, we have them, we have them. And documentation. The question is: which test runner are you using? We are only using the default one, I mean. Okay, is it pytest or nose? Right now we're using the standard test cases, but we were thinking of trying pytest; we just didn't do it yet. So right now we have very good coverage, but with test cases. What does the beer taste like? Hi, I have one question. Do you have hardware-embedded security? Because I wonder: what if someone hacks the API and makes it increase the temperature and pressure, making it a remote bomb, essentially? Or sets the MiniBrew on fire, right? Actually the MiniBrew is also intelligent and it validates all the recipes, so there is no way that anything that could break the MiniBrew is accepted. If you send a recipe to the MiniBrew and it finds it's not good, it will just answer that it could not be started, with some information of course. So yeah, there's no risk of bombs. If you want to buy it, it's safe. Any more? Have you thought about using Thrift instead of Google protocol buffers? And if yes, what's better about protocol buffers? Protocol buffers better than...? Sorry? Thrift. Apache Thrift. I don't know about it. We were pretty happy with protocol buffers because they had everything we needed and the benchmarks were very good; actually, we just found them really good. I'll stop here.
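To recap the protocol buffers idea in code: a minimal sketch, assuming a hypothetical BrewStatus message (the real MiniBrew schema is not shown in the talk). The shared .proto definition would declare numbered fields such as device_id, timestamp and temperature, and is compiled once with protoc into a Python module; on the wire only field numbers and values travel, not the repeated key strings of JSON:

    # Assuming a brew_status.proto compiled with:  protoc --python_out=. brew_status.proto
    import brew_status_pb2

    status = brew_status_pb2.BrewStatus()
    status.device_id = 42
    status.timestamp = 1468800000
    status.temperature = 64.5

    payload = status.SerializeToString()   # compact binary blob, no repeated key names

    # On the other side, with the same spec:
    received = brew_status_pb2.BrewStatus()
    received.ParseFromString(payload)
    print(received.temperature)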
Actually we didn't try that. I'll write it down. I was just wondering, what kind of data do you send from the MiniBrew to the app? Actually I don't know exactly, because that's more of a hardware thing, but there are like a zillion sensors around the MiniBrew: pressure, whether there is water flowing, what the current step is, whether there is an action pending from the user, what the next thing it's going to do is... I mean, there's a lot of information. The JSON I showed was a ridiculously small example, because the real one is actually much bigger. So yeah. Have you tested what the scalability limit of your stack is? Have you tried to topple the thing? No, not yet, not to the top. We're actually working with MiniBrews sending data; if I remember right, there are quite a few PCBs testing at the same time, and each one is sending like 60 times more information than it will send in the real world, because we're really logging everything in great detail. So right now, with a really small cloud architecture, and I mean really small, we can already handle hundreds of MiniBrews. When we do a normal production deployment, it's going to be able to handle thousands. And of course we're going to implement scalability, clustering and everything. Thank you all for coming. Thanks, Jessica.
Francisco Igual - MiniBrew: Brewing beer with Python Dutch startup MiniBrew intends to disrupt the beer market by introducing an easy-to-use beer brewing machine controlled by a mobile app and communicating with a Python backend. Users want real-time insights in their brewing process, which presented some challenges in terms of architectural design. In this talk Elements Interactive's Chesco discusses best practices and pitfalls of the IoT architecture of MiniBrew by diving into message queues, protocol buffers and full-session logging. ----- The number one alcoholic drink in the world is undoubtedly beer. With the rise of craft beers, also homebrewing has become very popular in recent years, although it is still a complex and expensive hobby. Dutch startup MiniBrew intends to change that with their revolutionary beer brewing machine, which is controlled by a mobile app and communicates with a Python API backend. In this talk Chesco will share his ideas and experiences in utilizing Python in the backend architecture for the MiniBrew project he and his team are working on at MiniBrew's development partner Elements Interactive. As many IoT projects, the ingredients for MiniBrew are a device with a limited chipset and internet connection, a backend to store the data acting as the mastermind and a mobile app to allow end users to control the brewing process. The fact that we want users to know in real-time how their beer brewing process is doing presented some challenges which required us to come up with a competitive architecture that would both give real-time status updates and not saturate the server with continuous calls. Chesco discusses best practices and pitfalls in designing and developing IoT architecture by diving into the RabbitMQ message broker, the MQTT protocol and protocol buffers. He will focus on the REST API and CMS site written in Python, elaborating on high frequency data in the apps, scalability, full-session logging and overcoming common architectural challenges.
10.5446/21144 (DOI)
Yeah, our keynote speaker for the scientific Python and PyData track today. I think most of you know Gaël already; he's a core member, one of the main contributors to the scientific stack. Please welcome Gaël. Okay. Am I on? Good. Screen is working, slides are working. Cool. So, thank you everybody for coming here. Thanks to the organizers and to Alex for the introduction. I think we'll agree that EuroPython is pretty cool, right? Yeah. Right. The cider event was really cool yesterday, so I hope you all got coffee this morning. I did. So, what I'd like to do in this talk is to address a bit the very diverse community that we have here. What this talk tries to be is a reflection on what we have in common, which is Python. I'll be talking about things you don't understand, which is my science, and things that I don't understand, which is web development. I don't know how I get into these horrible situations. Anyhow, I did at some point a PhD in quantum physics, so I think I qualify as a scientist. But these days I do computer science for neuroscience. What we try to do is link neural activity, the firing of the neurons, basically, to thoughts and cognition, like what you do when you drive a car. The way we do this is with brain imaging, and specifically we pitch this as a machine learning problem. This is what I do, and we've developed Python software for it, of course. So if you want to try this, you can actually predict things like visual stimuli based on recordings of brain activity, using this open source software and open data. You can go online, it's there. But I won't be talking about this today. On the way, we created a machine learning library, which is known as scikit-learn. I say "we" because it was many people; it was, of course, not only me or my lab. And it was a huge success. We suddenly became cool, because data science, as you might have noticed, is a fairly cool thing these days. These days, Python is the go-to language for data science. So I'd like to think a bit about how that happened, because we did build scikit-learn and others built pandas and other tools, but these were built on a solid foundation, and Python is really giving us that foundation. To set up the picture: scientists do have a reputation of being a bit different in the Python community, at least historically. You may say that they come from Jupiter. But then, to us, web developers are very different, and actually most scientists do not know what a DevOps is. I've seen these kinds of discussions: "What do you do?" "I'm a DevOps." "What does that mean?" Okay, so we're different. For instance, web developers worry about strings? Well, we worry about numbers and arrays, of course. Web developers care about databases? Well, we think in terms of arrays of numbers, of course. You might think of object-oriented programming? No, arrays are good enough. Flow control? We can actually do that with arrays, right? So there's a bit of a culture gap. All right, so let's do something together. How about we sort the EuroPython website? I mean, there are too many abstracts, 205, I can't read them all, and they're hugely varied: they go from OpenStack to making ten million dollars with a startup. So let's find our way through them using data science. And the way we'll do this is that we'll do a bit of web scraping to get the data from the website.
I could have asked the conference organizers, but that was boring, right? Then we'll do a bit of text analysis, and then we'll do data science, and we'll give you topics. The nice thing about this example is that it walks us through a good part of the whole Python stack; that's why I like it. We're going to be using things like urllib or BeautifulSoup, but also scikit-learn, and matplotlib and wordcloud for plotting. The first thing we're going to do is crawl the website. Our goal here is to get the schedule; from the schedule, I mean to retrieve the list of titles and URLs, and then we just crawl the pages and retrieve the abstracts. I've been doing this using BeautifulSoup. If you've never used it, it's an awesome library that allows you to do matching on the document object model tree of an HTML page. It's really awesome; the scientists would never have developed that. Then we're going to vectorize the text. The idea is that a text is a bunch of words, right? Or characters. So, for each document, we count how many times each word appears, and we put this in a table. We call this the term frequency, the frequency of each term. So here we have a term frequency vector describing my document, and you can see that the most common word is "a", and then "Python" is very common. Maybe that's not a very good description, because some of these terms appear all over the documents. So what we can do is take the ratio between the frequency of the term in the document and the frequency of the term over the whole database. We call this the TF-IDF: term frequency, inverse document frequency. And you can do this with scikit-learn using what's called the TfidfVectorizer. Okay. So now I feel a bit more in my comfort zone: I've gone from text, which I don't understand, to vectors of numbers. It feels better. If we look at all the documents, we have a matrix, right? A 2D array that gives us the terms in the documents: the term-document matrix. This can be represented as a sparse matrix, because most terms are present in very few documents, and we can use the SciPy stack for sparse matrices. The good news is that the scientific community, not even just the scientific Python community, has developed lots of fast operations for sparse matrices. So we're doing text mining with things developed by people who do partial differential equations and things like that. Cool. Then we want to extract topics. What we're going to do here is matrix factorization: we take this term-document matrix and factorize it into two matrices, one that gives the loadings of documents on what we are going to call topics, and the other that gives the loadings of topics on terms. So the first matrix tells me which documents are in a given topic, and the second matrix tells me which terms are in a given topic. This is a matrix factorization algorithm. Once again, I'm back to things I know as a computer scientist. In text mining we often do this with non-negativity constraints, because the fact that a term is negatively loaded on a topic might or might not mean something. You can do this with sklearn.decomposition.NMF, for non-negative matrix factorization.
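A condensed sketch of that pipeline, assuming a placeholder schedule URL and page structure (the real scraping code is linked from the generated website mentioned below):

    from urllib.request import urlopen
    from urllib.parse import urljoin
    from bs4 import BeautifulSoup
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    SCHEDULE_URL = "https://example.com/europython/schedule/"  # placeholder, not the real URL

    # Crawl: grab the schedule page, follow the talk links, keep the text of each abstract.
    schedule = BeautifulSoup(urlopen(SCHEDULE_URL).read(), "html.parser")
    talk_urls = {urljoin(SCHEDULE_URL, a["href"])
                 for a in schedule.find_all("a", href=True) if "/talks/" in a["href"]}
    abstracts = [BeautifulSoup(urlopen(url).read(), "html.parser").get_text()
                 for url in talk_urls]

    # Vectorize: sparse term-document matrix of TF-IDF weights.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(abstracts)

    # Factorize: non-negative matrix factorization into documents x topics and topics x terms.
    nmf = NMF(n_components=10)
    doc_topics = nmf.fit_transform(tfidf)
    topic_terms = nmf.components_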
That's where the magic happens. So we run this and we get word clouds. That's the representation of the first topic, and what is it about? It's about the Python language. Good news. The second topic is about, well, science and machine learning. And the third topic is something like testing. Then we can look at all the topics, and there's a bunch of different things: you have asynchronous programming, a topic about the community, one about basically conference organization, internet of things, best practices, and one I'm not showing here, which is talks in Spanish. Or Basque. Okay. Since Python is not only a numerical language, we can also output a website from this using a templating engine, and if you work a bit, I think you can get a reasonably usable website. It's on the web, you can have a look at it, and there's a link to the code that actually generates all this, so you can run it if you're interested. So, you want to try it. Okay: pip install scikit-learn. Ah, no. It complains that NumPy is not installed. All right: pip install numpy. Bang. It wants a C compiler. Now you're starting to get angry at me, right? So, it's back to the fact that we're different. Historically, we've had a lot of problems with, well, people not having Fortran compilers. Why don't you guys have Fortran compilers? Why are you laughing? Fortran is giving us really, really fast libraries. I mean, between a naive implementation of matrix operations and a Fortran-optimized one, you can get a factor of 70 of difference, and a factor of 70 is something, right? So packaging has historically been a major roadblock for scientific Python, and the reason is that we really rely on a lot of compiled code and shared libraries. We've been hitting problems like libraries not being there, or ABI compatibility issues. Now, the good news is that there has been a huge amount of progress, for two reasons. The first one is wheels, and specifically, recently, manylinux wheels, the idea being that you rely only on a conservative core set of libraries. So that is basically being solved; the problem I showed shouldn't happen anymore. It should work. You can try it, and tell me if it doesn't work. The other reason is that there's this thing called OpenBLAS, which is linear algebra not using Fortran. So that's good news. By the way, Fortran is a very modern language that is super performant, because it allows automatic vectorization, which C cannot do because it has different semantics. So don't think that Fortran is something from the 70s. Well, it is. Okay, so we're different. But if we work together, we can get really awesome things. For instance, I hope that you can use this example to get text mining into any of your websites. It should be easy to do, right? Really. So, it's magic, but you can use it. All right. Now, let me help you think a bit more like a scientist, and about how we code. And you know what? It's mostly about numerics. So we really love NumPy. You know NumPy, right? It's the numerical Python library: matrix operations, array operations. The reason we really love NumPy is because it's fast. Let's try, for instance, to compute the product of term frequencies by inverse document frequencies on 100,000 terms. We can do this with a list comprehension, and it takes six milliseconds. Now, six milliseconds may not sound like a lot, but when I run, say, a non-negative matrix factorization algorithm, I do these things many, many, many times.
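A rough reconstruction of that micro-benchmark, to try in an IPython session (exact timings will of course depend on the machine):

    import numpy as np

    n = 100000
    tf = np.random.random(n)
    idf = np.random.random(n)

    # Pure Python: loop over the elements one by one.
    %timeit [tf_i * idf_i for tf_i, idf_i in zip(tf, idf)]    # on the order of milliseconds

    # NumPy: one vectorized operation over the whole array.
    %timeit tf * idf                                          # on the order of tens of microseconds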
And actually, 100,000 terms is not big data, it's tiny data; that was actually a toy example. Now, if we do this with NumPy, the code is slightly different, and we get 70 microseconds. That's almost a factor of 100 speedup. Another thing that we really like is that, if you're used to it, it's actually much more readable. Array computing requires learning it, but once you've learned it, it's extremely readable: compare "tf * idf", to compute TF times IDF, with the list comprehension. So, it's important to realize that arrays are actually, to us, nothing but pointers. What defines a NumPy array is a memory address, a data type, a shape, and strides. The shape and the strides tell you how you can move through the array; basically, you move through the array by pointer arithmetic, going from one point to another by computing offsets. So what an array represents is regular data in a structured way. This is really important, because it matches the memory model of just about every numerical library, whether it's in C, C++ or Fortran, and actually, I believe, most languages. So it allows copyless interactions across this compiled-language border. For me, the value of NumPy is really that it's a memory model. Let's look a bit at why it's fast. If you're computing TF times IDF, one thing is that you're not getting any type checking during the operation: the dynamic typing happens once, to know what "tf * idf" should do, but then it's compiled code that runs the operation. But then, maybe most importantly, you're using direct, regular, sequential memory access. You're just grabbing your data; there's no pointer dereferencing. Well, there's one, but after that you're just grabbing chunks of data from RAM or from the cache. And that's really fast. And then your CPU, or your math kernel library, can implement things like vector operations using, for instance, SIMD instructions. That's what really makes NumPy fast; the type checking is part of it, but it's not only that. All right. So it's much faster this way, it's cool. Now, let's look at this: once the array gets big enough, suddenly we get a factor of two cost in compute time per element. Do you have an idea what this may be due to? Excellent, it's the cache. 10 to the 5 elements: that's approximately the size of a CPU cache. You can do the computation: these are probably float64, so 8 bytes each. Right. So the problem is that memory is much slower than the CPU, and your goal when you want fast calculation is to get things into the CPU as fast as possible, and here you're starting to fall out of the cache. That's bad news for array computing. But it gets worse. If we do a slightly more complex operation, say "tf * idf - 1", then the cost actually starts increasing. What's going on here? Well, if we look at what's happening, Python is computing "tf * idf" and creating an array that we don't see; I'm going to call it a temporary array. And then it subtracts one from this temporary array. So what we're doing here is really moving things in and out of the cache a lot, and we get pretty bad cache invalidation. And this is because of the Python computing model; it's just the way Python works. We can time this, and we see that there's a huge cost to this "minus one" in terms of computation. Okay. We can play a trick.
We can unroll this and do things slightly better by using an in-place operation for the second step. The idea is that we reuse the allocation of the temporary array; we don't allocate arrays twice. If we do this, it gets much, much faster, and the reason is that we've become much better with the cache: we invalidate less of it. So, on our graph, the cost still goes up with the number of elements, but because of this in-place operation, it's faster. What we have here is really a compilation problem. We want to go from one expression to another: we want to do things like removing or reusing temporaries, or we might want to actually chunk operations. If I could write for loops on chunks of the right size, it would be fast. And so, for instance, numexpr, which is mostly developed by Francesc Alted, can do this using string expressions. That's an example: numexpr evaluates "tf * idf - 1", and without us being clever, because numexpr was clever for us, you get the speedup. You get the same speedup as the in-place version. All right. Have you heard of Numba? Numba is basically a just-in-time compiler, a compiler that does these kinds of things with bytecode inspection. Another approach is a nice package called lazyarray, which basically builds an expression but doesn't evaluate it, and then evaluates it when you call it. So, basically, these are ways of going around the Python evaluation model. And I'd like to point out that this is actually not a problem that is specific to scientific computing; it's a similar problem to things like grouping and paginating SQL queries. I'm talking about things I don't know here, right. So, just to summarize the kind of thing you could give to your CTO or your CEO: if it's too small, you get overhead, the overhead of Python and of the creation of arrays; if it's too big, you fall out of cache. So your optimum lies in the middle. We probably want to be here, because that's where big data is, that's where the magic is, the money is. I see people taking pictures. This part, right. Okay. What if we need flow control? For instance, we don't want to divide by IDF when it's zero. I told you we don't use flow control. So what we do is write a test expression: it basically says "where IDF is zero", which returns an array of booleans, and there I will put TF-IDF to zero. That way we don't need flow control. Cool. Now suppose we're looking at ages in a population and I want to compute the mean age of males versus females. Then I can select the age array with a gender array and say: where gender equals male, I compute the mean, and I subtract the mean where gender equals female (that's a typo on the slide). Now, this is really starting to look like a database, right? We're really doing selections. So, on top of NumPy, there's a library called pandas that is really something in between arrays and an in-memory database. It's been hugely adopted by the community, because it's fantastic for these queries and this data massaging. For numerical algorithms it's maybe less fantastic, because anyhow we're going to fall back to NumPy. Okay. So, what you guys are going to tell me is that you're not really doing Python, right? You're doing a bit of beautiful Python code that sits on top of lots of ugly Fortran and C++ routines, and that gives you scalability, but also installation problems.
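A small sketch of these tricks, assuming the numexpr package is installed (variable names follow the talk's running example):

    import numpy as np
    import numexpr

    tf = np.random.random(100000)
    idf = np.random.random(100000)

    # Naive version: "tf * idf - 1" creates a hidden temporary, then a second array.
    naive = tf * idf - 1

    # Unrolled version: reuse the first allocation with an in-place subtraction.
    result = tf * idf
    result -= 1

    # numexpr removes the temporary (and chunks the work) for us, from a string expression.
    result = numexpr.evaluate("tf * idf - 1")

    # "Flow control" with arrays: zero out the entries where idf is zero instead of branching.
    tfidf = np.where(idf == 0, 0, tf * idf)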
But then I realized that most web development is actually some beautiful Python code that's sitting on services, like a database, that could be in C++, Java, Node.js, and God knows what. And that actually gives deployment problems. So you don't have compilation problems, you have deployment problems. We're not that different, right? We're just struggling with similar things instantiated in a different manner. These days, I like to think of NumPy as the scientist's equivalent of an ORM, and I don't use ORMs, so I don't know what I'm talking about. So, numerics, as we've seen, are really efficient because we apply them to regularly spaced data. But NumPy, the way it works, creates cache misses for bigger arrays, so we need to fight to remove temporaries and maybe chunk the data. If we do queries, they're going to be really efficient if we can use indexes or trees; typically, databases do that. But we're going to need to group queries. So all of these are compilation problems. But compilation is un-Pythonic. We could, for instance, think of expressing the computation in a query language; that's a bit what numexpr does. But I really hate domain-specific languages, and each time I try to use SQL, because I'm not a web developer, I get it wrong and I get annoyed. The other problem is that NumPy is actually extremely expressive: the amount of things that you can do with NumPy or with related tools is extremely varied. So I don't think that's a good way to go. And anyhow, I like Python; I want to be doing Python. So one approach is to hack Python, and a really cool example is Pony ORM. Who knows Pony ORM? It's web development, you should be better than me at that. What Pony ORM does is compile Python generators into optimized SQL queries. You write something that looks like a Python generator, it does bytecode inspection, well, AST inspection, I believe, grabs the AST, builds a SQL query on top of it and optimizes it via compilation and grouping. That's really cool. It's no longer really pure Python, but it's really cool. I'd also like to draw your attention to something that is happening a lot in the big data world, which is known as Spark. It's a rising star in Scala, so basically on top of the JVM, on top of the Java world. And it combines two things; people don't usually realize this, but it combines a distributed store, which is some form of database-like store, with a computing model, and it plugs them together. That allows it to do distributed computing in a reasonably efficient way. Now, the thing is that we, the PyData world, are actually much faster when the data fits in RAM, and the reason is that we represent data as regularly spaced arrays, and then we go extremely fast, whereas the Java world has a lot of pointer dereferences. So if we want to scale up, maybe we're going to have to do operations on chunks: maybe we need to chunk the data, and then, in parallel or in series, it doesn't matter, compute things on arrays that, say, fit in RAM or fit in cache. Now, this is great for certain computing patterns, things known, for instance, as extract, transform and load. But if you're doing multivariate statistics, which is what machine learning is about, you're really combining information from all over the arrays: you're really learning, say, that the interaction between the term "machine" and the term "learning", those two together, make a topic.
So the kind of compute graphs you get are horrible, and that means that things like out-of-core operations, which is basically what we're doing when we're chunking data, are not efficient: there's no data locality. So one approach is to do algorithm development, which is what I do, so I'm happy. The idea is that you use online algorithms: basically, you don't use the same algorithm, you use an algorithm that works on a stream, and then you chunk the data inside the algorithm. If you've heard of deep learning, the number one algorithm used in deep learning is stochastic gradient descent, and that's how it works; that's how people can apply deep learning, which is extremely computationally expensive, to huge datasets. So, back to data science. I've shown you how we can go from a term-document matrix to a factorization, and there's magic here, right? There's an algorithm; I did not discuss how it works, we just imported it from scikit-learn. What the scikit-learn devs do is take papers full of math expressions and, drinking a lot of coffee, turn them into this code. It's actually really hard, by the way. People were asking me yesterday: why do we still use code that was written 40 or 20 years ago in Fortran? Because writing stable numerical code is extremely hard, and no better code has been written so far. The reason that we, scikit-learn and PyData, have been able to do this is thanks to the high-level syntax of Python and everything I've presented here. The reason all this is important is that it reduces our cognitive load and allows us to do the math. All right. Let's talk a bit about something other than the numerics; let's talk about the future, and about what's going to make PyData great again, maybe. I think we've been seeing recently that data flow and computation flow are crucial. You can have the simple data-parallel problems, you can have the messy compute graphs, you can have online algorithms. And data flow engines are popping up everywhere. For instance, maybe you've heard of dask. Dask is a pure Python static graph compiler: it represents a set of function calls on data as a graph and compiles it, and then uses a dynamic scheduler on this to do parallel and distributed computing. It's really nice, except it's basically static, which means I can't add things to my graph. Another tool that people use in deep learning is Theano, and people probably don't realize it, but it has expression analysis in pure Python: it builds a graph of operations and optimizes it. TensorFlow is a C++ library, I believe, developed by Google to do deep learning, and it also builds a graph of operations. So graphs of operations are there, under the hood, in many, many different libraries. I believe that Python should really shine here, because it's reflective, we can do some form of metaprogramming, and because of the recent async developments; because I think the future is parallel and distributed computing. So, as Nathaniel Smith, who is a NumPy developer, said: Python is the best numerical language out there because it's not a numerical language. And I believe this is extremely true.
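A tiny illustration of the "graph of operations" idea, here with dask.delayed on a made-up toy computation: the task graph is built lazily first, and a scheduler executes it afterwards, possibly in parallel.

    from dask import delayed

    @delayed
    def load(i):
        # Stand-in for reading one chunk of data.
        return list(range(i * 10, (i + 1) * 10))

    @delayed
    def partial_sum(chunk):
        return sum(chunk)

    # Building the graph is cheap and lazy: no work has been done yet.
    partials = [partial_sum(load(i)) for i in range(4)]
    total = delayed(sum)(partials)

    print(total.compute())   # the scheduler walks the graph and runs the tasks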
Now, we have a bit of a problem here, which is that the API is really challenging, because we're doing algorithm design, and we can't really do what you guys have been doing in something like Django, where there's basically an inversion of control, and you're no longer writing imperative code as you would; you're buying into a framework. And I don't believe that we can write really complex algorithms like this; there's just too much cognitive overload. But it's just an API design problem. We'll solve it. So, in terms of ingredients for future data flows, I think distributed computation and runtime analysis are really important things. And for this, I think reflexivity is central. It's really useful for debugging, by the way; if I'm not in Python, the number one thing I miss is the ability to debug. And I can debug in a high-level way, which means I can debug things like numerical instability in my algorithm. That's really hard to do. You've got something that blows up somewhere in terms of numerical precision. Python is fantastic for debugging this. I can do interactive work, which is how most data scientists work. This already enables us, and will enable us, to do more code analysis, which is going to be really important for being efficient. And it gives us persistence, which is extremely important for parallel computing. Because when you're doing parallel and distributed computing, you need to move data, well, you need to move objects around between different computers, and you need to move code. And for this, you need reflexivity. So, we've been relying on pickle. Distributed computing has been relying hugely on pickle. And the idea is that it's used to distribute the code and the data between the different workers. But we can also use it to serialize intermediate results. So that's one way of doing computation on data where all the intermediate results might not fit in RAM. It can be done very easily with Python. And another thing that we do is that we actually use pickle to get a deep hash, in the sense of a cryptographic hash, of any data structure. So that's really nice because it allows you to see if things have changed or not, to avoid re-computation. Now, the problem is that pickle is actually very limited, the way it's implemented in the core library. For instance, there's no support for lambdas. And these things are not fundamental limitations. They're trade-offs, basically. And so there are variants of pickle, like dill or cloudpickle. And I must say that I would really like one of those two, or maybe ideas from one of those two, to go into the standard library, because it's actually hugely limiting for parallel computing not to be able to pickle everything. So I realize we're never going to be able to pickle absolutely everything. And I also realize that I can write code that always pickles. That's what I do. But when I give this to a not very advanced user, he will at some point write code that doesn't pickle. So for me, by the way, this is more important than the GIL. That may be surprising, but when you get to know distributed computing, well, these things are a problem. Data exchange, basically. Now, we have this small library that we call joblib that gives us ingredients for data flow computing. And one thing it does is a very simple parallel computing syntax, which is basically syntactic sugar for parallel for loops. And under the hood, it uses threading or multiprocessing or just about any back end you can plug in.
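A minimal sketch of that parallel-for-loop sugar, along the lines of the joblib documentation's own example:

    from math import sqrt
    from joblib import Parallel, delayed

    # Syntactic sugar for a parallel for loop: each sqrt(i ** 2) call is dispatched
    # to a worker (threads or processes, depending on the backend in use).
    results = Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))
    # results == [0.0, 1.0, 2.0, ..., 9.0]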
You can plug your own back end into joblib as well. It does fast persistence; it's basically a subclass of pickle that does clever things for NumPy arrays. And it gives primitives for out-of-core computation. The reason I'm pointing this out is that it's actually a very non-invasive syntax and paradigm. So with a library like joblib, we can write algorithms. And it's actually used inside scikit-learn, even though you may not know it. It's been designed to be fast on NumPy arrays. And it's getting more and more of an extendable back end system. So I'm looking forward to a world where we can use things like Celery to basically distribute computation from scikit-learn in more of a web development environment. I don't know if it's a good or a bad idea, but I'd like to try it. So I think the Python VM is great. It's awesome. And one of the reasons it's great is because it's simple, which is what a lot of people have been criticizing. So, for instance, the Java world tells us that they have software transactional memory, and it's really cool. It would be nice for Python, but I personally really need to use foreign memory. I need it. And interestingly, Java has recently gained a malloc to allocate, basically, foreign memory. We'd like better garbage collection. We really would. But just about every C extension relies on reference counting. And the reason is that it's actually very easy to manipulate the reference counting if you're not sitting inside the VM. So basically, the Python VM is something that I can manipulate without being inside it, which means that it's really great for connecting to compiled languages. And talking to people at the conference, many people actually use this. Many people use libraries that have been developed in another language through Python. And I'd like to draw a bit of attention to Cython. Who knows Cython? Good. Who uses Cython? Good. It really gives us the best of C and Python. You can add types for speed. And they've done things so right that when you type a NumPy array, it basically becomes a float*, a float array in C. So super fast. But you can also use it to bind external libraries. And it's surprisingly easy. The good thing is suddenly you're working with C libraries, or you're working with C-like code, without any malloc, free, or pointer arithmetic, which is for me the number one problem of these languages. So I see this as an adaptation layer between the Python VM and C. And it's really a fantastic tool. By the way, I think everybody should be writing C extensions using Cython, because it's an abstraction over the CPython C API. So for instance, you can write code that's very readable and that compiles with Python 3 and Python 2, even though there have been a lot of changes in the CPython C API. So it's also good for the CPython core developers, because they'd like to change things in the C API, and if everybody writes Cython, they will be able to, because Cython will do the impedance matching. Okay. So we scientists can work with web developers. And we really actually get to love each other, I believe. Actually, I'm really serious here. I really enjoy people who are not doing science in the Python community: first, they teach me things; second, they make fantastic tools that I can use. And so I'd like our tools to be useful for you too. And I'd like to point out that scikit-learn is actually really easy machine learning. It's really a very simple syntax.
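The pattern described in the next paragraph looks roughly like this; the particular classifier and dataset here are just illustrative choices, not something prescribed in the talk:

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)            # data as a matrix, labels as a vector
    X_train, X_test, y_train, y_test = train_test_split(X, y)

    clf = RandomForestClassifier()                 # a semi-black-box estimator object
    clf.fit(X_train, y_train)                      # learn from the training matrix
    predicted = clf.predict(X_test)                # predict labels for unseen data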
Basically, you import an object, and it's a magic object that will do classification, recognition of things. You instantiate it. And then you give it data. So it's basically matrices, right? We only do matrices. And so you have to figure out how to convert your own data to matrices. And then you call fit. And then you call predict. Okay. So, one of the successes of scikit-learn is this encapsulation. People have really loved the fact that the classifiers are semi-black boxes, so they can use them without fully understanding them. So that's another thing that Python has given us: an object model, a really, really cool model, that allows you to do object-oriented programming without, say, crazy class diagrams. And another thing that we've used hugely is what people have called documentation-driven development. So there was a talk about this. The idea is to try to make this API as simple as possible. What I'm trying to get at here is that we're trying to give you a high-level, simple API to reduce your cognitive load, just like Python and NumPy reduce our cognitive load when we're implementing these algorithms. So we're all doing very different things here. And we can all benefit from each other. But we can do this only if we're really careful to reduce each other's cognitive load on what the other does not understand. I think that's extremely important. So it's important to be didactic outside of one's own community. And actually, Python is really good at this. The Django documentation is known as being really excellent. Python worries about syntax being beautiful. So to do this, we need to do things like avoiding jargon. Machine learning is really bad; it's full of jargon. We in scikit-learn try not to have too much. We need to prioritize information. And so, for instance, students that are applied math students and learn about numerics, I hate to tell you, they don't care about Unicode. Even the French ones that have accents in their first name. One recommendation I have for people that do API design is to build your documentation upon very simple examples. And examples that run. So one thing that we do is that we have this thing called Sphinx-Gallery, which basically uses Sphinx (Sphinx is awesome) to build our documentation while running all the examples. So it means that the examples must run. They must run fast. It means they must be small enough to run. And I think that's helped a lot, both the documentation but also the API design of scikit-learn. All right. To wrap up, I think it's because of the interaction between people like scientists and people who are not scientists, whether they're web developers or DevOps or anything. Have I been censored? Oh, okay. Cool. Yeah. What was I saying? Well, anyhow, the Python language and its VM are the perfect tool to manipulate low-level concepts, whether they're arrays, or actually things like trees in C, with high-level wording. And I personally think, it's a personal opinion, but this has been key to the recent success of Python. Python has been growing hugely. And when you look at how people are using it, at some point they're plugging into something low-level very often. Dynamism and reflexivity are crucial because they enable metaprogramming and debugging. But we also find that we have a need for compilation, for speed. So then there's this tension between dynamism and compilation. And I have the feeling it's everywhere. It's also in web development with, say, compiling SQL queries.
And I'm extremely excited about the PEPs that Victor is pushing forward, like guards on internal structures to allow checking at runtime for modifications. That will allow any kind of hacks that we do on the code to be invalidated if the environment changes. Or the PEP for function specialization. Finally, I think that PyData has gained and will gain hugely from the database world and from the concurrency work that is developed a lot in the web and DevOps worlds. But I think it can also give back things like knowledge engineering and AI, which are really, you know, growing hugely. And just in case you haven't noticed, data science is disrupting just about every job that you're doing. So it's cool that there's data science in Python. That's all I have. Thank you. Thank you very much, Gaël. Yeah, great keynote, great insights and a little different world. So questions. Raise your hand. Mike. Give the mic to Mike. Thanks. Very interesting keynote. One thing I just, it's not a question, just a statement. The scientific world was a very early adopter of Python 3. I think several years ago, most of the scientific stack was in Python 3, which is a great thing. I think you can use pretty much any good scientific package in Python 3. That's something I want to add. Yeah. I agree. And the biggest cost of Python 3 at first was the change of the CPython C API. And so actually people still, in niche applications, have code that doesn't run on Python 3 because of the C API. But all the main libraries, by a vast margin, run on 3. And everything I do runs on 3 and 2. Question? Okay. You probably get that a lot, but I will ask anyway. Have you heard about PyPy? I was actually trolling a bit in my talk. Yeah, I know a lot about PyPy. So to give a bit of background, my brother studied language theory. So we've had crazy discussions all the time. So yeah, I know a lot about these things. And part of what I wanted to say in my talk was the fact that it's not only about type checking. NumPy is not only about type checking. It's about the memory model. And I think, by the way, PyPy has progressed hugely, in the sense that it is no longer trying to say, I'm going to control the memory for everything, which historically was a big roadblock for us. I mean, I could not believe that PyPy would be useful for scientific computing, because for a long time I heard that the end goals of PyPy were things like software transactional memory, which is really cool by the way, but would cost us a lot in our world. And the other thing is we're not going to get rid of the compiled code, because there is so much history in making those algorithms really good, and it's extremely hard. But I do believe that what the PyPy world is doing, which is a lot of analysis on the code, is extremely, extremely useful. That I absolutely believe. Thank you very much. Any more questions? Any in the back? Okay. Sorry, Daria's faster. I'm sorry. But yeah, go ahead. You keep referring to our world, your Python world. Is the division that clear? Not for me. Not for me at all. I've got personal friends in all the communities. I use all kinds of different tools, but I'm afraid there is a division. And I'd like to think that it's fueled by different trade-offs. And I'd like to fight it. By the way, I don't think it's useful.
But when you hear things like Conda, which is a packager for Python and other things, the way I think of it is that the reason it was created was because the scientific crowd was unable to explain the struggles it was having with the packaging tools in Python, and just went on and did its own stuff. Now, the good thing is that some people actually came back and worked on it, and now, I believe, things should be able to work fine. But that's one example of the division. And I think it exists, and I think we need to fight it, because our value, and that's something I really believe in, our value is the fact that we're diverse and we're able to work together. Great question. How do you see the scenario in five, ten years, Python compared to other languages like R or Wolfram or things, new things? So R or the Wolfram language. So you're talking about the scientific world. Yeah. All right. I'm going to be extremely opinionated. I think R will die. So to give you background, when we started scikit-learn, almost seven years ago, everybody would walk up to us and say, you're crazy. Everybody does R for machine learning. Everybody does Matlab. Okay. Seven years down the line, nobody is mentioning this. So by the way, R is awesome, not as a language, it's a horrible language, but in terms of libraries. I told you, numerical algorithms are really hard. Well, R has a crazy amount of them. And for me, as a statistician, R is the reference. But the value of data analysis is not only in numerics. It's in combining things. And I think we have an edge here. So Matlab, yeah, I think we're eating Matlab slowly. And they're fighting back. I'm getting emails on a monthly basis: get a training, come to MathWorks, see how they're cooler than Python. But the fact that we're going up, whereas they're pouring money into fighting us, is telling me something. Maybe it's going to take a bit of time. But in the scientific world, I mean, the strong contender would be Julia. Julia is a typed language that is able to do fantastic type inference and compile to extremely fast code. It uses LLVM. I really don't like it. I mean, it's a fantastic language. It's awesome. Fantastic language design. I really don't like it because it's a numerical language. And they don't think of it that way, but the whole community is a numerical community. And I'm worried that it's going to paint itself into a corner. Yeah, Gaël, thanks for the fantastic talk and the fantastic library. Scikit-learn is only one of the libraries in the scikit family. There is also scikit-image and other scikits. What is your relationship with the scikit family? So that's very historical. Back in something like 2008, there used to be a scikits namespace package. If you guys remember namespace packages, they're one of my nightmares. In SciPy. And that's how we all started. And then we took it out of SciPy because SciPy was getting too big. And then we got rid of the namespace package. It used to be called scikits.learn, and we turned it into scikit-learn. And it means scientific kit. It's very historical. But what's your relationship? I guess we're friends. We're good friends. Okay, last question. Fabio. Yeah. It's sort of a question and sort of a point. One specific thing about Conda that is beyond Python and beyond pip is where people come to struggle with non-Python-specific stuff. So if you want a database, or a specific stack with Node.js and Python.
You can actually do that. So it actually sits on top of Python. It's more like apt-get than pip. So in this case, I'm not really sure Python should have something in the standard library that actually does that. What's your opinion on that? So I completely agree. The comment is that Conda is more than Python, basically. And I know this, by the way. But historically, it's not been marketed like this. I mean, I've heard way too much "don't use pip, use conda". Which is, I mean, I hear this in my lab, by the way. And I fight it. And the other thing is I haven't seen much work go from Conda to, I'm not even talking about contributing back to pip, but I'm talking about explaining what was needed. And I think that's extremely important. I would really like conda-forge, and I'm going to make a bold statement here, but I would like conda-forge for Python to either die or to push automatically to PyPI. Pushing automatically to PyPI would be awesome. But we need one place where we can tell everybody, go and get your stuff. And we need this place to be good, and we need to work together. And in a sense, Conda has achieved this, because one thing it has created is, maybe, an insight. At least it's shown that you can do things better. But you need to go all the way back and get the improvements back into the wider Python ecosystem, because it's all going to benefit us. Okay. So we have one more thing to announce. So please don't run away after you've given a fantastic and enthusiastic applause for Gaël's keynote. Thank you very much, Gaël.
Gaël Varoquaux - Scientist meets web dev: how Python became the language of data Data science is a hot topic and Python has emerged as an ideal language for it. Its strengths for data analysis come from the cultural mix between the scientific Python community and more conventional software usage, such as web development or system administration. I'll show how and why Python is an easy and powerful tool for data science. ----- Python started as a scripting language, but now it is the new trend everywhere and in particular for data science, the latest rage of computing. It didn't get there by chance: tools and concepts built by nerdy scientists and geek sysadmins provide foundations for what is said to be the sexiest job: data scientist. In my talk I'll give a personal perspective, historical and technical, on the progress of the scientific Python ecosystem, from numerical physics to data mining. What made Python suitable for science; How could scipy grow to challenge commercial giants such as Matlab; Why the cultural gap between scientific Python and the broader Python community turned out to be a gold mine; How scikit-learn was born, what technical decisions enabled it to grow; And last but not least, how we are addressing a wider and wider public, lowering the bar and empowering people. The talk will discuss low-level technical aspects, such as how the Python world makes it easy to move large chunks of numbers across code. It will touch upon current exciting developments in scikit-learn and joblib. But it will also talk about softer topics, such as project dynamics or documentation, as software's success is determined by people.
10.5446/21145 (DOI)
title is An Introduction to Deep Learning. So please welcome Geoff. Good morning, thanks for coming. Okay, I'd like to start by thanking Tariq Rashid for giving his excellent gentle introduction to neural networks. I'm going to build upon that and hopefully show you how to develop some of the networks that have been used to get to the really good computer vision results that we've seen recently. So our focus is mainly going to be on image processing this morning. And in this talk I'm going to cover more the principles and the maths behind it than the code. And the reason is it's quite a big topic, there's quite a lot to go through, and I've got to squeeze it into an hour. So a quick overview of what we're going to go through. We are going to discuss the Theano library, which is the one I personally use, although there are also libraries like TensorFlow. We're going to cover the basic model of what a neural network is, just building on Tariq's talk. Then we're going to go through convolutional networks. And these are some of the networks that have been getting the really, really good results that we've seen recently. Then we'll look briefly at Lasagne, which is another Python library that builds on top of Theano to make it easier to build neural networks. We'll discuss why it's there and what it does. And then I'll give you a few hints about how to actually build a neural network, how to actually structure it, what layers to choose, just so you have a rough idea of how to train them, just a few hints and tips to practically get going. And then finally, time permitting, I'll go through the OxfordNet VGG network, which is a pre-trained network that you can download under Creative Commons from Oxford University. You can use that yourself, and I'll go through why it's useful to sometimes use a network that somebody else has trained for you and then tweak it for your own purposes. Now the nice thing is there are some talk materials. This is based off a tutorial I gave at PyData London in May. And if you check out the GitHub repo there, Britefury/deep-learning-tutorial-pydata2016, you'll find everything there. All the notebooks are viewable on GitHub, so you should be able to see everything in your browser. I would ask, though, that please, please, please do not try and run this code during the talk. And the reason is that when you run the stuff that uses the VGG-Net OxfordNet models, it will need to download a 500 meg weights file, and you will kill the Wi-Fi if you start doing that. So please do that in your own time, if that's okay. Also, yep, if you want to get more in depth about Theano and Lasagne, I'll put up some slides. If you check out my Speaker Deck profile, there'll be this talk's slides, and there'll also be intros to Theano and Lasagne as well. So that will give you a breakdown of Python code using Theano and Lasagne, what it does and how to use it. And furthermore, if you don't have a machine available or you don't want to set it up yourself, I've set up an Amazon AMI for you, so if you want to go use one of their GPUs, you can go and grab a hold of that and run all the code there. Everything's all set up, and I hope it's all relatively easy to get into. All right, now time to get into the meat of the talk. And what better place to start than ImageNet? ImageNet is an academic image classification dataset. You've got about a million images. I think it might be even more now.
They're divided into a thousand different classes, so you've got various different types of dog, various different types of cat, flowers, buckets, whatever else, whatever you can come up with, rocks, snails. And the ground truths, as in, you've got a bunch of images that have been scraped off Flickr, and you've got to provide a ground truth of what each image is, and the way all those were prepared is they went and got some people to do it over Amazon Mechanical Turk. Now, the top five challenge. What you've got to do is produce a classifier that, when given an image, will produce a probability score of what it thinks it is, and you score a hit if the ground truth class, the actual true class, is somewhere within your neural network's (or whatever it is you used) top five choices for what it thinks the image is. And in 2012, the best approaches at the time used a lot of handcrafted features. For those of you familiar with computer vision, these are things like SIFT, HOGs, Fisher vectors, and they stick them into a classifier, maybe a linear classifier, and the top five error rate was around 25%. And then the game changed. Krizhevsky, Sutskever and Hinton, in their paper ImageNet Classification with Deep Convolutional Neural Networks, bit of a mouthful, managed to get the error rate down to 15%. And in the last few years, more modern network architectures have gotten down further. Now we're down to about 5% to 7%. I think people like Google and Microsoft even got down to 3% or 4%. And I hope that this talk is going to give you an idea of how that's done. Okay, let's have a quick run over Theano. Neural network software comes in two flavors, or it's kind of on a spectrum, really. You've got the kind of neural network toolkits at the quite high level at one end, and at the other end, you've got expression compilers. With a neural network toolkit, you specify the neural network in terms of layers. Expression compilers are somewhat lower level, and you're going to describe the mathematical expressions, the kind Tariq covered, that are behind the layers and that effectively describe the network. And it's a more powerful and flexible approach. Theano is an expression compiler. You're going to write numpy-style expressions, and it's going to compile them to either C to run on your CPU, or CUDA to run on an NVIDIA GPU if you have one of those available. And once again, if you want to get an intro to that, there are my slides that I mentioned earlier. There's a lot more to Theano, so go check out the deeplearning.net website to learn more about it; it gives you the full description of the API and everything it'll do, some of which you may want to use. There are, of course, others. There's TensorFlow, developed by Google, and that's gaining popularity really fast these days, so that may well be the future. We'll see. Okay. What is a neural network? Well, we're going to cover a fair bit of what Tariq covered in the previous talk, but it's got multiple layers, and the data propagates through each layer and is transformed as it goes through. So we might start out with an image of a bunch of bananas. It's going to go through the first hidden layer and get transformed into a different representation, and then get transformed again in the next hidden layer. And finally, assuming we're doing an image classifier, we end up with a probability vector.
Effectively, all the values in that vector sum up to one, and our predicted class is the element in the probability vector with the highest probability. Okay. And this is what our network kind of looks like. We see there are weights, which you saw in the previous talk, that connect all the units between the layers, and you see our data being put in on the input and propagating through and arriving at the output. Breaking down a single layer of a neural network, we've got our input, which is basically a vector, an array of numbers, multiplied by our weights matrix, which is the crazy lines, and then we add a bias term, which is simply an offset, a vector you add, and then you have our activation function or non-linearity; those terms are roughly interchangeable. And that output, the layer activation, is what then goes into the next layer, or the output if it's the last layer in the network. Mathematically speaking, x is our input vector, y is our output. We represent our weights by the weights matrix W; that's one of the parameters of our network. Our other parameter is the bias b. We've got our non-linearity function f. Normally, these days, that's the rectified linear unit, ReLU. It's about as simple as they come: it's simply max(x, 0). That's the activation function that's become the most popular recently. In a nutshell, y = f(Wx + b), repeated for each layer as the data goes through. And that's basically a neural network. Just that same formula repeated over and over, once for each layer. And to make an image classifier, we're going to take the pixels from our image, splat them out into a vector, stretching them out row by row, run them through the network, and get our result. So in summary, our neural network is built from layers, each of which is a matrix multiplication, then our bias, then our non-linearity. Okay. And how to train a neural network. We've got to learn values for our parameters, the weights and the biases for every layer. And for that, we use back propagation. We're going to initialize our weights randomly (there'll be a little more on this later), and we're going to initialize the biases all to zero. And then for each example in our training set, as Tariq said, we've got to evaluate our network's prediction, see what it reckons the output is, and compare it to the actual training output, what it should produce given that input. We've got to measure our cost function, which is, roughly speaking, the error: the difference between what our network is predicting and what it should predict, the ground truth output. Now, the cost function is kind of important, so we'll just discuss that a little bit. For classification, where the idea is, given an input and a bunch of categories, which category best describes this input, our final layer uses a function called softmax as its non-linearity or activation function, and it outputs a vector of class probabilities. The best way of thinking about it is: let's say I've got a bunch of numbers and I sum them all up and I divide each element by the sum. That will give us roughly the proportion or probability, assuming all of our numbers to start with are positive. But they can also go negative in a neural network, so the softmax adds one little wrinkle. Before summing, what we do is we take our input numbers, we compute the exponential of them all, and then we sum those up and we divide each exponential by the sum of the exponentials.
That's softmax. And then our cost function, our error function, is negative log likelihood, also known as categorical cross-entropy. To compute that, let's say you have an image of a dog: you run the image through the network, you see what the predicted probability is for dog, and you take the log of that probability. If the predicted probability is one, the log of that is going to be zero. If it's like 0.1, it's going to be quite strongly negative. You negate that log, and so the idea is, if it's supposed to output dog, it should give a probability of one. If it's giving a probability of less than that, the negative log will be quite positive, which indicates high error. So that's your cost. Now, regression is different. Rather than classifying an input and saying which category closely matches it, you're trying to quantify it. You're measuring the strength of something, or the strength of some response. Typically, your final layer doesn't have an activation function; it's just the identity, linear. And your cost is going to be the sum of squared differences. Then what we've got to do with our neural networks is reduce the cost, reduce the error, using gradient descent. And what we have to do is compute the derivative, the gradient, of the cost with respect to our parameters, which is all our weights and all our biases within our layers. The cool thing about it is that Theano does the symbolic differentiation for you. I can tell you right now that you don't want to be in a situation where you have this massive expression for your neural network, and you've got to go and compute the derivative of the cost with respect to some parameter by hand, because you will make a mistake. You will flip a minus sign somewhere, and then your network won't learn, and debugging it will be a goddamn nightmare because it will be really hard to figure out where it's gone wrong. So I would recommend getting a symbolic mathematical package to do it for you, or using something like Theano that just handles it all. Literally, you write that code there: d-cost by d-weight is theano.grad(cost, weight). And other toolkits do this as well, just to save you time and sanity. Then you update your parameters. You take your weights and you subtract the learning rate, lambda, multiplied by the gradient: w := w - lambda * (dcost/dw). And I'd generally recommend that the learning rate should be somewhere in the region of 1e-4 to 1e-2, something in that region. Also, you typically don't train on one example at a time. You're going to take what's known as a mini-batch of about 100 samples from your dataset, compute the cost of each of those samples, average all the costs together, and then compute the derivative of the average cost with respect to all of your parameters. The idea is that you get about 100 samples processed in parallel, and that means when you run it on a GPU, it tends to speed things up a lot because it uses all of the parallel processing power of the GPU. Training on all the examples in your entire training set is called an epoch. And you often run through multiple epochs to train your network, something like 200 or 300.
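Putting those pieces together, here is a hedged sketch of what this looks like in Theano for a single softmax layer; the shapes and the learning rate are arbitrary choices for illustration:

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.matrix('x')        # a mini-batch of input vectors
    t = T.ivector('t')       # the ground-truth class indices for that mini-batch

    W = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX))
    b = theano.shared(np.zeros(10, dtype=theano.config.floatX))

    y = T.nnet.softmax(T.dot(x, W) + b)                    # predicted class probabilities
    cost = T.nnet.categorical_crossentropy(y, t).mean()    # negative log likelihood

    # Theano does the symbolic differentiation for us.
    d_W, d_b = T.grad(cost, [W, b])

    lr = 0.01
    train_step = theano.function(
        [x, t], cost,
        updates=[(W, W - lr * d_W), (b, b - lr * d_b)])    # one gradient descent update per call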
So, in summary: take a mini-batch of training samples, run them through the network, measure the average error or cost across the mini-batch, use gradient descent to modify the parameters to reduce the cost, and repeat the above until done. All right. The multi-layer perceptron. It's a simple neural network architecture, and it's nothing we haven't seen so far. It uses only what are known as fully connected or dense layers. And in a dense layer, each unit is connected to every single unit in the previous layer. And to carry on, to pick up from Tariq's talk, the MNIST handwritten digits dataset is a good place to start. A neural network with two hidden layers, both with 256 units, after 300 iterations gets about 1.83% validation error. So, it's about 98.17% accuracy, which is pretty good. However, these handwritten digits are quite a special case. All the digits are nicely centered within the image. They're roughly in the same position, scaled to about the same size. And you can see that in the examples there. And our fully connected networks have one weakness: there's no translational invariance. If you want to, say, take an image and detect a ball somewhere in the image, what it effectively means is that the network will only learn to pick up the ball in the positions where it's been seen so far. It won't learn to generalize it across all positions in the image. And one of the cool things we can do is take the weights that we learn, take one of the neurons or units in the first hidden layer, take the strengths of the weights that link it to all the pixels in the input layer, and visualize them; that's what you end up with. So, you see that the weights of your first hidden layer effectively form a bunch of little feature detectors that pick up the various strokes that make up the digits. So, it's kind of cool to visualize, but it shows you how the dense layers are translationally dependent. And so, for general imagery, like, say, if you want to detect cats, dogs, various eyes and everything that makes up the various little creatures and all the various things, you've got to have a training set large enough to have every single possible feature in every single location of all the images. And you've got to have a network that's got enough units to represent all this variation. Okay, so you're going to have a training set in the trillions, a neural network with billions and billions of nodes, and you're going to need about all the computers in the world and the heat death of the universe in order to train it. So, moving on, convolutional networks are how we address that. Convolution. It's a fairly common operation in computer vision and signal processing. You're going to slide a convolutional kernel over the image. And what you do is you imagine, say, the image pixels are in one layer; you're going to take your kernel, which has got a bunch of little weights, a bunch of little values, and you're going to multiply the value in the kernel by the pixel underneath it for all the values in the kernel. And you're going to take those products and sum them all up. You're going to slide the kernel over one position and do the same. Slide it over again, do the same. And what you end up with is an output.
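As a minimal sketch of that sliding-kernel operation using SciPy, where the kernel is just an example vertical edge detector and the image is random noise standing in for real data:

    import numpy as np
    from scipy.signal import convolve2d

    image = np.random.rand(28, 28)

    # A 3x3 kernel that responds to vertical edges.
    kernel = np.array([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]])

    # Slide the kernel over the image, multiplying and summing at each position;
    # 'valid' keeps only positions where the kernel fits entirely inside the image.
    response = convolve2d(image, kernel, mode='valid')   # shape (26, 26)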
Convolutions are often used for feature detection, so, a brief detour: Gabor filters. If we produce these filters, which are a product of a sine wave and a Gaussian function, you end up with these little soft circular wave things. And if you do the convolution, you'll see that they act as feature detectors that detect certain features in the image. So you can see how it roughly corresponds. You can see the ones with the vertical bars there roughly pick out the vertical lines in the image of the bananas. The horizontal bars pick out the horizontal lines, and you can see how convolution acts as a feature detector. And they're used quite a lot for that. So, back on track to convolutional networks; we'll have a quick recap. That's what our fully connected layer looks like, with all of our inputs connected to all of our outputs. In a convolutional layer, you'll notice that the node on the right is only connected to a small neighbourhood of nodes on the left. And the next node down is only connected to a small corresponding neighbourhood. The weights are also shared, so it means you use the same value for all the red weights, and for all the greens, and for all the yellows. And the values of these weights form that kernel, a feature detector. And for practical computer vision, whether you're producing the kernels manually or learning them like in a convolutional network, more than one kernel has to be used, because you've got to extract a variety of features. It's not sufficient just to be able to detect only the horizontal edges. You want to detect the vertical ones and all the other various orientations and sizes as well. So you've got to have a range of kernels, different weight kernels. And the idea is you've got an image there with one channel on the input and about three channels on the output. Or, in a typical convolutional network, you might actually have about 48 channels or 256. I'll show you some examples later of some architectures, and you end up with some very high dimensionality in the channel output. Okay. So each kernel connects to all pixels, in all channels, in the previous layer. So it draws in data from all channels in the previous layer. However, the maths is still the same. And the reason is that a convolution can be expressed as a multiplication by a weight matrix; it's just that the weight matrix is quite sparse. But the maths doesn't really change, conceptually. And that's fortunate for us, because it means that the gradient descent and everything we've done so far still just works. As for how you go about figuring that out, I just recommend letting Theano do that for you. I wouldn't hurt myself trying; I wouldn't recommend it. And there's one more thing we need: down-sampling. So typically, if you've worked in Photoshop or GIMP or any of these other image editing packages, you might want to shrink an image down by a certain amount, say by 50%. You want to shrink the resolution. And for that we use one of two operations, either max pooling or striding. Max pooling: you can see that the image up there is divided into four colour blocks. Say the blue block has four pixels. What we do is we take those four pixels, we pick the one with the maximum value, and we use that. So rather than averaging, we just take the maximum. And that's max pooling. It down-samples the image by a factor of P, where P is the size of the pooling.
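In NumPy terms, 2x2 max pooling of a single-channel feature map can be sketched like this; the feature map is random data just for illustration:

    import numpy as np

    x = np.random.rand(24, 24)     # a single-channel feature map

    # 2x2 max pooling: split into non-overlapping 2x2 blocks and keep the maximum
    # of each block, halving the resolution in both directions.
    pooled = x.reshape(12, 2, 12, 2).max(axis=(1, 3))   # shape (12, 12)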
Max pooling operates on each channel independently. The other option is striding. What you do there is you effectively pick a sample, skip a few, pick a sample, skip a few. It's even simpler. It's often quite a lot faster, because a lot of the convolution operations support strided convolutions, where rather than producing the output and throwing some away, they just effectively jump over by a few pixels each time. So that's faster, and you get similar results. So, moving on. Yann LeCun used convolutional networks to solve the MNIST dataset back in 1995. And this is a simplified version of his architecture. You've got this 28 by 28 input image, one channel because it's monochrome. You've got 20 kernels, five by five, so they reduce the image to 24 by 24, but it's now 20 channels deep. Max pool: shrink it by half. Then we have 50 kernels, five by five. And now we've got a 50-channel image, eight by eight. Max pool: shrink it by half. And then we flatten it and do a fully connected dense layer of 256 units. And finally, fully connected to our 10-unit output layer for our class probabilities. After 300 iterations over the training set, we get 99.21% accuracy. A 0.79% error rate is not too bad. And what about the learned kernels? It's interesting to think about what the feature detectors are picking up. If you look at a big dataset like ImageNet, this is the Krizhevsky paper I mentioned right at the beginning; these are the kernels that get learned by the neural network. And for comparison, you can see the Gabor filters over there. Now, the reason the colour ones are at the bottom is just because of the way they did it, involving two GPUs and the way they split it up. If you look at the top row, you can see how it's picked up all these little edge detectors of various sizes and orientations. That's the first layer. Zeiler and Fergus took it a little further and figured out a way of visualizing how the kernels in the second layer respond. So you can see you've got kernels there that respond to slightly more complex features, things like squares and curved texture, little sort of eye-like features or circular features. And then further up, at about layer three, you get somewhat more complex features still, where you've got things that recognize simple parts of objects. Okay, so this gives you an idea of roughly how the convolutional networks fit together. They operate as feature detectors where each layer builds on the previous one, picking up ever more complex features. Okay, now I'll move on to Lasagne. If you want to specify your network using mathematical expressions in Theano, it's really powerful, but it's quite low level. If you have to write out your neural network as mathematical expressions and numpy expressions each time, it can get a bit painful. Lasagne builds on top of it and makes it nicer to build networks using Theano. And with its API, rather than specifying mathematical expressions, you construct the layers of the network. But you can also then get the Theano expressions for its output or loss. It's quite a thin layer on top of Theano, so it's worth understanding Theano.
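As a rough sketch (not the exact notebook code from the talk), that simplified LeNet-style architecture looks something like this in Lasagne:

    import lasagne
    from lasagne.layers import InputLayer, Conv2DLayer, MaxPool2DLayer, DenseLayer

    # Mini-batches of 28x28 single-channel images.
    net = InputLayer(shape=(None, 1, 28, 28))
    net = Conv2DLayer(net, num_filters=20, filter_size=5)   # 20 kernels, 5x5
    net = MaxPool2DLayer(net, pool_size=2)
    net = Conv2DLayer(net, num_filters=50, filter_size=5)   # 50 kernels, 5x5
    net = MaxPool2DLayer(net, pool_size=2)
    net = DenseLayer(net, num_units=256)                    # fully connected layer
    net = DenseLayer(net, num_units=10,
                     nonlinearity=lasagne.nonlinearities.softmax)

    # From here you get Theano expressions back, to build your cost and updates on.
    prediction = lasagne.layers.get_output(net)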
But the cool thing about having one of these mathematical expression compilers underneath is that if you want to come up with some crazy new loss function, or do something new and inventive, whatever it is you like, you can just write out the maths and let Theano take care of figuring out how to run it using NVIDIA's CUDA. So you don't have to worry about it yourself. It's quite easy to get going. You just do it all in Python and it all just works great. So that's why I happen to like it. And once again, slides are available if you want to go and dive in in more detail. Okay. As for how to build and train neural networks, I think we'll start out with a bit about the architecture. If you want a neural network that's going to work, I'm going to try and give you some rough ideas of what kind of layers you want to use, and where, in order to get something that's going to give you good results. The early part of the network, just after your input layer, is going to be blocks consisting of some number of convolutional layers, two, three, four convolutional layers, followed by a max pooling layer that effectively down-samples. Or, alternatively, you could also use striding. And then you have another block the same. And you'll note the notation that's quite common in the academic literature: you specify the number of filters, the number of kernels, and then the 3 specifies the size. So you often use quite small filters, only three by three kernels. MP2 means max pooling that down-samples by a factor of two. And note that after we've done the down-sampling, we double the number of filters in the convolutional layers. And then finally at the end, after your blocks of convolutional and max pooling layers, you're going to have the fully connected, also denoted dense, layers, where typically, if you've got quite a large dimensionality coming out of there, you'll want to work out what that dimensionality is at that point and then roughly maintain it, or perhaps reduce it a bit, in your fully connected layer. You could have two or three fully connected layers if you like. And then finally you've got your output. And there's the notation for fully connected layers; the number, say 256, just means 256 units. Okay, so overall, as discussed previously, your convolutional layers are going to detect features in the various locations throughout the image. Your fully connected layers are going to pull all that information together and finally produce the output. There are also some other architectures. You could look at the Inception networks by Google or ResNet by Microsoft for inspiration, if you want to go and have a look at what some other people have been up to. Moving on to slightly more complex topics: batch normalization. It's recommended; in most cases it makes things better. It's necessary for deep networks. By the way, I should tell you: in deep learning, a deep neural network is simply a network of roughly more than four layers. That's all it is. That's what makes them deep. And if you want particularly deep networks of more than eight layers, you'll want batch normalization, otherwise they just won't train very well. It can also speed up training, because your cost drops faster per epoch, although each epoch can take longer to run. You can reach lower error rates as well.
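Continuing the hedged Lasagne sketch from above, batch normalization (and dropout, which comes up next) can each be added with a single call; the filter count and dropout probability here are just placeholders:

    from lasagne.layers import batch_norm, DropoutLayer, Conv2DLayer, DenseLayer

    # batch_norm() wraps a layer, inserting the normalization after the linear part
    # and before the nonlinearity; DropoutLayer randomly zeroes units at training time.
    net = batch_norm(Conv2DLayer(net, num_filters=64, filter_size=3))
    net = DenseLayer(DropoutLayer(net, p=0.5), num_units=256)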
The reason why batch normalization is good: you've got to think about the magnitude of the numbers. You might start out with numbers of a certain magnitude in your input layer, but that magnitude might be increased or decreased by multiplying by the weights to get to the next layer. And if you stack a lot of layers on top of each other, you can find that the magnitude of your values either exponentially increases or exponentially shrinks towards zero. Either one of those is bad; it screws the training up completely. Batch normalization standardizes things, by dividing by the standard deviation and subtracting the mean after each layer. So you want to insert it into your convolutional or fully connected layers after the matrix multiplication, but before adding the bias and before the nonlinearity. But the nice thing is, Lasagne, with a single call, does that for you, so you don't have to do too much surgery yourself on the neural network. Dropout. It's pretty much necessary for training. You use it at training time, but you don't use it at prediction and test time, when you actually want to run a sample through the network to see what its output is. It reduces what's known as overfitting. Overfitting is a particularly horrific problem in machine learning. It's going to bite you all the time. It's what you get when you train your model on your training data and it gets very, very good at the samples that are in your initial training set, but when you show it a new example that it's never seen before, it just dies. It fails completely. So essentially what it means is that it gets particularly good at those examples; it picks out features of those particular training samples and fails to generalize. So dropout combats this. What you're going to do is randomly choose units in a layer and multiply a random subset of them by zero, usually around half of them. And you're going to keep the magnitude of the output the same by scaling it up by a factor of two. And then during test and prediction you just run as normal, with the dropout turned off. You're going to apply it after the fully connected layers. You can do it after the convolutional layers as well, but the fully connected layers towards the end are normally where you apply it. That's how you do it in Lasagne. And to show you what it actually does: this is with your dropout turned off, so you see all the outputs going through. Those little diamonds represent our dropout. So we take half of them, we pick them and turn them off, and you see the grey weight lines. What that effectively means is that when doing training, the back propagation won't affect those weights, because the dropout kills them off. And then the next time around you turn off a different subset of them, and so on. And the reason it works is that it causes the units to learn a more robust set of features, rather than learning to co-adapt and develop features that are a bit too specific to those units. Dataset augmentation. Because training neural networks is notoriously data hungry, you want to reduce overfitting and you need to enlarge your training set. And you can do that by artificially modifying your existing training set: by taking a sample, modifying it somehow, and adding that modified version to the training set. So for images, you're just going to take the image and shift it over by a certain amount, or up and down by a bit.
You're going to rotate it a bit, you're going to scale it a little bit, horizontally flip it. Be careful of that one, though. For example, if you've got images of people and you vertically flip them so they're upside down, that will just screw up your training set. So when you're doing dataset augmentation, you've got to think about what you need from your dataset and what it should output, and think about whether your transformations are a good idea. Okay, and finally, data standardization. Neural networks train more effectively when your dataset has a mean of zero, that is, all the values have a mean of zero, and unit variance, a standard deviation of one. And with regression, you want to standardize your input data, and you want to standardize the output. Remember that in regression we are quantifying something, so we're producing real-valued outputs; you want to make sure those are standardized as well. I've personally been bitten when I haven't done that. And when you use your network, when you deploy it, don't forget to do the reverse of the standardization, to get the output back into the scale and range that you want it to be in in the first place. And to do that standardization, you extract all the samples into an array. In the case of images, you're just going to go through all the images and extract all the pixels and splat them out into a big long array, keeping all the RGB channels separate, and you're going to compute the mean and standard deviation in red, green and blue. And you're going to zero the mean by subtracting it, and divide by the standard deviation; that's standardization. Okay. When training goes wrong, as it often will. What you want to do is, as you train, get an idea of what the value of your loss function is. When it goes crazy and starts heading towards 10 to the 10 and eventually goes NaN, everything's gone to hell. So you've got to track your loss as you train your network so you can watch for this. Okay. If you have the error rate equivalent of a random guess, like it's just tossing a coin, it's not learning anything. And essentially, it's learning to predict a constant value a lot of the time. Sometimes there isn't enough data for it to pick up the patterns. It can also learn to predict a constant value in another way. Let's say, for instance, that you have a data set that's divided into, say, 10 classes, but the last class only has about 0.5% of the examples. Now, one of the best ways for the sneaky, horrid little neural network to cheat you is to simply never predict that last class, because it's only going to be wrong in 0.5% of the cases. And that's actually a pretty good way of getting the loss down to a pretty low value, by concentrating on all the other classes and getting those right. And the problem is, it's a local minimum. You can think of it as a local minimum of your cost function. And neural networks get stuck in those a lot. And it will be the bane of your existence. They most often don't learn what you expect them to, or what you want them to. You'll look at it and think, as a human, I know the result is this, and the neural network will learn to pick up features and detect something quite different. So, yeah, that will be the bane of your existence. I'm going to illustrate this with a really nice, cool example that is available online.
I'm going to talk about how you design a computer vision pipeline using neural networks. With a simple problem like handwritten digits, you can just throw one neural network at it and it will do it. Great. Wonderful. But for some more complex problems, that's often just not enough, and neural networks are not a silver bullet. So, please don't believe all the hype that's around deep learning right now. It's theoretically possible to use a single neural network for a complex problem if you have enough training data, which is often an impractical amount. So, for more complex problems, you've got to break the problem down into smaller steps. And I'm going to talk a bit about Felix Lau's second-place solution to the Kaggle competition on identifying right whales. So, his first naive solution was to train a classifier to identify individuals. So, I'm going to pull up his website and... Okay, cool. Okay. So, effectively, these patterns on the head of the whale are what you use to identify an individual. And the challenge is to figure out, given an image of a whale, which individual it is. And this is the kind of image you get in the training set. You've got the ocean surrounding a little whale as he breaches, as he pokes his head over the surface. And you've got to figure out who he is from that picture. So, Felix's first solution was just to stick that through a classifier and see what happens. So, let me scroll and find it. Okay. Baseline naive approach. Here we go. And what he found out is that it gave no better than random chance. So, what he then did is he used what's called saliency detection, where he used a trick to figure out which parts of the image are influencing the network's output the most. And he found out that actually bits of the ocean were affecting it. Why would it do that? Okay, try a thought experiment. I want you to imagine that I give you this problem. You've got a bunch of images of right whales and I say, that's number one, that's number seven, that's number 13. But you've also been given really, really horrendous, horrible amnesia that has completely wiped from your mind the concept of what a whale is, what the ocean is, just about every human concept you have. So, you are literally starting out with images and zero knowledge at all. No semantic knowledge about the problem. You can't even guess what it is. You're just given images and numbers and then told, from this training set, figure out what these are. How are you going to make that decision? Is it the ocean? Is it the whale? What part of the image is actually helping you make that decision? And when you think about it from the perspective of a neural network, that's where every neural network is starting out from. It's starting out from zero knowledge. And that's why the initial solution didn't work very well. You could do it if you had a billion images with all the ground truths, where the marine biologists have gone and, you know, hand-classified a billion images of them and put in enormous amounts of human effort, because then the signal will eventually come through the noise. But we can't practically do that in real life. So, his solution: I mentioned the region-based saliency, so he found out that the network had locked onto the wrong features. So, he trained what's called a localizer. Now, I've told you about classifiers and regressors. Localizers, what they do is they look at an image and they find that the target point of interest is over there in the image.
And so what he did is he got the localizer to take that image of the whale and find that the head is there, and after that he ran it through the classifier. The idea is that he first trains a network to look for the whale, pick it out, crop it from the image, and then just work on that piece. Furthermore, he trained a keypoint finder to find the front of the head and the back of the head, so he could take the image of the whale and rotate it so that they're all in the same orientation and position. After that, having got really uniform images of whales, he could run them through the classifier. And eventually, training the classifier on oriented and cropped whale-head images got him second place in the Kaggle competition. I think that's a nice illustration of how careful you have to be in how you use these things. All right, how am I doing for time? Great. Okay, I might even have a bit extra to go through at the end, you never know. So: OxfordNet, the VGG net, and transfer learning. Using a pre-trained network is often a good idea. The OxfordNet VGG-19 is a 19-layer neural network. It was trained on that big million-image dataset called ImageNet, and the great thing is that they have generously made the network weights file available under a Creative Commons attribution licence, and you can get it there. There's also a Python-pickled version that you can grab hold of as well. They're very simple and effective models: they consist of three-by-three convolutions, max pooling and fully connected layers. That's the architecture. If you want to classify an image with VGG-19, I'll show you an IPython notebook that will do it. All right. So we're going to take an image to classify, which is our little peacock here, and we load in our network. Oh, sorry, beg your pardon. Cool. So we've got a little peacock that we're going to classify, and we load in our pre-trained network. I think I'd better skip over the code a bit, it would get a bit dull, but you can go through the notebook yourself; it's on the GitHub. I hope you don't mind if I spin through this quite quickly. So we're just going through a bit about what the model is like. Okay, this is where we actually build our architecture: you can see the input layer, our convolutional layers, max pooling; this is all the Lasagne API. We'll skip all this and go down. Finally we've got our output, which has a softmax nonlinearity. There you go: build it, drop all our parameters in. Beg your pardon, sorry, this is originally from my tutorial. Anyway, finally we show the image we're going to classify, and we predict our probabilities here. Notice the output is a vector of 1,000 probabilities, and we find that the predicted class is 84 with probability 98.9%, which is a peacock. You can run that yourself and see that it works. So the cool thing is, you can take the pre-trained network and just use it yourself. Transfer learning is a cool trick, and it's the last trick I want to show you. Training a neural network from scratch is notoriously data hungry: you need a ton of training data, and preparing all of that can be seriously time-consuming and expensive. What if we don't have enough training data to get good results, and don't have the money to prepare it? The ImageNet dataset is really huge.
Millions of images with ground truths. What if we could somehow use the ImageNet dataset, with all that vast data, to help us with a different task? The good news is we can. The trick is this: rather than trying to reuse the data, you take a network that was trained on it, like VGG-19, or simply download VGG-19, keep part of that network, throw away the end of it, and stick some new layers on the end that output what we want. You then train just the bit you have added, and fine-tune the whole thing at the end. Essentially, you can reuse part of VGG-19 to, say, classify images that weren't in ImageNet, for classes and kinds of object category that ImageNet doesn't mention. You can reuse it for localization, where you want to find the location of an object, the location of that whale head, maybe, or for segmentation, where you want the exact outline of the boundary. To do transfer learning, we take VGG-19, which looks like that: those are all our layers. We chop off the last three. The stuff on the left gets hidden so we can show some text. We chop off those last three layers and create new, randomly initialized ones on the end. Then you train the network with only your own training data, but you only learn the parameters of the new layers you have created. After that you fine-tune: having trained those new layers, you train again, this time updating the parameters of all the layers, and that gets you some better accuracy. The result is a nice shiny new network with good performance on your particular target domain, somewhat better than you could get starting from scratch with only your own dataset. Finally, some cool work in the field that might be of interest to you. Zeiler, I think I mentioned this briefly already, in "Visualizing and Understanding Convolutional Networks", decided to visualize the responses of the convolutional layers to various inputs. You can see in these images where they visualize what is going on. If you want to find out what your network is picking up, this is a good place to look for how to work out what your network is detecting. These guys decided to figure out whether they could fool a neural network: they generate images that are unrecognizable to human eyes but recognized by the network. For instance, the network has high confidence that that is in fact a robin. It looks like horrible noise, but it thinks that is a cheetah, that is an armadillo, that is a peacock. They then went on to ask how to generate images that make sense to a human: that is a king penguin, that is a starfish. You can see where it is picking things up: it is looking for texture but not for the structure of the object; it picks up certain things and ignores other quite important features. You can also run neural networks in reverse: you can get them to generate images as well as classify them. These guys decided to make them generate chairs: they give the orientation, the design, the colour and the parameters of the chair and try to generate an image. They end up with these chairs, and they are even able to morph between them. And this one got a lot of press: "A Neural Algorithm of Artistic Style".
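To make the transfer-learning surgery described above concrete, here is a rough sketch using the Lasagne API mentioned in the talk. The stand-in net dict, the 'fc7' layer name and num_my_classes are assumptions based on the usual VGG-19 recipe, not something shown in the talk:

from lasagne.layers import InputLayer, DenseLayer, get_all_params
from lasagne.nonlinearities import softmax

# stand-in for the real pre-trained VGG-19 layer dict; in practice you would
# build the full architecture and load the downloaded weights into it
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['fc7'] = DenseLayer(net['input'], num_units=4096)

num_my_classes = 10  # assumption: our own, smaller set of classes

# chop off the original 1000-way classifier and bolt a new, randomly
# initialised one onto the end
new_output = DenseLayer(net['fc7'], num_units=num_my_classes, nonlinearity=softmax)

# phase 1: train only the new layer's parameters
params_new_only = new_output.get_params(trainable=True)

# phase 2 (fine-tuning): train the parameters of all layers
params_all = get_all_params(new_output, trainable=True)

# each list is what you would hand to your update rule (for example,
# lasagne.updates.adam) in the corresponding training phase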
If you have got the Prisma app on iPhone, you will know what this is all about. They took OxfordNet and they extract texture features from one image and apply them to another. You take a photo of, say, this waterfront, and you take a painting, say The Starry Night by Van Gogh, and it repaints the photo in the style of Van Gogh, or in the style of The Scream, or any of these others. It is very, very cool, and the nice thing is that there are iPhone apps that do this now. What these guys did is a bit of a masterpiece of work: these images of bedrooms are generated by a neural network. The way they did it is they trained two neural networks, one to be a master forger and the other to be the detective. The master forger tries to generate an image, and the detective tries to tell: is that a real bedroom, or one generated by the forger? The idea is that you co-adapt them so they both get better, so the master forger gets better and better until it generates pictures like that, which is kind of cool. They even took it further by combining some of the parameters: if you have seen the king minus man plus woman equals queen results that have been done with word2vec, they did similar things with facial expressions as well. Anyway, I hope you found this helpful. I hope it has been good. You have been a great audience. Thank you very much. We have about nine minutes now for questions. It was a great talk, thank you. I actually have several questions. The first one: when you are modelling a neural network, how do you choose, or is there a way to choose, how many hidden layers and neurons there are? That was an issue for me when I was building some. I am not aware of any particular rule of thumb for choosing your network architecture. The rule of thumb I use is to look at things that have worked for other people and build off that. The Oxford architecture, where you have the small convolutional kernels, works well for a lot of people: a few of those layers followed by max pooling or striding. I think some people have tried things like grid search, where you automatically alter the architecture, but given that for something like an ImageNet model your training time can extend into days or weeks even on really big GPUs, that can be impractical. So I'm afraid it is just rules of thumb: try things out and see what works, look up the literature, see what other people have done and adapt it. I'm sorry I can't give you more than that. My second question: we saw that you are analysing images and numbers. Is there a way to take strings as input and recognise patterns in them? How would you do that? Would you have to transform them somehow? For text processing, I think what people tend to do is use something like word2vec to convert each word into an embedding, which is a 200- or 600-element vector. Then they use what is called a recurrent neural network, where rather than the signal just flowing straight through to the output, it goes partially through and feeds back into an earlier layer, so the network has an idea of time. I have not implemented those models, so I'm afraid I am outside my comfort zone in terms of being able to advise you, but look at recurrent neural networks. They tend to use the word embeddings.
They tend to use the word embeddings to convert the words into vectors. The trivial way of doing that is a one-hot representation: if you have 2,000 words in your vocabulary, you represent a word with a vector of all zeros except a one in the position for that particular word. Given the sparsity of that input, it often causes problems, which is why they use the embeddings instead. And the last question: could you train a neural network to do maths, like addition, maybe multiplication, and if you can, would it be faster than the way processors normally do it? You can train one to do addition. I think there are people who have taken that handwritten-digit dataset, put two handwritten digits in an image, and trained the network to figure out what they are and produce the sum. It can work. Multiplication they don't do; people haven't actually figured out how to get a neural network to do that. So the models can't extend to certain things, which is interesting; there are certain things they just don't do very well. So I think it is quite limited. And as for whether it would be faster: no way would it be faster, because you are using a hell of a lot of mathematical operations just to do something that is a one-instruction operation in the processor. Sure. Thanks for the talk. Really, really interesting, and great stuff at the end around the images. What are your thoughts on how neural networks could be applied to text analytics, because most people don't do that? Text analytics is outside my area, so I don't know. I would speak to Katharine Jarmul. She is here, and she gave a very, very good talk with a really good introduction and overview of what the text-processing world is like, and she covered quite a few neural network models. Neural networks are some of the best models for text now, but it is outside my area of expertise; she knows her stuff on that, so I would speak to her. Any other question? Hello. The name of neural networks comes from the science of the brain. Do you know if they are used widely in brain science? Not sure. I think the model that we use for the neural networks I have been talking about here is quite different from how neurons in the brain work. My basic layman's understanding of brain neurons is that they operate on spike rates: they generate output spikes, and it is the frequency of those that is roughly the strength of their output. So I don't think that trying to liken these to one another gets you very far. Where the similarity lies is that people looked at how neurons in the brain are hooked up to each other and asked how we could make something that models that. What we have got is something that seems to work well given our processors and produces very good pattern recognition. But as for similarities to the brain beyond that, I don't feel comfortable saying any more. Any other questions? Hi. Have you heard of the self-driving cars using deep learning to decide how they drive? I wonder how they would update the cost function, because it is a stream of video rather than a fixed static output. I've heard about it, but I'm not sure how the hell they are doing it. I don't know.
I suppose if you were to try to do something like that, one thing you could do is prepare a bunch of footage where you say: the human driving this car did well, they haven't crashed it or killed anyone, so all of that is good; and maybe for some footage of accidents you say: that is bad, don't do that. What you probably want to do is say: given this video, produce these outputs, that is, steer like this, accelerate and brake like this, make these decisions. That's actually a little bit like the Atari game-playing neural networks that Google developed, the stuff where they got really good scores on the video games: they take the screen as input and decide whether to move up, down, left, right or shoot. It's a similar thing, except that instead of deciding whether to move up, down, left, right and shoot, you want your control outputs to be the steering wheel, the accelerator and the brakes. You could do it like that. But given my experience of it, and given that, as I mentioned, if you have particularly rare situations that make up 0.001% of your training set, quite often the neural network will just cheat and never bother to learn anything from them, because the cost function will discover a local minimum that ignores them, I would not be very comfortable getting into a car that was controlled purely by a neural network. I would not want to put my life in the hands of a vehicle like that. That might be how you could build it, but I don't think it would be very good. Any other questions? Hi. Do you ever combine neural networks with other techniques, like approximation algorithms? Approximation algorithms? Yeah, like optimization techniques; I was thinking about the travelling salesman problem, for example. I don't know. I haven't tried them for that, and I'm not aware of it. I wouldn't be surprised if someone has tried it, but I haven't looked at it, I'm afraid. That's a difficult one. You'd have to figure out some kind of cost function that measures how good a solution is, and how one would go about doing that for certain problems I'm not sure. I have time for one last question. Maybe it's kind of a technical question about Theano. When you apply dropout, does the expression get recompiled and reoptimized to be efficient, so it doesn't take account of those weights? Or do the floating-point operations still go to the GPU or CPU, just with zeros, so they don't affect the gradient? I think it's the second, because what you do is get the random number generator to generate either a zero or a one, and then you multiply that into the expression. So I don't think it's actually optimizing. I think it would be quite difficult to optimize, because for every single sample in the mini-batch you are blanking out a different subset of the units. I'm not even sure how one would go about optimizing that in an efficient way, because you would have to select which units you're dropping out and then decide from that which operations you can save, and doing that on the fly would be quite tough. So I would guess that it doesn't.
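The dropout answer above boils down to multiplying activations by a random 0/1 mask. Here is a tiny NumPy illustration of that idea; it is not Theano's actual implementation, and the rescaling convention is an assumption, since frameworks differ on whether they rescale at training time or at test time:

import numpy as np

rng = np.random.RandomState(0)

def dropout(activations, p=0.5):
    # sample a binary mask: each unit is kept with probability (1 - p);
    # zeroed units contribute nothing to the forward pass or to the gradients
    # flowing through them
    mask = (rng.uniform(size=activations.shape) >= p).astype(activations.dtype)
    # "inverted dropout": rescale the survivors at training time
    return activations * mask / (1.0 - p)

a = np.ones((4, 8), dtype=np.float32)
print(dropout(a))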
So now, since there are no other questions, I'll thank Jeff for his wonderful talk, and I'll just say: enjoy lunch.
Deep learning: how it works, how to train a deep neural network, the theory behind deep learning, recent developments and applications. ----- (length: 60 mins) In the last few years, deep neural networks have been used to generate state of the art results in image classification, segmentation and object detection. They have also successfully been used for speech recognition and textual analysis. In this talk, I will give an introduction to deep neural networks. I will cover how they work, how they are trained, and a little bit on how to get going. I will briefly discuss some of the recent exciting and amusing applications of deep learning. The talk will primarily focus on image processing. If you are completely new to deep learning, please attend T. Rashid's talk 'A Gentle Introduction to Neural Networks (with Python)'. His talk is in the same room immediately before mine and his material is really good and will give you a good grounding in what I will present to you.
10.5446/21147 (DOI)
We're going to get started with Helen and Managing Mocks. Please give her a big round of applause. Hi, everybody. My name is Helen. I'm a freelance programmer and I've been using Python for quite a few years. This is my first EuroPython; I'm really excited to be speaking here, and I'm really enjoying Bilbao and having a great week. So, why this talk? I originally wrote it for PyCon UK last year, not because I felt like an expert on mocking, but because it was something I'd worked with quite a lot and struggled with; I felt like I was hacking my way around it and not really taking the time to understand it properly. Then I started a new project with lots of API interactions and databases I didn't own, things that make quite a good case for mocking, and I wanted to start doing things nicely, write lots of tests and try to understand this properly. So it's something I've struggled with and learned a lot about, and I want to share a few of the things I've learned. So, what is a mock? A mock is a fake object that replaces a real object, and the main use for this is in unit testing. Mocks are a way to control and isolate the environment in which your test runs. Why would you want this? First of all, you want your tests to be deterministic. If they are failing, it should be because there's something wrong with your code, not because of some external factor like a server being down. You need to be able to trust your tests; if you don't trust them, you'll start ignoring them. Mocking gives you more control over the inputs to your test: you can simulate different environments for your code and make sure it responds correctly to all kinds of scenarios. It's also fast. It's an in-memory operation, so it's generally a lot faster than the things it replaces, like network calls and file systems. And the speed of your tests is really important: if they're too slow, you'll stop running them. I'm mostly going to be talking about the mock library that comes with Python, which is called unittest.mock. Don't worry about the unittest name; you can use it with any framework, and it works quite well with pytest and other things. It used to be a standalone library, written by Michael Foord, and it was brought into Python in 3.3. If you're using earlier versions, you can still use the standalone version, and that's actually maintained and kept up to date. So on Python 3.3 plus, just use unittest.mock; otherwise, pip install mock and then import mock. We're going to start with a look at Mock objects. You'll find yourself working with these quite a lot. You import Mock with a capital M, and that's the Mock class. You set it up with whatever values you want it to have. Here I'm setting a value, and then I can get that back as a property of the mock. If you try to use a property that you haven't set, that's another mock. If you try to use one of its properties, that's also another mock. So it's mocks all the way down, and you can dig down and set the values of these mocks and get them back. Mocks are also callable, and when you call a mock, it returns another mock. So we've got more mocks. If you want to change what your mock returns, you can set its return_value, and then when you call it, you'll get back the value that you set. So you can dig down as far as you want into this structure.
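A quick sketch of the Mock behaviour just described; the attribute names here are made up for illustration, not taken from the speaker's slides:

from unittest.mock import Mock

m = Mock()
m.some_value = 3
assert m.some_value == 3             # attributes you set come straight back

child = m.anything.you.like          # attributes you never set are just more mocks
result = m()                         # calling a mock returns another mock by default

m.get_data.return_value = {'id': 1}  # configure what a call should return
assert m.get_data() == {'id': 1}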
You can just go as deep as you want, and this is all about setting up the right environment for your code to pick up on. Another way to set up what your mock does when it's called is side effects. A side effect can be an exception: if you assign an exception class to side_effect on a mock, then when that mock is called, the exception is raised. It can also be a function, so you can override the mock's behaviour. I don't find I use that a lot, but it's there if you need it. And a side effect can also be a list of things you want to happen. This is useful if your code is going to make multiple calls to the mock and you want a different thing to happen each time. So I can set up side_effect with a value, an exception, and then another value: the first time I call it, I get the value; the second time, it raises the exception; the third time, I get the other value. And if I try to call it again, I get a StopIteration, because I ran out of side effects. If you want it to go on forever, it's just an iterable, so you can use something like a cycle. While you're doing things to mocks, they are recording everything that happens to them: they record all the calls that are made and all the arguments that are passed in. You can query that information using these assertion methods, and most of them want you to say which arguments you expected the mock to be called with. We've got assert_called_with, which looks at the last call that was made and doesn't care what happened before that. There's assert_called_once_with, which is the same but also makes sure it was only called once. There's assert_any_call, which just checks whether that call was ever made with those arguments and doesn't care about the others. There's assert_has_calls, which you can give a list of calls, and you can specify whether you care about the order. And then there's assert_not_called, which just checks that it wasn't called. You need to be a little bit careful with that last one. If you do this in Python 3.5, that is, create a mock, call it, and then assert that it wasn't called, you get an AssertionError, and that's what we'd expect to happen. Do the same thing in Python 3.4, create the mock, call it, assert_not_called, and all that happens is you get back another mock; if that's in a test, the test will just run and won't fail. The reason is that assert_not_called didn't exist before Python 3.5, so when we try to use it, it just behaves like anything else we might try to access on the mock. The test passes, and that's maybe a little bit dangerous. If you do plan to use that call, make sure you know which versions of Python people are going to be developing with, and maybe just don't use it if you can't be sure. So that's a little bit dangerous, and it has caught me out a couple of times, probably because I wasn't doing red-green TDD properly, which is bad. Then 3.5, thankfully, does something about this: we get these safety checks. If you try to access something beginning with assert or assret, because you might make typos, it will complain. For example, there's assert_called, which is being added in Python 3.6. If we try to use that in Python 3.5, we get an AttributeError, and that's good because it's telling us that something's wrong with our code. And if you really do have attributes beginning with assert that you want to use, there's a flag called unsafe you can use to switch that check off.
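A short sketch of the side_effect behaviour described above:

from unittest.mock import Mock

m = Mock()
m.side_effect = [1, ValueError('boom'), 3]

assert m() == 1        # first call: the first value
try:
    m()                # second call: the exception is raised
except ValueError:
    pass
assert m() == 3        # third call: the next value
# a fourth call would raise StopIteration, because we ran out of side effects

assert m.call_count == 3
m.assert_called_with()  # inspects the arguments of the most recent call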
When you're using these assertion calls, you generally have to say which arguments you're expecting to be passed in, and they're generally fairly fussy about that. But sometimes you might not care what one of the arguments was; maybe it's particularly complex, so you're testing it in separate tests. You can use mock.ANY for that: you pass it in in place of an argument, and it just says, I don't care what the value of this argument is. If you want even more control, you can use comparison objects. These aren't really a special mock thing; they're just a nice Python thing with magic methods, and there have been other talks about those this week. If you have an instance of a class that implements the __eq__ magic method, you can implement a custom comparison and pass that in as a check in your assertions. For example, here I'm checking whether my function was called with a multiple of five, which it was, and then with a multiple of four, which it wasn't, and I've got a nice informative string describing what went wrong as well. You can have even more control over the inspection of your calls if you want to work at a lower level: you can ask whether the mock was called, how many times it was called, and get raw access to all the arguments it was called with. So we've looked at a lot of features of mock objects; let's have a look at how they fit into your tests. As a general pattern, we'll create a mock and make sure it's in the right place, then set up the values on it, the environment for the test, then run the code under test, and then check that our expectations have been met. First of all, let's look at how we get a mock into place. For this we use patch. Patch tells Python where to put a mock: you give it a path to a module or a class or a function, anything that can be looked up as a path, and it sets things up so that when that path is looked up, rather than getting back the real thing, you get back a mock. And you get access to that mock in your test, so you can manipulate it and have it injected into your code via patch. The object it actually gives you is a MagicMock, which is like a Mock object but with some of the magic methods set up for convenience, but we don't need to worry too much about that. There are a few different ways you can use patch, depending on your situation. First of all, we have a decorator. I've got a little example function which I'm going to be testing: it uses requests to contact the GitHub API, fetches the JSON document for a user, extracts that user's number of followers and returns it. It just looks like that when we run it. So we patch our test method with requests.get, and we get a mock object passed into our test method as a parameter. We can then manipulate the values of the mock, chaining all the way down through return values and objects and more return values, to set up what we're going to pretend is being returned from this API. Then we run our assertion and check that the right value came back. You can also move the patch up to the top of your test class, if you've got one, and have it as a class decorator; if all your test methods are patching the same thing, that's quite handy. You can stack multiple patches, but you need to be careful about the order in which you do that.
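Here is a rough reconstruction of the get_followers example just described; the function name, the URL and the exact JSON shape are assumptions for illustration, not the speaker's actual code:

import requests
from unittest.mock import patch

def get_followers(username):
    # assumed shape of the function from the talk: fetch the user's JSON
    # document from the GitHub API and return the follower count
    response = requests.get('https://api.github.com/users/%s' % username)
    return response.json()['followers']

@patch('requests.get')
def test_get_followers(mock_get):
    # chain down through the mock to fake what the API returns
    mock_get.return_value.json.return_value = {'followers': 42}
    assert get_followers('somebody') == 42
    mock_get.assert_called_once_with('https://api.github.com/users/somebody')

# note: if the module under test did `from requests import get` instead,
# you would have to patch the name inside that module, as the talk
# goes on to explain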
It kind of works from the inside out: the bottom-most patch corresponds to the first parameter, and so on. Patch can also be used as a context manager, so we get a mock inside our context, and the patch is active for the lifetime of that context. This gives you more control over the lifetime of your mock if that's what you need, and if you have any kind of clashes with other things that do clever stuff with parameters, like pytest, it can be quite handy. I tend to use the decorator version when I can, because I don't like long lines of code, and the context managers make your code longer. You can have even more control by calling start and stop on the object returned by patch. I don't find I use that much, but it can be useful. One thing I particularly struggled with in mocking was patch paths: how the lookups work and which path to use when I'm patching. I often found that mocks weren't appearing where I expected them to, and I was getting very confused, so I just want to look at how this works. So I've got my get_followers function again, and it's in a module called githubutils, and I've got a test for it. The test imports githubutils, and githubutils imports requests and pulls it into its own namespace. Coming back to our test: we patch requests.get, we get our mock, we manipulate it, and then we call get_followers. get_followers looks up requests.get, and because we patched that path, it gets back the mock, so our test behaves as we expect it to, and it passes. Now consider a slightly different example where I've done a from requests import get at the module level, rather than looking it up inside the function. The test is almost exactly the same apart from the import. So the test imports githubutils2, which imports get from requests. Our test patches requests.get, gets the mock, manipulates it, and calls get_followers. get_followers calls get, but get has already been looked up on requests, and it was looked up before we patched. So what happens is we get a slight delay and then something that doesn't match the value we set up, because it was actually talking to the network. You might say we patched too soon; a better answer is that we patched the wrong thing. This is how we fix it: we patch the thing that our module has already looked up. We patch it on the module itself, because the module now owns a thing called get. Our test looks like this, and that works. The Python docs say: ensure you patch the name used by the system under test. Another useful thing is patch.object. You can use this to attach a mock to part of an object you already have. For example, here I've got a simple greet function on my User class that says happy birthday to the user if it's their birthday. I'm testing that function and I want to mock is_birthday. So I take a user object, and I patch by passing in the object itself, the name of the method I want to patch, and the return value I want it to have. When I get inside that context, user.is_birthday is a MagicMock that will return True, and that makes my test work. Mocking the time can be quite tricky. The date module is written in C, so you can't mock individual bits of it; if you try, you'll probably get something like this. You can mock the whole thing, but it does mean you have to chain all the way down to the bit you want, and you don't get a choice about whether you're mocking the other bits. So, back to the is_birthday method, which is based on looking up today's date.
So I patch that, I modify the return value of today, and that works. A possibly nicer way of doing it is the freezegun library, which is pretty cool. You pip install freezegun and it comes with a decorator called freeze_time. That takes all sorts of interesting, human-friendly formats; you can give it dates and datetimes and so on, and it's pretty straightforward to use. I like that library. Mocking the file system is also quite tricky. There's a utility function called mock_open, which helps you out here. You give it a parameter called read_data, telling it what data you want to pretend your file has, and it sets up a special mock that has all the right file-handle behaviour. Because we're using open, which is a built-in, we need to do something slightly different when we patch: we need to use create=True, because we can't overwrite the built-in; we need to create a local copy of open that our code will pick up on. Once we've got that, we can open and read our file. There's one little problem with this: you can't iterate over that file handle, and your file-handling code probably has a nice, Pythonic interface like this. If you want that, you have to modify the mock you get back from mock_open and chain all the way down to the __iter__ magic method to set up the lines you want. And if you're doing that inside a context manager, you need to go through the __enter__ magic method as well. Now, mocking a property. We've got our person class again, and I've made is_birthday a property now. If you try to patch the object, that doesn't work: you get this error, and I'm not entirely sure why; I guess it's something to do with the way properties work. You need special handling for that, and for this there's a parameter called new_callable, which lets you say what type of mock you want patch to create. There's a special mock class called PropertyMock set up just for this purpose. The other thing is that you need to patch the class rather than the object, which might be a little bit limiting, but as far as I know, that's the only way to do it. I've just got a little example of a mock-based test. I've got this very crude retry function: you pass it another function, and it tries over and over to call it until it succeeds. If it gets a database error, it waits a little bit and then tries again, and it keeps doubling that delay. When we think about testing this, I can see two big reasons for mocking. First of all, we've got this time.sleep, which, if it runs in our test, is going to slow our test suite down. Secondly, we want to be able to simulate database errors; we don't want to cause real database errors. This is the test I've written for the function. We patch time.sleep, and that gets rid of the delays for us; we get a mock object passed in as our parameter. We create another mock inside the test: this is going to be the function that's run each time, and we set it up with three side effects, two failures followed by a success. We call our retry function, and then we have a look at what happened. First, we check that it was called three times, because we had two failures and then a success. Then mock_sleep.assert_has_calls: we're checking the delays that were used and making sure the delay was doubled each time. Finally, we check that the result that came back was what we set up as the return value of the function. That's just a little example for you.
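A rough reconstruction of the retry example just described; the function names, the exception class and the initial delay are assumptions, since the original code lives in the speaker's slides and notebooks:

import time
from unittest.mock import Mock, patch, call

class DatabaseError(Exception):
    pass

def retry(func, initial_delay=1):
    # crude retry: on DatabaseError, sleep and try again, doubling the delay
    delay = initial_delay
    while True:
        try:
            return func()
        except DatabaseError:
            time.sleep(delay)
            delay *= 2

@patch('time.sleep')
def test_retry(mock_sleep):
    func = Mock(side_effect=[DatabaseError, DatabaseError, 'result'])
    result = retry(func)
    assert func.call_count == 3                      # two failures, then a success
    mock_sleep.assert_has_calls([call(1), call(2)])  # the delay doubled each time
    assert result == 'result'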
I'm nearly out of time, so I had a little bit of material about when you should mock and why, which I'll skip; I'm going to post these slides online afterwards. I've also got some recommended reading for you. The testing goat book, Test-Driven Development with Python by Harry Percival, is very good and talks quite a lot about why you should mock, the advantages of mocking and how it can help drive better design. Gary Bernhardt's talk 'Fast Test, Slow Test' is a very good talk about the same sort of things and is worth watching. And the unittest.mock documentation: I couldn't possibly cover every feature of mock in the time I've had, but it's a great read. I'm helenst on Twitter and GitHub, and there's also a GitHub repository of the IPython notebooks with this material; the slides are quite dense, so there are some interactive examples in there. Thank you. Thanks very much. Please thank the speaker.
Helen Sherwood-Taylor - Managing Mocks Mocking is a valuable technique for writing tests but mocking effectively is often a stumbling block for many developers and can raise questions about its overall value as a technique. There will be a brief introduction to mocking, then a look at features and techniques of Python’s unittest.mock library and cover some useful tips and common scenarios, so this will be useful to those who have some experience mocking but would like to do so more effectively. ----- Mocking is a valuable technique for writing tests but mocking effectively is often a stumbling block for many developers and can raise questions about its overall value as a technique. The audience will have some familiarity with unit testing and may have tried mocking before, but some introduction will be provided for those who haven’t. We will look at some features and techniques of Python’s unittest.mock library and cover some useful tips and common scenarios, so this will be useful to those who have some experience mocking but would like to do so more effectively. Summary of proposed content: 1. A short introduction to what mocking is and why it is useful. 2. Tour of Python’s mock library and how to make the most of it * Creating and manipulating Mock objects * Setting up return values and side effects to control test environment * Inspecting mocks - different ways to examine a mock object and find out what happened during the test * How and where to patch 3. Common mocking situations - scenarios where mocking is particularly useful and/or tricky to get right. For example - date/time, filesystem, read only properties 4. Some discussion of when mocking is and isn't helpful. Focus will be mainly on Python's unittest.mock module but we will also have a brief look at some other useful libraries.
10.5446/21154 (DOI)
Good morning everyone, welcome to Bariatou, to listen to Ivan Gulenko, who will talk about how to make IT recruiting suck less. We'll have a talk of about 25 to 30 minutes, and then we'll be able to ask questions and have a discussion about what we have heard. So please. Cool, so hi guys. The goal of this talk is essentially to give those of you who are hiring managers a chance to get some inspiration for how to improve IT recruiting in your company, and for those of you who are more on the candidate side, maybe job seekers, some ideas for how to tailor your profile so that companies will not overlook you. So let's see. I'm an engineer, and people ask me: why would you do recruiting? Why do you do this job that is often done by non-technical people who just match keywords? Well, the reason is that there are many things to fix precisely because of that. And there are a couple of companies that are already doing a very good job, Hired and Honeypot, trying to reverse the process, so that companies apply to engineers. There is Starfighter: essentially they are teaching engineers how the stock market works and how to write software in that space. It's done by Patrick McKenzie, who is sort of famous on Hacker News. Also there is interviewing.io by Aline Lerner, an MIT graduate who had been doing technical recruiting for a couple of years in the Bay Area, and now she has a platform where Bay Area companies are matched with engineers, and it's all about code: they have a sort of coder pad, they do algorithmic challenges on data structures, and they've even gone so far as to anonymize the voice, so it's not possible to tell whether it's a woman or a man interviewing. Also there's Workshape.io from London. They essentially had the idea: let's ask the engineers what they want to do in their next job. In this case somebody said, okay, I want to do front end and UI/UX, and the company says they are looking for a UI/UX person, and then these interests are drawn as shapes and overlaid on each other, so you can see clearly how the interests of the engineer and the interests of the company match. So that's how matching is done by Workshape, a very cool company. Then there is Triplebyte, a Silicon Valley-based Y Combinator startup that recruits for other Y Combinator startups. They essentially tell you: if you interview with us, you will skip all the steps and jump right to the final interview at Dropbox or Airbnb or wherever. This is definitely a super attractive offer for engineers, because in my experience as a recruiter I've noticed that after five interviews there is interview fatigue: you're tired, you don't want to interview any more. So this company adds the value that you get to skip steps, which is pretty cool. My motivation is that I believe hiring is even more important in Europe, because in Europe there's less of a hire-and-fire mentality than in the US. Here, if you hire somebody, you really stick with the person, and in my opinion the most important aspect is to have cool coworkers, because no matter how cool the project or product you work on is, at some point you'll get bored in a way, and the only thing that in my experience kept me going as an engineer were the coworkers, my boss, the people I would spend more time with than with my spouse, and so on.
So that's something to keep in mind, and I would really urge you to get involved with the recruiting and hiring process in your company. No matter whether you're a hiring manager or a candidate, don't let HR screw up the job ads, and take part in the recruiting processes in your company. A Google recruiter once told me that at Google, A players hire A players or A-plus players, whereas at normal companies, B players hire B-minus or C players. So please don't do that, because I believe it's kind of true at some companies, and I would really like to see you invest a lot of effort into getting your recruiting processes right. However, I would not recommend copying the recruiting process of Google and asking people to transform red-black trees into binary trees where the prime numbers are divisible by three, or something like that, because the big companies can ask those questions: they have a revolving door of candidates because of the big brand. Engineers will come to them anyway, and they have many, many resumes to look at. So they have the prestige of being a big brand, which your company probably does not have yet. But don't be disappointed, because prestige is just fossilized inspiration: if you do anything well enough, you'll make it prestigious. Until you are at that point, though, you have to make your recruiting processes attractive to engineers. So show what you have. You don't have a big brand name, but maybe you have a cool technology stack or a great opportunity to contribute. Junior engineers especially will love it if you tell them: we need an API that does this and that, and in this three-month internship you're going to build it for us. In my experience this is the only way to get Google-quality interns into your no-brand company: you give them a big chunk to work on. And reply fast to inquiries. I just talked yesterday to an engineer who said he interviewed with two companies; the second one answered in two minutes, and he stopped sending resumes to others. So as a hiring manager, I would really urge you to reply and get back to people as quickly as possible. At this conference I saw one especially good example, Binder. They have two posters, and on the right poster you see the technology stack very clearly, while on the left you see "do you have what it takes to influence our product?". So they communicate: you're going to have a cool technology stack, and you can contribute. You don't really see anything about the product and what they're doing, whereas on their other posters outside you have lots of more marketing-oriented material, which was clearly not designed for this conference; this poster is clearly recruiting material. So if you put in the effort to really look at what you want and which kind of engineers you think you want to attract, there is a list of programmer types put together by triplebyte.com. This is based on 10,000 matches they have done: they matched engineers with companies and then looked at the differences, and you get some kind of profiles that some of you might recognize yourselves in. For instance, there is the academic programmer. Those are candidates who have spent most of their career in academia, where programming was part of their master's research. They have very high raw intellect and can use it to solve hard programming problems.
So with this kind of candidate, I usually ask if they can explain what a git rebase is, to see if they have ever collaborated with others and not just written scripts to bang out some research thing and then publish the paper. There's the experienced rusty programmer. Those are candidates who have lots of experience and can talk in depth about different tech and databases, explaining their positives and negatives in detail. When programming during an interview, though, they're a bit rusty; they usually get to the right place, but it takes a while. This is a place where probably all of us will end up, because some of us are young programmers and some are older, but at some point we'll be there. And I see, especially at smaller and middle-sized companies, a problem with keeping people who are over 50 in the company and giving them an opportunity to continue growing. But I'll get back to that later. The trial-and-error programmer: candidates who write code quickly and cleanly, though their approach involves a lot of trial and error. They dive straight into programming and seem a little ad hoc, but their speed enables them to ultimately solve problems productively. This might suit early-stage companies that have no processes in place and need people to bang out code really quickly. The practical programmer: candidates solve practical tasks, even with fairly abstract programs, but they are uncomfortable with CS terminology and don't have a deep understanding of how computers work; they're not comfortable with stuff like C. These might be engineers who work for web agencies. The child prodigy programmer: the candidate is very young, 19 years old, and decided to go straight into work, skipping college. They've been programming since a very young age and are very impressive in their ability to solve hard problems. They've also been prolific with side projects and are mature for their age. Likely they'll found a company in the future. The product programmer: candidates who perform very well in tech interviews and will have the respect of other engineers. They're not motivated by solving tech problems, however; they want to think about the product, talk to customers and have an input into how product decisions are made. So they're more oriented towards the customer and those UI/UX issues. The technical programmer: candidates who are the inverse of product programmers. They interview well and communicate clearly, but they aren't motivated to think about the UX or product decisions; they want to sink their teeth into hard technical problems. So the thing we saw with Binder is essentially something that speaks more to a technical programmer than to a product-oriented programmer. And for those roles, Triplebyte, in their very, very big study (I mean, 10,000 resumes is really a lot), found out that product programmers, at least in Silicon Valley, are the ones companies like the most, whereas the academic programmer is a bit ignored. I'm not sure if this applies completely to Europe, because we are talking here about Bay Area startups and in Europe the demand is a bit different, and I'm wondering whether it would be worth doing a similar study in Europe with companies that are more, let's say, normal. So, where to get engineers? There are a couple of ways. If you're a company that doesn't have a big brand name, I'd recommend having, for instance, a blog about technologies or about the IT scene in your city.
So I, for instance, when I moved to Zurich, the first thing I did was write a blog post about eight reasons why I moved to Switzerland to work in IT. That was almost two years ago, and I still get emails from people saying, hey, I want to move to Zurich, introduce me to a great company. So this blog post was the reason why I could start my recruiting company quite quickly. If you're a company, wherever you are, I'd recommend investing in a sort of online presence about technology. Of course, meetups, attending meetups in a t-shirt with your company on it, or, even better, organizing meetups, is an obvious way to get people interested in your company and make this sourcing problem easier. The problem most companies have is that they don't have enough applicants, so you have to do as much as you can to solve that. Employee referrals are underrated, in my opinion. Companies should pay, or somehow incentivize, people who bring in great hires. This is what Google does: if you join Google, the first thing they do is push you to spam your classmates and so on, at least according to what I've heard from some people. GitHub is another obvious way to find engineers. If you're using Flask a lot, it would be great to look at the people who contribute there and get them on board. There is a bit of a location problem, but you can use queries like this to solve it a little: you can use the API to query by location. In the GET request (I'm not sure if you can see it from the back) it's api.github.com/search/users with location Bilbao and language Python, and then you get the engineers in Bilbao who do Python; there is a small sketch of this right after this section. And then: don't be evil. Don't spam them. Rather, look at their blog, and if you send a cold mail, try to make it personalized and interesting for them to read. As a side project, the result of a hackathon, we built a web interface to do exactly this. You type in, say, JavaScript and Bilbao, you click "find engineers", and you get what you saw before, but nicely displayed, and then you can look up what those people did before and so on. That's my side project; I call it gitrecru.io, and I try to get the combination between automation and manual work in recruiting right. Right now I have lots of manual work in Zurich, I'm starting to do it in Munich, and we just build more and more tools in Python to automate the scraping of the API and do cool stuff with that, again without trying to be evil. So if you contact these people, it would be great to learn how to reach out. A great example of that I just looked up on my hard drive and anonymized quickly. It's from a very, very cool company, and they send cold mails. This cold mail is not short; it's rather long if you look at it. It goes: "I'm the co-founder and an engineer like you." So the first line tries to get in touch and say, we are the same; we're not a spammy LinkedIn recruiter who just blasts 10,000 people. Then you say why you like this particular person, so you try to make this part personalized. Here it's rather semi-personalized: "I came across your website, and based on your experience I think you could be a great fit for us." That's a bit lame. When I write these mails, I try to really look at the blog.
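Here is a rough sketch of the GitHub user search mentioned above, using requests; authentication and rate-limit handling are left out, and the exact query is just an example, not the speaker's own tool:

import requests

resp = requests.get(
    'https://api.github.com/search/users',
    params={'q': 'location:Bilbao language:python'},
)
resp.raise_for_status()

for user in resp.json().get('items', []):
    # each item carries the GitHub login and a link to the profile
    print(user['login'], user['html_url'])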
And if somebody has a blog entry about lambda expressions in Java 1.8, I try to tell them: look, this company also uses Java 1.8, you clearly care about this, and it might be a great fit. So it's a bit less lame; the first line should be somehow meaningful. Then you describe what your company does and what the candidate can do for you. This mail, the whole mail really, is actually pretty good. They developed it with the CTO and CEO, showed it to 30 people, A/B tested it, and then had a student send out those emails to scale it, in a way. Yeah, that was pretty efficient, I believe. And then, what do you do once you have the candidate and want to get to know them? Do a phone interview. In a phone interview, the goal is really to find out whether the candidate can't do anything at all. You ask questions that everybody should be able to answer, like the famous FizzBuzz coding challenge, where you just print the numbers 1 to 100 and check whether they're divisible by 3 or by 5. Easy stuff. Getting rid of the people who obviously cannot do anything is important and efficient. Then maybe you give a homework assignment, which shouldn't take ten days but rather two or three hours, and at the on-site you can talk about the homework. On top of that, give them maybe small coding assignments that you can do in pair programming, to see whether the person fits into the way the company works. Generally I see that the companies most successful at recruiting try to spend as much time with the candidate as they can. So as a candidate, what can you do? The obvious thing is to have a resume that resonates with the community. People, however, read resumes on autopilot; I think the average time spent by HR people, and even hiring managers, is maybe less than a minute. So it's best to make it really short: a page per decade, I would say, is reasonable. And contribution is more important than tech and frameworks: show how you contributed to the product, to the company, or to the project. That matters more than whether you used this or that framework. However, there is the problem with HR. At a bigger company with HR involved, you always have the problem that you have to go through this filter first, and then you obviously have to mention some frameworks so that your CV gets passed on to the hiring managers. The worst case I've experienced is when a great engineer doesn't get forwarded to the hiring manager just because the keywords are missing. That's really, really terrible and should not happen at all. So let's look at a couple of chunks of text, and let's decide together whether they're good or bad. The first one, the part of a resume where you talk about what you did, reads: "Designed software applications, including data modeling, software architecture design, software/hardware integration, user interface design, and database management." The second: "Created and launched a service that collects product opinions and recommendations from Twitter. The service finds related tweets, removes spam, analyzes sentiment, and creates a structured database of everything that was said about a particular product. The service is exposed as a consumer website and as widgets that can be embedded in online retail websites."
The third: "Developed [product name] using Python and Django for marketing, allowing enthusiasts to experience..." and so on. And the fourth: "Evaluated and identified operating-system network stack performance bottlenecks, per-packet processing latency overhead, and the scalability of different network I/O models via various system measurement and profiling techniques." So, which one do you like best? Three? Four? Two? It depends who's reading it: the fourth one was more for the CTO, the second one is more for someone who knows Django, or whatever. Yeah, so the gentleman said it depends who's reading this: number two is more for a CTO and number one is more for business people. That's obviously right. I would also prefer number two the most, because from this short text you can literally see what happened and what the person did. Number four is my second favourite. And one and three, especially one, I just don't get: there's no information in it, zero entropy, I don't know what the person did. Although you're right: if you have the HR filter, then the HR person might actually like number one more. This is why recruiting is hard, right? Yes? You don't see the business value. You see that the person did something, they burned some time, but you don't see the result. Right, from number one you don't see the result and you don't see the business value. I would agree, yes. So the best thing is just to show your resume to other people, probably other IT people, and then you can improve it. This might sound totally trivial, but: avoid typos. There is actually data that supports this a lot. This is a statistic from Aline Lerner's blog; I think she looked at about 8,000 resumes in her career as a tech recruiter, and the frequency of errors and typos was more strongly correlated with engineering performance than any other factor, and those other factors include a bachelor's from Stanford, so a BS in CS from a top school. The second biggest correlation was whether the person had worked at a top company before, like Google, Facebook or Twitter. If you want to train for interviews, for a Google-style interview or a company that does Google-style interviews, you just look at Cracking the Coding Interview, Interview Cake, interviewing.io; there are tons and tons of platforms. Depending on your level right now, I would budget between two weeks and two months of training every evening in order to perform well. For regular companies: learn to communicate exactly what you did, what you're proud of in your projects and what you contributed, and ask the companies how they will assess you and prepare accordingly. Don't be shy about asking exactly what the company is looking for. And this is the part I like most. What many, many candidates miss is to actually interview the company back, to find out whether it's a good place to work. There are a couple of questions that you have probably heard of; it's called the Joel Test. Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? Do you fix bugs before you write new code? Do you have an up-to-date schedule? Do you have a specification? Do programmers have quiet working conditions? This is what I check when I go to companies to talk to them about what they need in hiring.
So I look at this a lot, actually, because I see that this is one, I mean, I as a programmer, I want to have a quiet place to work. Do you use the best tools money can buy? Do you have testers? Do you do new candidates write code during interviews? Do you have hallway usability testing? And those, I mean, things you also find in stack overflow careers, so they do the same assessment. And questions I would also try to ask is if it's possible to see the source code of the company, which might be able, I mean, this would be something also to show off your code reading skills, right? And also some companies miss to invite you to go with the guys for a beer and stuff like that. So I would try to politely suggest this to get to know the company. So there are bonus questions that are really tricky. And I would ask them if you feel sympathy with the hiring manager and the interviewer. So you can ask, what is the most costly tech decision made earlier on that the company is living on right now? And where do product and feature ideas generally come from? So the first question will be more for the technical programmer. And the second question will be more for somebody who is like a product programmer. I mean, this categorization in the beginning is just like a funny, well, interesting way to look at engineers. So I kind of like to play with that. Generally, try not to ask questions that are super uninteresting about vacation days to the engineers, because their time is valuable. And just ask this to HR, please. So salary negotiations, probably something that is underestimated and should get more attention if you get a new job. So there's a couple of points that I like to recommend. So don't disclose your current salary if HR asks for it. So because, I mean, essentially this can be a benchmark against you, right? And postpone the discussion about money as long as you can, because, well, it's a benchmark. And if HR insists, then tell them you feel uncomfortable, because you want to find out how you can benefit the company and based on that, you can give a number. So I'm a fan of postponing that as much as possible. And if it's absolutely not possible, which might be the case for bigger companies, then tell them, OK, I want it to not be a benchmark. And hopefully it's fine then. So if you like, had the luck to get through the whole process, and now this important moment where salary comes, and they suggest your number, then in my experience, it's a dominant strategy if you try to just be silent. Because the other person out of social awkwardness will maybe continue talking. There is a blog post about how somebody made 15k more just by not jumping up and down after getting actually an offer that was on the upper scale of what she expected. So no matter what number you get, it's a business relationship. And in the end, you sell your time and you get the salary. So for them, you're a resource, in a way. And 5k more or less for a company that already went through the very painful process interviewing you. In the end, for them, it's absolutely irrelevant. 5k more or less. But for you, it makes the difference in five years or 10 years if you can buy a house or not. So for you, it's a big deal. And for them, it's not. So don't feel bad to ask for more. So the last thing I want to start the discussion with is long-term engineering career paths. So the old and rusty programmer, it's a thing that the very senior engineering roles at smaller or middle sized companies, they don't really exist. 
And people after 50, they don't really have a way to become better in their career unless they go to management. So this is something that bigger companies really solved. And this is something I tried to do research about at gitrecru.io, where I try to match companies with candidates. And if you have other ideas on recruiting, just say hi at gitrecru.io. And that's my ideas about recruiting. So. So we have a microphone. If you have any ideas that you want to contribute, you can just tell me now, or we can also chattel it. Thank you, Ivan. So we'll have a discussion if you have something to share or a question. Hi. In my experience, and maybe this is cultural, if I send an offer to a person without mentioning the salary, I get lots of rejections. So maybe this is a cultural thing, but here in Spain, companies are used to offer crap money. And if you don't start by giving a sensible offer, lots of candidates reject the offer. Have you found this, or is this something from cultural? That might be actually cultural. So I operate mainly in the Germanic part of Europe, so German-speaking countries. And I think they might be less willing to discuss salaries and talk about money compared to other cultures. At bigger places, there is also more, for instance, Zurich is very small. And the variance and standard deviation around them. So you have a mean salary, and you have a standard deviation. And in Zurich, it's super high. So for instance, I met senior engineers who make 130k, and same qualified people who make 70k. And I think that's a good thing. And same qualified people who make 70 at smaller companies because they are low-balled. And the city is small, and therefore you have not a standard number. In New York or London, you do have that. So there is less variance. And there, it might be more common also to ask for this, because then they want to check if you're aware of the market or not. Whereas in Zurich or smaller cities, it's like there is no extra numbers. It's much more dependent on the company and other factors. But yeah, it's cultural, I believe. Thanks for your talk. It was quite interesting to see the other side. So in particularly, I like when you mentioned that HR people spend less than a minute, typically, on reading a CV. Oh yeah, maybe less than 30 seconds. Maybe less than 30 seconds. So it's actually quite a similar case on our side. I guess developers spend like 20 seconds reading their recruitment mails because they all look the same. So then there's obviously a question, who should read those? Because developers don't get money for that, and HR people do. So the next question, actually, is if the HR people who do not read the CV carefully and still get money for that, do they not are incapable of getting those keywords out of the CV? Is it actually a good sign that they didn't reply to you, which means that it's probably a bad company and you wouldn't probably apply there anyway? Or you think that there could be bad HR people representing actually a good company? With your experience, is there a correlation between quality of reading out the CV and quality of the company? Probably there is a correlation. I mean, if you're good at one thing, you're good at other things. So this is actually a life question I'm asking myself. Because on the other hand, you have to say, OK, people who are good at one thing, they are generally good at other things. But then also there is this halo effect. 
So you shouldn't judge that, because somebody is good at running, they are also good at other things. This is a research question I think about a lot, actually, so I can't really answer that. (From the audience:) One thing to remember here is that everyone who works for a company has passed through HR first. So the way HR works actually determines who is working for the company. If you don't get along with HR, you may not actually fit into that company, because everyone else did. Oh, yeah. But if a company is small, there is no HR; HR comes in later. And I have an example: in Zurich, a great company just hired an internal recruiter, and this guy would turn candidates down. I recommended a guy who had worked at Mozilla, and he turned him down. And I was rather pissed, because this HR guy was a new hire and I was very sure this was a fit, and he was like, oh, whatever, Mozilla. I mean, anybody who is not able to write or read code is bad at assessing engineers. So I am even planning to do a workshop for HR people to learn about computer science, JavaScript, Python; I think it would be a two to three day course, and it might be successful. But then, who would attend this kind of training? The good HR people, right? So I believe you need some kind of training to assess engineers. And HR in more traditional companies thinks the way it works in other jobs, where it really is true that the candidates have to beg to get the job. But in our domain, engineering, it is the other way around: companies have to beg to get engineers. And many HR people don't get this. Hi. First, thanks for the talk. From my experience I can confirm that many of the things you recommended work. I also recommend sending out some test assignments before even looking at CVs; that is what we do, for instance: we receive applications, but we actually look at some work of the person first before checking the CV. But that is not the point I wanted to make. The point I wanted to make is that one way to suck at IT recruiting is to be exclusive, to only talk to a certain type of engineer. And of course that is about diversity, which is a very broad topic: it is about gender, about orientation, about being big or small, about race, about many things. One of the tricky things is that those very successful ways of recruiting for IT, like referrals, have a downside: if you get your employees to refer their friends and former colleagues, they will probably recommend people similar to themselves, so you will lack diversity. And also, if you go for beers with candidates, or the guys are going for a beer with candidates, you are missing out on opportunities to meet, informally, candidates from other backgrounds who don't drink alcohol for many reasons: could be health, could be religion, could be pregnancy, or whatever. So do you have any tricks to solve this type of issue? What we try is to have informal time for meeting candidates during the day, when people are coming to the office: lunch, a coffee break, something like this. But maybe you have some other ideas from your experience. Yeah, so there is a big tendency towards everything you said, to hire only university graduates from this one school and to be very narrow in one direction, and that is very bad and inefficient. So my dream as a tech recruiter is to help make the market more efficient, to make it perfect, so that everybody finds a job that he or she likes.
And this is a very important aspect you should focus on. And people who are not fitting your profile in order to look at all available candidates. So I totally support what you just said. No. Hi, Tom. Thanks again for the talk. It was great. So I have two questions. One is regarding your suggestion that when we get an offer, we should shut up. So does that apply also for emails? I mean, we should wait, or how does it work? Sometimes the offer comes in an email. What's that, again? The offer comes in an email. Yeah, so what do we do? So you don't, I mean, the social, you are quiet because you exploit social norms. If you don't reply to an email, I'm not sure if this also applies. You could, for instance, call back rather quickly and say that you are not sure about this at that point if she or he can repeat the aspect of the offer. And then you, again, have the situation where you can let the other person know that you are happy but not too happy. So in general, it's better to negotiate this in person. That's your advice. And I have another thing that I learned doing this business, is that if people don't reply your emails, that doesn't mean that they are not interested. So it happens so often. I'm hiring also for startups. So there was one founder that just recently we talked, and he was like, yeah, great. Yeah, you're a great recruiter. We want to work with you because you're a software engineer, blah, blah, blah. Send me an email. And then I'll send him an email. Hey, it was great chatting to you. It was super cool. I have this and this terms and conditions. Let's work together. Silence. Like for a week. So I write again. Hey, how are you? I just wanted to ask if you received my email, and we can continue further talking about this cooperation that we talked about last week. Silence. And I have one example where I did this for 32 times. And the third time, it was like, hey, Ivan, I was really sorry that I couldn't answer. I had to launch this rocket ship, and stuff like this happened here and there. And I'm so happy that you got back to me. Let's meet for dinner. So it's really important. So when people tell me I'm not interested, I stop mailing. But you have to keep up the email conversation because people have other things to do, and you're not the most important thing in their lives usually. So this is probably one of the biggest learnings doing this. Can I? What's that? Could you please repeat that speech that I got for a statistical length of your flight? I'm not sure, but I'll get to it. Could you please come back to the later? Yeah. Hi. So I'm guessing you might be a bit biased. But when for somebody seeking a job, would you suggest going through a recruiter as opposed to job offerings given directly by the companies? And my motivation for that is that so far my experience with recruiters was a bit dreadful. They were very aggressive, and I kind of felt like I'm in a blockbuster for a movie just with me being the main character, and that kind of felt kind of weird. And I had most much better. Is it OK to perceive the recruiter as the end, as somebody who is working for that company, as trying to get my salary lower as opposed? Are you talking about external recruiters, third party? Yeah. Yeah. So I think about to get recruit out of my project's name because of this. Because the word recruit, a recruiter has such a negative connotation that has a reason. That companies are, or recruiting companies are usually pretty bad. 
The whole sphere has a bad connotation because people do keyword matching, not respectful, well, towards the engineers. So I would suggest working with a recruiter if you like to know the market rates, which companies are good in, let's say, a new city, and detailed information about the market, then it could make sense to work with a recruiter. But then again, there is a big variance in quality. So I try to be on the top level in quality in what I do. So I also don't work with not so good companies. I only work with companies that I myself as an engineer would work at. So you have to decide for yourself. So if you want to move to another city and you want to know the market rates, it might not hurt to exploit the recruiters to get to know the market rates for salary and stuff. Thank you. Hello. I just had a question about, well, as a team, we interview people sometimes. And what are the good coding questions to ask? Well, it's a bit wide. But I read something about classical things like Fibonacci, et cetera, were not necessarily good. And well, it was quite an open question. But are there some things that you see more pertinent? Well, this is. So what are good questions for interviews? So the one part that always works is to ask about projects the person did. So this is even a question. I like the questions most that are not getting easier when you know them. And those are questions like if you can talk about a project that you did in the past. And if you're familiar with the technology stack, you can dive deeper in an area. And you see how much deep you can go with what you're asking. And this is a way that always works. Yeah, theoretical questions, maybe not. Then. Yeah, those Google questions are good for Google and other companies that can afford to ask them. I mean, yeah. So communication opener, is it a recording of projects that you're talking about? It's also a good communication opener to open the interview with a question about projects that the interviewer has done before. So they have something to comfortably talk about. Exactly. Yes, absolutely. Hi. There is an area we not cover here. And I want to ask, can you share some tips how to find a good freelancer? How to find a good freelancer. He's here. Wear a t-shirt that says I'm looking for now. I don't. I'm not sure there are different platforms on the internet. I think the most important part is the freelancers to build trust as fast as you can. So for instance, if I need support with some engineering, I always go on Upwork, on similar platforms. And I look only for people from Ukraine. Because then I can say, hey, I've been born in Kiev. Blah, blah, blah, blah, blah, blah, about our common heritage. And then there is some sort of trust. And this is a little thing that I do. Do we have any other questions? Two, three. How much time we've left? Five minutes. OK, so then we make one, two, three. But the lady was first, actually. I think you don't you work for hire.com? This is not a pitch for hired at all. Although I do recommend that you all go to it. But we're only in London, Paris, and Berlin. So if you're not looking to go there, then don't worry about it. So my experience mostly comes from the US. I only moved to Europe about a year ago. But I wanted to answer. Sorry, I'll talk later. I wanted to answer your question about external recruiters. What are the problems with why they're so aggressive and horrible is because most of the time they don't actually get a salary. 
Their salary comes entirely from commission based on when they place you. So they're going to be as aggressive as they can to get you to accept an offer. And to get you to accept an offer with the highest salary you can possibly get because that increases their livelihood, of course. If they don't make a single placement in a year, they don't make any money that year, which really sucks. I also want to say thank you for the presentation and that if you all are very interested in the data behind recruiting, he had mentioned Aline Lerner's blog. She is a phenomenal engineer. She has about 10 years of Python data science background. And her blog is all about that in ways the best practices when it comes to actually sourcing those emails and sourcing from LinkedIn. If you don't want to use any external platforms, it's really fantastic. And my last question to you, sorry, really quickly, was what's the most creative interview process that you've ever seen? Because a lot of the times when a startup does interview, these one-on-one interview processes I've noticed just don't work out quite well. It's very intimidating for the individual. And if you don't know the runtime of a specific algorithm, you shouldn't have to know that on the spot, whereas some people like Google really do think that that's important. So have you seen any that are particularly outstanding that really get to the programming techniques of the startup itself? So I mean, I'm not sure about the process, but the way people were hired. So my absolutely favorite story is a US kid. He's actually on this conference and not in this room. So he was essentially dreaming of becoming a software engineer until he was 12. And he became homeless when he was 14. So really, really poor growing up in Illinois and really living on the streets until he was 23. And then he started to work as a mechanic and then looked at TV that, well, software engineers actually make decent money. And then he found a C++ book in a thrift store for $1.50. And every night after doing his mechanics job, he did this C++ book where everything was like, this code is deprecated, this code is deprecated, because the book was so old. And then he somehow ended up on a conference where he made a German company. And this German company, seven months later, they hired him right away after he had enough experience in Python and Django. So this is like, this goes back to what you said, like exceptional candidates that are exceptional in their own way, like being homeless and stuff. This experience is like maybe something special. So there are two more questions over here in front. Hi, I have two questions. First, is the GitHub search available online, or do you have it just locally? So the API can be crawled? Oh, yeah, but about the website. Is it available online, or the Hackathon website you've built? This one? Yeah. No, it's localhost. And the second question I have is about actually your website. You mentioned that GitHub's crude services and trainings, your site, are free. But we take 20% of each revenue that you make with ours. So what does that mean? Oh, OK. So what I try to build is, so Git recruit should become a tool to enable to find engineers. But I want to have partners in each city that use the Git recruit tool. So for instance, this tool and other tools we build, like a CRM and a applicants tracking system, that you can use for free. So this is what I'm building with a friend. 
And then the person who does the recruiting in Bilbao, in Milan, in Munich, he gets all this training and the software for free. And then Git recruit headquarters gets a cut of this. I'm not sure how much the cut is, or stuff like that. So this guy already looked on the website, and this was a very specific answer. So sorry for the rest of you. Is there any questions? So no more questions? Thank you, Ivan. And thank you, everybody, for your interest.
Iwan Gulenko - How to make IT-recruiting suck less. I am a programmer and I am on a mission to make IT-recruiting suck less. This talk should be useful for both hiring managers and job-seekers. We will assess the status-quo of hiring engineers and talk about resumes, coding questions and tasks that firms make up to assess engineers. Also, we'll discuss salary negotiation best-practises from a candidate perspective.
10.5446/21156 (DOI)
Now we have two speakers, Kucan and Peeters, about algorithmic trading with Python. Very interesting. Thank you. Hi. This talk is on algorithmic trading with Python. Just to clarify some terms: by trading, I mean buying and selling financial instruments on financial exchanges. By algorithmic, I mean there is a computer program running some kind of algorithm that decides what to buy and what to sell in these markets. At Winton Capital, we manage about $35 billion using a platform primarily, or at least largely, constructed of Python. We also use a lot of Python for research and data analysis around those activities. The talk is going to go roughly as follows: we'll do a quick company overview, a little bit of an overview of our research activities and the trading pipeline itself, and then Joris is going to go into quite a bit of detail about how and where we use Python. My name is Iztok Kucan. I'm the head of Core Technology at Winton, and Joris is the head of a very exciting new project we have, the Data Pipeline project, and in particular of the heavy use of Python there. If you've come across Winton in the past, you may have seen us called a quant fund, an algo-trading outfit, a hedge fund, a commodity trading advisor. All of those are valid, but I think Winton would much rather be described as an investment manager, a company that uses the scientific method to conduct investment. What do we mean by scientific? Well, a lot of use of empiricism, hypothesis testing, experiment construction and statistical inference in how we derive the strategies that we then trade on. We have around 100 researchers, which is about a quarter of the company, typically with a background in academia: academics, ex-academics or postdocs. These are organized in teams, and a lot of the activity is reviewed, so it is a fairly open process, how we arrive at the signals. The other side of the company is engineering, which again is a fairly empirical discipline itself. Geographically, we're primarily a UK company, roughly 400 staff in the UK, mainly in London and some in Oxford, but we're expanding globally: four offices in Asia and two in the US. A lot of those offices are not just sales offices; a lot of them are actively growing. For example, we have a new Data Labs outfit in San Francisco looking at esoteric data. Okay, so this is a Python conference, so what about Python and Winton? Winton has been active for about 20 years, and for the initial few years the systems were far simpler than they are now and effectively ran off an Excel spreadsheet. Then, of course, C++ extensions started creeping into that Excel spreadsheet, and gradually those things were taken out of Excel and formalized as a set of objects for the simulation framework. That was, and remains, the core modelling tool and also the execution tool for our trading systems. But we found that as the framework gained flexibility, we needed Python to start combining these objects in a more flexible way. For example, if I want to compute a delta series and then a volatility series, I would be using the same two objects as if I were going to compute a volatility series and then a delta on that volatility; I just want to combine them in a different manner. So Python was quite useful to do that.
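As an aside, that kind of composition, the same building blocks chained in either order, is exactly what a scripting layer makes trivial. Here is a minimal illustrative sketch in plain pandas; these are not Winton's actual objects, and the function names are made up for the example:

    import numpy as np
    import pandas as pd

    def delta(series):
        # day-over-day change of a series
        return series.diff()

    def volatility(series, window=20):
        # rolling standard deviation as a crude volatility proxy
        return series.rolling(window).std()

    prices = pd.Series(100.0 + np.cumsum(np.random.randn(250)))

    vol_of_delta = volatility(delta(prices))   # delta first, then volatility of the deltas
    delta_of_vol = delta(volatility(prices))   # volatility first, then delta of the volatility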
As soon as we started using Python in that manner, it became very attractive for us to start writing strategies, or parts of strategies, in Python themselves. And from then on it never really stopped: over the last 10 years we've adopted Python for constructing the trading platform, but also increasingly for data analysis and research. I keep using these two terms, research and investment technology, because we have quite a strict distinction between what is exploratory activity and what is trading activity. Exploratory activity, research, is looking into things that may lead to something, or often will not. The research itself is conducted along three lines, I would say. Core research, which is research into signals and, let's call them, market behaviours. Data research, which is research into data and the properties of that data, and, in an extended sense, deriving data analytics like volatility profiles, volume profiles and correlations from that data directly. And, as I said before, we now have a Data Labs section in San Francisco which looks at esoteric, speculative data sets like satellite imagery or the deep dark corners of the internet. Once signals are derived, we transfer them into the investment technology section. That is a much more rigorous exercise, where we have a quite static trading pipeline, and the key there is that you can do things in a very repeatable, very reliable, very secure manner, with sign-off. The pipeline itself is composed of roughly four stages; let's call them data management, signal generation, order management and post-trade monitoring. Python is used a lot in research, but it is also now used extensively in the data management and signal generation parts of the trading pipeline. In data management, the typical things we do are obtaining large sets of data, cleaning them, and transforming them into the things we need; we use things like versioning to make sure that we can repeatedly see data as it changes. Python underlies all of that architecture. For the signal generation part of the pipeline we also use Python extensively: Python still drives simulation, which is a time-series transformation engine, and increasingly so, and Python is also the interface to a data storage engine called the time series store. Joris is going to go into that in a bit more detail. Right, so I'll give a bit more detail about how we actually use Python, some low-level detail about where exactly it sits in our stack. The main reason we use Python, really, is that it presents quite a friendly face to research. Our low-level code is typically all in C++, so execution, our simulation platform; that is not something you want a researcher to write. So we expose all our code through APIs that are typically in Python. There are a few other options, but Python is definitely the main choice. And it is not just for research: because it is such a nice, programmatic interface, we use it for monitoring, typically to serve web services as well, and directly in signal generation. The reasons we chose Python are extremely well known. It is very easy to learn; if you don't know Python, it probably won't be long before you do. And it comes with a lot of support for data analysis and visualization, so as a researcher it is quite nice to get all those batteries included. So this is a fairly large-scale overview of our trading pipeline. There are a few core principles to it. The whole thing is event-driven.
So something happens, which causes something else to happen. In this case, for example, we get our data from Bloomberg. As soon as the data is there, we automatically construct our equities prices or futures prices, and once that is done, all our strategies automatically kick off. That kind of event-driven flow really sits at the core of Winton's technology these days. And then, as Iztok mentioned, we have the simulation, which sits at the bottom right there. So while Winton as a whole is pretty much a real-time graph, it just sits there, all services listening for things to happen, the simulation is kind of an in-memory, offline graph. It is really designed for time-series analysis: you spin up a trading system, that kicks off one of these simulations, you run it, you can tear it down, you can serialize it. That is the other main piece of technology that we have. So I'll give a bit more detail about both the real-time graph, which we call Comets, and the back-test simulations used to test strategies, which are written in C++. First, simulation. It is written entirely in C++ and has been going for about 10 years now, pretty much right after we moved away from Excel. If you just ignore the left-hand side of the slide, it is a similar concept to what appears these days in things like TensorFlow. Essentially it is a graph, very well optimized, and in our case strongly typed, so you can't just feed anything into anything; it is strongly typed data. There is an example of a graph there, quite a simple one: two data series feed into something like a formula, which could be the sum of the two series, and then you calculate that thing. Now, that is all running remotely on a calculation server. Typically these things can involve thousands, tens of thousands of assets; you don't want to run that on your local machine, so we run it on big calculation servers. But on the left-hand side we expose the Python client, so any user, any researcher, can launch or spawn one of these simulations, connect to it, and have full control over the remote simulation. There is actually an example on the left; it is a real Python script. The first thing it does is start the remote session, which causes one of these simulations to be constructed and launched on the server. Then it constructs the two time series, constructs a formula, and calculates it. That is all you have to write, and you have full control over the simulation. This means researchers don't need to know any C++. Anything they need really is in the simulation: it comes with trading systems, it comes with universes, all the kinds of things we need. It essentially gives them fairly high-level control over anything they need to do. A little bit about the technology; I'm not going to go too deep here. The Python bindings are extremely lightweight, so they don't know anything about the simulation per se. As soon as they launch a simulation, they get everything they need from that simulation: they populate your Python client with all the objects, the classes are dynamically generated, the objects are spawned into your namespace. If you create new objects, they are created both on the remote side and on the local client.
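The client script being described might look roughly like the following. This is a hypothetical reconstruction from the description above: the module, class and method names are illustrative only, not Winton's real API.

    # Hypothetical sketch of the remote-simulation workflow; all names are illustrative.
    from simulation_client import Session        # assumed thin Python bindings

    session = Session.start_remote()             # spawns a simulation on a calculation server

    a = session.TimeSeries("asset_a.prices")     # two input series, created remotely
    b = session.TimeSeries("asset_b.prices")
    total = session.Formula("a + b", inputs=[a, b])   # a graph node combining them

    result = session.calculate(total)            # evaluated on the server, returned as a pandas Series
    print(result.tail())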
Essentially it gives you full control locally, as if you were working directly on the remote simulation. It is very friendly in Python: all the data is returned as pandas Series, DataFrames, all that kind of stuff. One thing you can't do with the simulation bindings, though, is go beyond controlling the graph; you are limited to things like formulas, value-based series, universes, or the particular trading systems that technology has implemented in C++. If you have a completely outlandish trading system that you want to try and plug into this graph, and it doesn't fit that formula-and-data mould, you are kind of stuck. And for that we designed embedded Python. What you can actually do is write, in Python, one of these objects that runs directly in the simulation graph. From then on, anybody can launch it remotely and run your trading system, and you don't have to write any C++; you just contribute your Python code and everybody can run it. It wouldn't normally go into trading; this is more intended for rapid prototyping. A researcher can pretty much build their trading system in Python and test it in the simulation, which means they can back-test that system from 1970 to now. They don't have to write C++, where you might often have to wait a month or two for technology to actually implement it, which is not a good turnaround. So they can just build their thing, run it, test it, and if we're happy with it, we can still implement it in C++ afterwards. That is the idea. It is definitely oriented towards rapid prototyping, although some of it is actually in trading as well. The technology there: unsurprisingly, the C++ executable hosts a Python interpreter. We use Boost.Python to do the marshalling, and all the data is exposed as NumPy arrays, so we use the NumPy C API for performance. Essentially, through this embedded Python you have full control for making your own Python trading system available in the C++ back-end. It is extremely powerful. So that is the simulation. As Iztok mentioned, we have this problem that we need to shift lots of time series back and forth. There is an enormous amount of time series to be saved; we have hundreds of thousands of assets, and you need to be able to load and write them to a database very quickly. Things like SQL are way too slow, because we do so much historical back-testing: we have to load all the data for 300,000 securities from 1970 to now into memory, or distribute it, and then write the results back. So we built, because at the time we started this there wasn't really a good alternative, our own versioned and de-duplicated data store. I'm not going to go into too much detail, but it is a columnar format, so it is super effective for storing lots and lots of time series and lots of columns, and they're typed. It is backed by MongoDB; that is kind of an implementation detail, and anything that can store a key to a binary blob would have worked. And the key idea, really, is immutable data. One thing we don't want is, once you've written your data frame, if you do it in Python, anything other than getting exactly that data frame back. We don't want the data to change. If you've written something, it can never change; you always get it back exactly like that. That really sits at the core of our strategies: obviously, if you're testing a strategy, you don't want the data changing underneath you. You want reproducibility.
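The contract of that store, versioned, immutable, data frames in and data frames out, can be illustrated with a toy in-memory sketch. This is purely illustrative; the real store is a columnar format on top of MongoDB, not a Python dict.

    import pandas as pd

    class VersionedStore:
        # Toy illustration of an immutable, versioned time-series store:
        # writes never overwrite, and a read always returns exactly what was written.
        def __init__(self):
            self._data = {}      # (key, version) -> DataFrame snapshot
            self._latest = {}    # key -> latest version number

        def write(self, key, frame):
            version = self._latest.get(key, 0) + 1
            self._data[(key, version)] = frame.copy()   # snapshot so later edits cannot leak in
            self._latest[key] = version
            return version

        def read(self, key, version=None):
            version = version or self._latest[key]
            return self._data[(key, version)].copy()

    store = VersionedStore()
    v1 = store.write("EURUSD.daily", pd.DataFrame({"close": [1.10, 1.11, 1.09]}))
    v2 = store.write("EURUSD.daily", pd.DataFrame({"close": [1.10, 1.11, 1.09, 1.12]}))
    assert len(store.read("EURUSD.daily", version=v1)) == 3   # version 1 is still exactly as written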
You want to know exactly what you did, and to be able to do it the same way forever. So this time-series storage was really transformative; it actually opened up a lot of possibilities. The technology there follows the same pattern again: we tend to do something low level in a really optimized way, in C or C++, and then we expose various high-level libraries to make it more accessible to users. The store here is backed by MongoDB. There is a C library that sits on top of it, which deals with the columnar storage so that we can store things very effectively, and then we build very thin libraries on top of that: C++, C# and Python here, and we're building a JVM one as well. These can be accessed by different kinds of technologies. C++ would typically be the simulation, but a researcher might use the Python library, and rather than having to deal with the low-level columnar storage, a researcher can just put data frames in there. They get translated into C arrays and then given back to you as data frames. Some small implementation details about how we've done this: we use CFFI, the C foreign function interface. And the nice thing is that it is such a friendly Python interface: you give it a data frame and you get a data frame back. You don't need to know about any table formats or type conversions or any of that kind of stuff. Comet transforms: like I told you in the beginning, Winton is essentially a graph, and simulations and services are sitting there waiting for stuff to happen. They react to inputs, they produce outputs, the next thing listens to those outputs and carries on. This is what we call the Comet transform system. It is microservice-based; the transforms sit there waiting on a topic on a bus, which is Kafka, actually. There is an example there, and it is super simple: we get the data from Bloomberg, we write it to the store, we announce that we've done that, and off it goes. The equities transformation picks that up, writes to the store, and as soon as that is done our strategies can start kicking in. We've got loads of strategies, so there might be five strategies waiting for the equities prices; they are all going to kick off simultaneously, distributed, and as soon as they're done we can start going into execution. This is the next-to-last slide, a little bit about the technology of the Comet transforms, bringing it all together. All the red things are where we use Python; everything that is not red is low level and exposes Python as its external API. All our events are posted on Kafka, and we use protobufs throughout for the communication, which is really nice because they are strongly typed and you can evolve your schema. Our service stack is currently in C#, a proprietary service stack that essentially deals with receiving and translating the protobufs. But the Comet transforms themselves, hosted by the C# stack, are Python interpreters. So anybody can write something and become part of the graph that is Winton just by writing some Python code. That Python interpreter might be a strategy that launches a simulation, using the same bindings I explained in the beginning. That simulation can host your own trading system written in embedded Python. And the simulation will read and write its data from the version store, our efficient way of storing time series.
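A Comet-style transform of the kind just described, wait on a topic, do the work, announce completion, could be sketched in a few lines with the kafka-python package. The topic names and the transform itself are made up for illustration; the real system also uses protobuf payloads and a C# service host rather than a bare script.

    from kafka import KafkaConsumer, KafkaProducer   # pip install kafka-python

    def run_equities_transform(event):
        # placeholder: read raw prices from the store, clean/transform, write results back
        pass

    consumer = KafkaConsumer("prices.bloomberg.ready",
                             bootstrap_servers="broker:9092",
                             group_id="equities-transform")
    producer = KafkaProducer(bootstrap_servers="broker:9092")

    for message in consumer:
        run_equities_transform(message.value)               # react to the upstream event
        producer.send("prices.equities.ready", b"done")     # announce so downstream consumers kick off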
That data can then again be read through the Python store library, so anybody can read the data that has been written by the simulation; everybody has access to it through the Python store libraries. And while all of this is quite complicated, with a lot of technologies in play, the theme is always roughly the same. Low-level code that is really optimized tends to be written in C or C++. The implementation details are quite proprietary; it can be protobuf, it can be Kafka. But as a user, you are only exposed to well-chosen APIs that we have defined. They are quite flexible and programmatic, because of Python: you can do anything you want, but it is tailored and it is accessible. And by providing that as the interface, it is still extremely performant. We find that this works really well. So, roughly, as Iztok said, it is all good. We think this works really well; we are quite happy with the system. It is Python throughout: if you are a researcher, or on the business side, you wouldn't see anything else but Python. You don't even need to know that there is any C code under there. It is the primary interface really for data management and signal generation, and because it gives such fine-grained control, you don't really need anything else. There is no need to go into C or C++; you can, and that is what technology does when things need to go really fast, but as a researcher you typically don't need it. You can define all your own data transformations, do whatever you want with the data, store data, retrieve data, and you are guaranteed it will never change thanks to the time series store. That, as discussed, is backed by very low-level C++ code that is implemented and owned by technology. And the main reason we are doing all this is that Python is so great for analysis, visualization and rapid prototyping. Maintainability too: because it is such a programmatic interface to all the underlying code, you can write web services, you can write monitoring systems, everybody can start contributing in Python, which means we have an enormous view of what is actually going on in Winton's trading systems. Yeah, so it is all good, and that is also all I had. So thank you. Yes, one minute. OK, oh, many questions. I'll just start from here. Hi, thanks for the talk. So, are you using C and C++ code because you are in high-frequency trading, or is this legacy code? No, neither, actually. We are in low-frequency trading, and it is definitely not legacy. Even though it is low frequency, we do continuous historical back-testing, which means that even if we only trade on one new data point, we want to be able to very quickly re-test the simulation all the way from 1970 to now. So yeah, that is the main reason it has to go fast: we test the whole of history. But we do trade over periods of months. So common tools like pandas and similar don't meet those requirements? Sorry? Tools like pandas and similar don't meet the requirements for huge back-testing? I think we found that it doesn't quite cover our needs. That said, the trading systems that researchers contribute are in pandas, and a number of them do go into trading. Things like the tracking-error control run on pandas, in trading; they don't actually have a C back end.
But we find that when things need to go really fast, and we need that kind of speed, the C implementation is still considerably faster, to the extent that it is worth doing. Hi. Have you open-sourced your time series store? And if you haven't, why not? Open-sourced which one? Your time series store, for data. No, we haven't open-sourced that, for no particular reason. It is actually only quite recently that we started looking into open source, and I think this is on the list of things that could potentially be open-sourced. There is nothing particularly trading-specific about it; it is very generally applicable. So yeah, that might come up. More questions? So, your slides suggest Python 2, probably 2.7. It is exactly 2.7. Why not 3? And what is the incremental cost of migrating to 3? So there is an enormous legacy code base in Python 2, and upgrading it, because we have all the C extensions, is not trivial. But it is being actively pursued now: all the new code we are going to start developing will be in Python 3, and then we should gradually migrate the rest. The problem is there is not really an enormous business case right now; it is a lot of effort and we don't necessarily get a lot back at this very moment. But we definitely realize that, especially as Python 2 support is going to be dropped, we will have to have moved to Python 3, so that is going to be our main driver. And obviously there are a lot of features that would be considerably better, especially around multiprocessing and such. That is for me personally, at least; we need a good business case to move, really. Hi there, thanks for the presentation. Which exchanges do you trade on, and how many bytes of historical data do you have? Which exchanges? We trade on all the major exchanges, but that is more Iztok's area. We are on roughly 20 or 30-some exchanges; I won't go into the full list, but American equities, European equities, Asian equities, futures, FX now, and fixed income as well. How much data do we have? Depending on how you measure it: we probably write a billion numbers a day, and we have petabyte-class total capacity. How much of it we actually need is a different story, but that is how much we have. Thank you for your great talk. I have two questions. Is there any authentication or authorization system, so that some researchers can see only certain machines or data, something like that? Yes. And how does it work? Does the API do it? We have our own proprietary authorization system; it is basically token-based. Then we have SQL Server, where the access is backed by Microsoft authentication, and the Mongo database, which is backed by certificates, so certificate-based authorization. OK. And my second question is: when a researcher wants data, I guess it goes through a microservice and does scroll or scan operations, so all your data is going over HTTP? Then how is it so fast, when millions of events can be sent? A researcher actually goes directly to the store; they make a direct connection to Mongo, so it doesn't have to be mediated. It can be, and we are actually considering building high-performance services in the middle, gRPC-based or something, but right now the library we expose to researchers, which sits on top of Mongo, makes a direct connection to Mongo. So that is why it is so fast. Yes. So how does authentication work there, via the certificates?
That is via the certificates. MongoDB? Yes. OK. Thank you. More questions? Just a fairly semi-related question, but have cryptocurrencies or anything like that crossed your radar yet? Crossed the radar and then left it, I guess. It is not something we do yet. Thank you. More questions? We still have some time. I have a question, then, if nobody else does. Why is there a Kafka thing and a protobuf? What is the deal with that one? There are no other arrows than just this one. Kafka is the message bus there; you see the message bus at the top. We use Kafka to back all our events. We have pub-sub style events, which means any consumer can connect to any event that happens, so we needed a pub-sub message bus, essentially, and we chose Kafka. And we put protobufs on the wire because they are strongly typed and actually fairly compact, so if we need to send a lot of data over it, Kafka plus protobuf is a really good combination. So essentially, Kafka sits right at the core of Winton: all the events go through Kafka, and everybody can plug in. Can you give an example of such an event? So is it a trade, or what is it? It happens at different scales. One thing that can be announced is Bloomberg saying, I'm done, I've actually downloaded the Bloomberg data, go find it in the store. But at a lower level we actually send every single piece of information across Kafka as well. All the data that we ingest, that we download, is streamed over Kafka, and then depending on who is interested it can be stored in Mongo, stored somewhere else, actually transformed, or we can run tests on it. So all the data goes over the bus as events as well. Thank you again. I just wanted to know, why aren't you using any event-driven infrastructure such as Apache Storm or something like that? It looks like a perfect solution. It is possible, yeah. We are actually investigating things like Storm, Spark, Flink, all of them. Do you have something to say about that? We are a company that is 20 years old. So there is a lot of technology that comes onto the radar that, of course, you would immediately like to have, but you can't, because it takes time to migrate, and you need the business case to migrate as well. Something just being sexy is not a business case. Of course, having $35 billion under management also means there is a lot of risk. Making a small mistake on such an investment just so that you can get sexy new technologies is, again, not something that is very easy to justify. So we do like adopting new technology, but we have to be cautious at the same time. Hi. So I'm interested in how you are testing technical systems like this, because I could argue that there are a thousand things that can go wrong. Yeah. It is a distributed system, a real-time distributed system. So what is your approach to testing? As I said initially, we gain so much from the immutability of data; that is definitely one thing. If you know that your data is not going to change, then you don't have race conditions where this might need to be written before that reads. So immutability is definitely one of the core principles. And then everything is strictly event-driven, strictly a DAG, which means that because everything is defined by the events, you can write extremely good tests; the whole history can be reconstructed from the events at Winton. More than that, all the simulations are run every day for the entire history.
So for example, for our simulation today, we will compare the results up to the previous point, let's say yesterday, and we will make sure that every single data point in the entire history of the simulation is the same. And the incremental daily step is also usually human-verified. There is still some human interaction, not because it is strictly needed, but because it is a sign-off process, so there is a checkpoint where a human signs it off. More questions? Yes? Just say if you're tired. I can go on forever. Thank you. There was a slide with C#, C++ and Python all together, and probably the services one as well. Anyway, the question is how the communication is done between the components in the different languages. Sorry, can you repeat that? So once again, the question is how the communication is done between the libraries in different languages. Between the libraries, or in the company? Between the libraries, in C++, in Python, and C. I don't know the details for all of them. I do know that in Python we use CFFI, the C foreign function interface. Essentially what we always aim for is a fairly simple C89 interface to the C library, which encapsulates all the logic. And then the other libraries are built on top of that. They tend to be fairly high level and just do the mediation, the marshalling of data. So all the logic tends to be in the lowest layer, and the other layers just represent the data in something that is useful for the language itself. Does that answer it, roughly? OK. No more... ah, there is still one question. We are also at the booth, by the way, if you have more questions afterwards. Yeah, maybe this is then the last question. Hello. I would like to ask if you save some pre-computed data over the history, save some signals: you take the source data and compute something from it over all of history. Do you recompute everything every time, every day? Yeah, so that is what Iztok alluded to. In order to make sure, and this fits your question, that nothing has gone wrong in the meantime, that no bug has been introduced, we rerun everything from the beginning of history pretty much to now, to yesterday. We check that everything is exactly the same, and only then do we allow the newly generated points to go through. Yes. It gives us an enormous amount of certainty that nothing has gone wrong. So you don't have problems with the immutable data, that it cannot change when you improve your algorithm or find an error in the algorithm? It can happen, and then we re-baseline, essentially. So if we do introduce a change, it has to be in a controlled fashion. The only thing we want to avoid is uncontrolled change. But of course, if there is an improvement, then we will re-baseline the system. Thank you. OK. So, I think, a very nice talk, very interesting. Thank you. Thank you.
iztok kucan/Joris Peeters - Algorithmic Trading with Python This is a look behind the scenes at Winton Capital Management- one of Europe’s most successful systematic investment managers. The talk will mainly focus on how Python gives researchers fine-grained control over the data and trading systems, without requiring them to interact directly with the underlying, highly-optimised technology. ----- Have you ever wondered what technologies are used in a systematic trading system that utilises computer models and accounts for the majority of trading on the stock market? This is a look behind the scenes at Winton Capital Management- one of Europe’s most successful systematic investment managers. In this talk, we’ll run through an overview of Winton’s trading infrastructure, including data management, signal generation and execution of orders on global exchanges. The talk will mainly focus on how Python gives researchers fine-grained control over the data and trading systems, without requiring them to interact directly with the underlying, highly- optimised technology.
10.5446/21158 (DOI)
I'm really happy to introduce James Rowlands. He has been, we've just been talking, and I think it's so amazing. He's been working in the gravitational waves community for 18 years, like all his professional life, and he's going to tell you more about his research and the project and discoveries and how Python was involved. So please very welcome James Rowlands. Thank you. Hello, can anybody hear me? How's it going? Ooh, very good. All right, so yes, I'm James Rowlands. I work for the LIGO project, which is a project to detect gravitational waves. Brief overview, it consists of these two big interferometers, which I'll describe in a minute. It's an NSF project in the United States, although we have many international collaborators, 60 institutions, 1,000 individuals around the world. The two observatories are separated. One is in Washington State, and the other is Louisiana, 3,000 kilometers, 10 milliseconds. All right, so a little background about what are gravitational waves. So Einstein's general theory of relativity, which is one of the most successful physics theories in history, basically. It's been 100 years, this year is the anniversary, 100-year anniversary, and it's been basically unaltered since he gave birth to it. It's really incredible. And basically, the idea is that the curvature, that gravity is curvature of space-time. So masses in space cause the space-time to curve, and then the curved space-time causes masses to change their trajectory. And this is the only equation in the talk, I promise, and this is the Einstein's equation. It's the curvature is described by this mathematical object called a tensor over here, and then the mass-energy content, like what is in space is on the other side. And it's related by this very, very small factor, which indicates that space is very, very stiff. It's very hard to bend. So one of the interesting predictions of general relativity is that it predicts there should be waves of gravity. And I like this animation because it looks kind of a little bit sexy. So you can imagine that what's happening here is that through the central axis of this tube is the movement of a gravitational wave. A wave is propagating through this tube, or in this direction, and what you can see, what the tube is doing is showing how the surrounding space-time is going to move. So perpendicular to the direction of travel, the space is compressing in one dimension and expanding in the other, and then as the wave moves, it does the opposite. So it kind of does that squeezing and stretching motion. So if you look at the cross-section of the tube and you put some masses at the edges, you'll see that the masses, because the space is bending, are going to move with the space. So how can we use this effect to look for the gravitational waves? So what we do is we make this device called an interferometer. We can use light to measure these distances in space. So you imagine these two. We have here a laser, a beam splitter, and then two mirrors. And if we shoot the laser beam at the beam splitter, it splits the light. The light goes to the two mirrors and bounces off, then comes back to the beam splitter where it is recombined. And what happens is you take this very interesting property of light, which is that it can interfere with itself, and what you can do is very precisely measure the relative separation of these two end mirrors. So you can see the waves going in the two arms. 
They're out of phase at the output port, but then when the end mirrors move, they become in phase, and so you get the light goes up and down at the output port. It's a very simple, elegant concept. And so how do we use this? So we take this very simple concept of laser beam mirrors, and after many, many years of development, we made it much more complicated, because, as I'll explain shortly, it's this very complicated optical system now, and the point of all of these extra optics, these extra folding of the light, is to try to amplify the signal. So one thing I'll just point out is that instead of just having the light go down to the end and bounce off a mirror and come back, we actually have cavities in the arms, which allow the light to build up. It bounces in the arms many, many times, and that amplifies the effect of the end mirrors moving. And so then eventually, after many decades of research, this idea, by the way, to detect gravitational waves with these Michelson interferometers came in the 60s. And so the physicists who kind of came up with this idea started to make very small experiments that were only a meter. Well, initially they tried to detect them with bars, but I won't go into the whole history, but then they started to make small interferometers, and then they kept getting bigger and bigger and bigger until we got to LIGO, where the length of these arms is four kilometers long. So this from here all the way to the end here is four kilometers. We have two detectors, like I said. One is in the desert in Washington, and the other is in the swamp in Louisiana. So this is what it looks like inside. So this here is inside this central building. Okay, if we kind of zoom on to the inside there. It's a big vacuum system. So the whole interferometer is enclosed in this big vacuum. These big chambers here hold the mirrors. Here's a person for scale. So it's very big inside here. So this is one of the mirrors. So you can see at the bottom here, that's the mirror. The red thing is the mirror. It's about this big. It weighs 40 kilograms. It's suspended, so it's not firmly attached to the ground because of course the ground moves a lot. And we don't want the motion of the ground to confuse the instrument and make it think it's a gravitational wave. So we isolate the mirrors from the ground with these very complicated seismic isolation systems. There's actually this mirror is hung from this mass, which is hung from another mass here, which is hung from another mass here, which is hung from this table, which has active seismic isolation system. And so this is one of the core mirrors. This is our laser, which can output over 100 watts of continuous light power. That doesn't sound like much because you think of a light bulb in your house as 100 watts, but the light bulb is outputting light in all directions. And this laser beam is focused into a very tiny spot. And if you were to get hit by it, it would not be fun for you. Here's what it looks like at the output of the interferometer. So actually where we make the detection is in this assembly here, where what we call the photo detector that measures the light is inside here. And so the light goes through the whole interferometer and comes to this chamber where it bounces around on some more optics and eventually is caught in this assembly here. And here's another picture. I think this is another very sexy picture. So this is inside the end chamber where the end test mass is. And so you can see this is the test mass here. 
This is what we call the end mirror. And so behind this guy here, down here, is the four kilometer long arm. And then this whole assembly here is to take a little bit of the light and it leaks through the mirror and then it bounces up in here and goes up onto another optical table up here where we measure the light. So we actually measure the light in many different places throughout the interferometer. So we're constantly getting feedback about what's going on inside the instrument. All right, so basically light is, I mean LIGO is just a transducer of the space time strain of the movement of space time to an electrical signal. That's basically what it is. It's really very similar to a microphone in a lot of ways. I mean, you know, a microphone when it gets the pressure from the air, it causes the microphone to move and we turns that into an electrical signal that we can then digitize and process and listen to. Well, it's very similar with LIGO. And interestingly, the frequency range that LIGO can detect of the motion of space time is the audio frequency range. It's exactly the same bandwidth that you hear with your ears. It's from about a little less than, from like 40 hertz up to a couple kilohertz. And so this is the, this is sort of our primary thing that we measure. We look at this, scientists in LIGO look at these plots a lot. So this is called the strain spectrum of LIGO, the spectral amplitude spectral density. And it's basically just a measure of how much power there is at each frequency in the detector. So you can see here the gray, the gray curve. This is what the strain spectrum looked like in the initial versions of LIGO. At the end of the initial LIGO project, which ended in 2009, and then in 2009, we ripped the whole instrument apart and we completely put it back together with all new components to try to make this curve go down. And then that's what we got to with this black curve, basically. This is only for one instrument. The two instruments are slightly different, obviously. So the spectrum is a little bit different, but it's mostly the same. And so this is where we are in our first observing run, which happened in 2015 with the advanced detectors. And let me just talk really briefly about why do we, what limits this. And so what we're trying to do is we're trying to make this measurement as sensitive as we possibly can. And so what we want to be limited by is actually, is physics. We don't want to be limited by any sort of technical noise sources like, you know, is our amplifier noisy. That would be a failure, basically. We would think of it as a failure if our amplifier was too noisy. And so what do we have here? So at the left side in this sort of scion trace is the seismic noise. I mean, we can try to do, we can try to suppress the seismic noise as much as we can, and that's what we do with these suspensions, with the seismic isolation tables. And so what happens is the ground is moving quite a bit, and so all of the seismic isolation, the isolation systems attenuate that motion. That's why you have this very steep drop-off in frequency here. At this green trace down here is interesting because that's actually thermal motion in the test mass. So the actual test mass, because it's not at absolute zero temperature, is going to be vibrating. All of the molecules in the test mass will be vibrating, and that is motion that, you know, will limit how much we can detect. And then the red curve is quantum mechanical noise on the light itself. 
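The "strain spectrum" referred to throughout this part of the talk is an amplitude spectral density: a frequency-domain view of the detector output. Purely as an illustration (fake data, a made-up sample rate, and SciPy's generic Welch estimator rather than any LIGO analysis code), a spectrum like that can be computed as follows:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import welch

# Fake "detector output": broadband noise plus one narrow line, standing in
# for a real strain channel.  Everything here is illustrative only.
fs = 4096                                   # sample rate [Hz], audio band
t = np.arange(0, 64, 1 / fs)
x = np.random.normal(scale=1e-21, size=t.size) \
    + 1e-22 * np.sin(2 * np.pi * 500 * t)   # a violin-mode-like spectral line

# Welch estimate of the power spectral density; its square root is the
# amplitude spectral density, the quantity plotted in the talk.
freqs, psd = welch(x, fs=fs, nperseg=4 * fs)
asd = np.sqrt(psd)

plt.loglog(freqs[1:], asd[1:])
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude spectral density [1/sqrt(Hz)]")
plt.show()
```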
So the light is not just a continuous wave. It's actually, you know, a bunch of individual photons. That's a quantum mechanical, light is a quantum mechanical object. And so those photons, the fact that they're discrete little packets of energy, has an effect on the noise. So we can't detect, you know, we don't detect a continuous stream. It's like rain, and that rain, you know, causes a noise. So where do we get at ultimately? So down at the bottom of this, this is our strain here, and over on the other side is what we call the displacement sensitivity in terms of meters. So at the bottom of this curve, we have 3 times 10 to the minus 20 meters. I'm letting everybody think about that number for a second. 3 times 10 to the minus 20 meters. That's an incredibly small number. And I've been working on LIGO for 18 years, and that number still blows my mind. So what does it mean? So here's an atom. This is an atom. It's 10 to the minus 10 meters. An atom, a hydrogen atom is 10 to the minus 10 meters. We go in, this is a proton. We like zoomed up to the edge of a proton. That's 10 to the minus 18 meters. That's still bigger than that noise we're measuring. How is that possible? It's crazy. It's nuts. I don't know how we do it. I know how we do it. So what does it sound like? So let's put it all together. This is literally just the output of the detector plugged into a speaker. I'm not kidding. We listen to this in the control room. This is the data that we take, just plugged into a speaker. It's like so stupid simple. So all of the, you hear the high-pitched things? Those are all of these very narrow lines. Those lines are because the test masses are hung by these very small fibers that are actually made of glass. They're made of the same material that the test masses. Those vibrate like a violin string. We call them violin modes. They have all of those high-pitched harmonics. Then what we can do is we can filter this. We try to filter out the low frequency. We filter out all of these lines. Then that's what we're left with. That's filtering out all of the things that we know are not gravitational waves. We get this low rumbling noise. That's what we detect. Meanwhile, 1.3 billion years ago, this happened. This is a very cool simulation from this collaboration called SXS, which is simulations of extreme space times. This is two black holes that are orbiting around each other. What's behind is just a static picture of the Milky Way galaxy, just a light of a star field in the background. All this crazy stuff you're seeing is the space time that's being curved and warped is bending the light that's coming from behind it. That's called gravitational lensing. This is just a way, because the black holes are black, they're obviously in space where there's nothing to see, really, no light. What they do is they put this picture behind it so you can see. We actually observe things like this in the universe today. We observe these gravitational lensing effects. This is obviously special because of the fact that we're seeing these black holes. They orbit around each other. As they orbit around each other, they emit gravitational waves. This is an animation also from that same collaboration of what the waves look like, or what the representation of the waves as they leave the system. You've got the two black holes orbiting around each other. This red and yellow is the waves being emitted. 
As they get closer, the waves get higher amplitude, the frequency gets bigger, until right at the very end you get this big burst of waves. Then you're just left with one black hole at the end. Then, on September 14th, which is funny because on September 15th, we were going to start having a science run. We were going to start observing. We were in what we call an engineering run, where we were getting ready to observe. We were very close. Basically everything was completely ready, but we hadn't just checked the box. Then this happened. This is the real signal. This is pitch shifted up so you can hear the chirp. This is actually what was measured by the instrument from black holes that look like that. This is the first detection of gravitational waves. You can see this is basically what the signal looked like in the Hanford detector, and this is what it looked like in the Livingston detector. I'll replay it one more time. You can hear the chirp at the end. The higher-pitched version is pitch shifted up so you can hear the chirping more. The chirp is the frequency getting higher as the two black holes get closer together, and then you can hear the amplitude get louder as well. This is the plot from our paper that we published. This was the first plot. You can see up there at the very top is just the waveform that we measured. It's literally just like a wave file. The next row is a numerical simulation. We took what we measured. We tried to reconstruct what we think that the signal is based on what we know about general relativity. We have lots of complicated algorithms to try to predict what the signal will look like. We think that from what we measure, we say, okay, we think these are black holes of this mass, and they should, in pure form, without the noise of the detector look like this, and so that's what's on the second row. Then we take the top row, and we subtract the second row, and we get the third row. You can see all that's left is just noise. What does that show? It shows that there's actually a pretty good match. If you take away our prediction of the signal from what we actually measure, you're left with basically nothing. We kind of use that as evidence. That's not how we prove it, but it's a nice thing to see. This is the frequency as a function of time. You can see it's about 200 millisecond long signal, and the frequency starts very low. It's something like 40, 50 hertz, and then it goes up to 300 hertz. We call this signal GW 150914, first-ever gravitational wave detection. Then, that was day minus one of our observing run, and so we kept going, and then on Boxing Day, on December 26, which was actually December 25 in the United States, everybody started to get more email all of a sudden, and we detected another event. This one is a little bit different. In the first event, the two black holes are very similar size, and this event, the size difference is bigger, and so you have one smaller that's much smaller than the other, and so the event is longer. It's over the course of a second, and we have many more cycles of the event. So this is a little bit complicated, but I just want to show it just because this is kind of what we show as evidence of the proof. These curves down here are what we call the background, and we get these curves by shifting the data from the two instruments relative to each other so that there's no causality between them. So if you shift them by... Remember I said that the light travel time between the two detectors was 10 milliseconds? 
Well, if I shift the data at a second, so that one is shifted for one second relative to the other, there's no way that something that's traveling at the speed of light is going to have a coincident signal in both of those things. So that's how we generate this background. We shift the data and then look for gravitational waves, and we see this very sort of expected random signals, and then these are what we detected during the first observing run. So over here on the right is the first signal, which is just screamingly loud. It's like really loud. We never expected we would see signals that... Really, we didn't really expect to see signals that loud, so we made all these... We were thinking these very sensitive algorithms to listen to tiny signals and noise, like needle dropping, and then we get this really loud signal. Whatever. So it helps us because, of course, the next one was not so loud, and so that's the GW 151226, the Boxing Day event, and of course, people might have heard about 5 sigma. That's what scientists use to try to say that something is significant, and so you can see those are how the sigma moves up as you get louder and louder, and so the purple is kind of what we consider. Actually, this black one here is the background because of the fact that the first signal is actually in the data. The first signal, even if you timeshift it, it is coincident with, occasionally, with random events and the other detector, and because it's so loud, it actually looks like fake signals, so we have to remove that. And then, also in October, we got another event that's kind of interesting. If there were no gravitational wave signals, all of the orange boxes would just be on that line, and so anything that's off of that line, that's a little bit interesting, and the further away it is, it's much more interesting. So we have this another event. We have two sigma, so it's not very strong, but I'll let you guys be the judge. So here's some numbers. So it's kind of incredible to me that with this tiny amount of data, we can learn a lot about the system because we know general relativity very well. We can run these numerical simulations. We can sort of reconstruct what it looked like. So we get an estimate of how far away it is. 1.3 billion light-years, which is one-tenth of the distance to the edge of the observable universe, so it's really far away. This is one of my favorites here. The big black hole was 36.2. That symbol there, M with the circle with the dot, is the mass of the Sun. That's what we use frequently in astronomy to measure masses. It's called a solar mass. So this black hole was 36 times the mass of the Sun. The other one was 29 times the mass of the Sun, which, and the final black hole, was 62 times the mass of the Sun. Well, it doesn't take very fancy math to see that that doesn't add up, right? So what happened? There's three solar masses missing. That was the gravitational waves. So these two black holes colliding turned three entire Suns. They just, like, completely evaporated the energy and turned it into gravitational waves. That's pretty incredible. That's a lot of energy that went into bending the space. And then this one, this is just crazy here. So the luminosity, the Ergs is another measure of energy that we frequently use in astronomy. 3.6 times 10 to the 56 Ergs per second. That's how much, at the very peak, how much energy per time it was emitting. So the luminosity of the Sun is 10 to the 33. 
So that's 23 orders of magnitude bigger than the brightness of the Sun. Yeah. And then the entire universe, the luminosity of the entire universe, if you don't have any crazy events like this, just the ambient universe, is 10 to the 55 Ergs. So at the peak of this event, it was brighter than the entire rest of the universe, which is, it's very, these events are very, very energetic. All right, so that's the story of LIGO. We're currently, we took the instruments down again to try to improve them, because every time we improve, we go out in distance, but when we go out in distance, that's the volume is the cube. So little increase in distance makes a lot of extra volume, so we get a lot more potential events. So that's why we keep trying to improve the sensitivity. Once we get up to our design sensitivity, remember we're at that black curve, we're not at where we ultimately want to be yet, and I didn't explain why we're not there yet, but you can come ask me after if you'd like. But if we get to that ultimate design sensitivity, we'll see multiple events a week, we hope. So more big discoveries soon. We hope to see binary neutron stars colliding, so black holes are just these mathematical general relativity objects, but neutron stars have structure, and so if we see those smashing together, we can learn stuff, lots of interesting things. Those might produce electromagnetic waves that we could detect with telescopes, which is very interesting. And then we've got more detectors that will join our network soon. Virgo is a French-Italian collaboration that will hopefully be taking data soon, Kagura's a Japanese collaboration. We're hoping to build another LIGO detector in India, and then of course we're planning to make much bigger detectors with longer arms and space, all sorts of things like that. So, since this is a Python conference, I wanted to say a little bit how we use Python, which is basically everywhere. I mean, we love Python. In science in general, I think, but particularly in the gravitational wave community, it used to be all MATLAB for various reasons, but Python is becoming much more popular now. So, for data analysis, our flagship search, the search that looks for compact binary coalescence, so compact means very compact stars, black holes, neutron stars, binary is obviously two stars, and coalescing is that process where they spin together and merge. So, searches for those types of events, we use a pipeline called PyCBC, we have another pipeline that's based on Gstreamer, which is actually really cool, I like that pipeline a lot. And we use Python in that search as well to construct the Gstreamer pipelines. That's a streaming pipeline, so that one is actually used to detect events in real time as we take the data from the instrument. But this one is PyCBC, and we have this LIGO algorithm library, which is a C library, which has all of these algorithms for generating what the waveforms look like and doing the templating, the look for those signals in the data. That's called PyLow. We also have a package called GWPy, which is, you know, we use to retrieve data and do basic signal analysis and plotting. Of course, all of the plots you've seen, we're done with Matplotlib. I love Matplotlib. Who loves Matplotlib? All right, yay for Matplotlib. Simulations, we do lots of simulations, like the SXS collaboration that made those really cool animations. They use Python interface to pair a view. 
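On the GWPy point mentioned just above: retrieving and filtering public strain data typically takes only a few lines. This is a hedged sketch based on GWPy's public documentation as I recall it (the GPS time is the published one for GW150914; the band edges and notch frequencies are illustrative), not the collaboration's actual analysis code:

```python
# Sketch only: fetch public LIGO strain data around GW150914 with GWPy and
# apply the kind of band-pass / notch filtering described in the talk.
from gwpy.timeseries import TimeSeries

gps = 1126259462                       # GPS time of GW150914 (public data release)
data = TimeSeries.fetch_open_data('H1', gps - 16, gps + 16)

# Keep the audio band and remove known instrumental lines (e.g. US mains power).
filtered = data.bandpass(50, 300).notch(60).notch(120)

# Zoom in on the ~0.2 s chirp and plot it (Matplotlib under the hood).
plot = filtered.crop(gps - 0.2, gps + 0.1).plot()
plot.show()
```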
PyCat is an interface to this optical simulation thing that we use that simulates signals in the interferometer. Of course, like everybody else in the universe, we're trying to get into machine learning. We're a little bit behind the curve, but, you know, we're ramping up. Outreach, Ipython notebook. Has anybody here used Ipython notebook? That's really cool. We love Ipython notebook. It's awesome. And so we've been using that a lot, and it's been very useful in our outreach efforts. So I recommend people go to lask.ligo.org, L-O-S-C, the LIGO Open Science Center, and we have live Ipython notebooks where you can analyze the data yourself. It's the actual, basically, you can get the actual data that we measure, which is just those, you know, basically those wave files which you can listen to, and you can filter it, and you can extract the signal and plot it. It's very cool, all in the Ipython notebooks. All right, the very last thing I want to talk about is the instrument control. So I'm more on the instrument side. I like to help build the interferometers. And so one of the things that I helped put together was our automation system. So we have this big optical experiment. It takes a lot of coordination to, you know, control all of the mirrors and the seismic isolation systems to get to the place where the instrument is sensitive, most sensitive to detect the gravitational waves. And so I'll talk about that in a second. I'll just mention the one notable deficiency in Python where MATLAB still beats Python is in control system analysis. But hopefully that will change in the future. Is that telling me something? All right, so automation. So there's this, I'll just talk about this really briefly, because I think it's a really cool usage of Python. And it's not because I made it, I swear. So it's called Guardian, and it's the automation system for LIGO. It's a distributed hierarchy of state machine automatons. That's my fancy description of it. So this is kind of a schematic, very, very cartoonish schematic of the instrument. So here's the interferometer, you got the laser, you got the mirrors. Here you've got our digital, analog digital interface where these are just, you know, digital to analog converters that convert the electrical signals coming out of the instrument into digital signals. Here we have this real-time control system, which I won't talk about, but which is really cool. If anybody's interested in real-time signal analysis. And then up at the top is all of this automation system. So each one of these blue dots represents one of these little automatons that is automating part of the real-time system. So it's basically like flipping, you know, it's looking at signals and flipping switches and turning virtual knobs and stuff like that. So in each one of these little blue dots, we have a state machine that represents, you know, the automation logic. You know, basically what happens is we say, oh, we want to go to one of those states up there, and then this thing looks at this graph and says, okay, I'm going to go this path to get over there. And so this is all programmed in Python. So we have these Python modules that describe the instrument commands in these what we call state classes. And that's what this guard state is. So guard state is just a class that defines one of those bubbles in that graph over to the right. It has a couple of methods that get overloaded. So safe is the name of a new state. Damped is the name of a new state. It has two methods, a main and a run method. 
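Based purely on the description above, a Guardian-style state module boils down to a handful of small classes plus a list of edges. The names and the base class below are stand-ins for illustration, not the real LIGO Guardian API:

```python
# Illustrative sketch of a Guardian-style state module.  GuardState, the
# state names and the commands are placeholders inferred from the talk.
class GuardState:
    def main(self):            # executed once, when the state is entered
        pass
    def run(self):             # executed repeatedly; return True when complete
        return True

class SAFE(GuardState):
    def main(self):
        print("put everything into a safe configuration")

class DAMPED(GuardState):
    def main(self):
        print("engage damping loops on the suspended optic")
    def run(self):
        # e.g. poll a readback channel and report when damping has settled
        return True

# Directed edges between states, as (from_state, to_state) tuples.  The daemon
# later loads states and edges into a graph to find a path to a requested state.
EDGES = [
    ('SAFE', 'DAMPED'),
    ('DAMPED', 'ALIGNED'),     # 'ALIGNED' would be defined like the classes above
]
```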
I won't get into the details of how those are executed, but basically they just have some commands in them. You know, flip this switch, set this gain to be something else, turn on these filters, stuff like that. Then we have edges, which is a list of tuples, which just represent connections between the ovals. So you can see these, they're basically all of these arrows in the graph over to the right. And so this edge would connect the safe state to the damp state. And so then that whole thing, those states and those edges, is this state, what we call the state graph. So what Guardian does is it takes the modules that describe these state graphs, it imports them because Python is so powerful and you can get into the import mechanism so easily, you can basically load the module, scan it for all of these guard state definitions, extract all the states, look for the edges definitions, put that all into the networkX module, which is a really sweet module for doing basically graph analysis, network analysis. If anybody has to do network analysis, I highly recommend this package. And then it builds these graph representations. And so then what happens is that that graph then gets loaded into the automaton process and that becomes like the brain of the automaton. And you say, hey, Guardian, go to state damped. And then Guardian looks at the graph and says, okay, I'm down here and I can just very easily figure out how to get to the state that I want to go to and it just starts going and starts executing every state. Once it's done with one state, it goes along the edge to the next one until it gets to the final place. And I think that the architecture of this is kind of cool because we use this as the multiprocessing library, which is awesome. I mean, Python rocks, just let me say right now. I don't know. The interfaces to this stuff is just so clean, it's so nice. So the main process uses the multiprocessing to spawn off a worker process and that worker process is the thing that actually does the execution of the user code, that sends the commands. And you do it with the multiprocessing instead of the threading because that allows the daemon to have full control over the worker. If the worker is stalker, is blocking for some stupid reason because scientists aren't very good at writing computer code, then it can just terminate it and respawn it and start over again. And we use the shared memory interface in the multiprocessing to exchange data between the two processes. So the commands to the worker go through the shared memory and then the status of the execution of the user code goes back up through the shared memory. The worker process can catch all of the user exceptions, report them back to the daemon, the daemon won't die, it'll just keep sitting there and it'll report to the people in the control room, hey, somebody fucked up their code again, and makes an error message. One thing that's kind of cool is that you can completely reload all the code on the fly. So you can just send a command to the daemon, it'll go and take all of the user code, that state graph, and it'll just take it. You can even reload, if it's in the middle of executing a method on a class, you can even reload that same class. So what it'll do is it'll just wait till the method is done executing. 
In the meantime, it's taking all the attributes of that class, stuffing it into the new version of the class, and then when that method is done, it just swaps it out and starts executing the new version of that method, the new version of that class with the new commands in it. Pretty cool. When we use this Epyx client server, this is something that's frequently used in physics, in large physics experiments, to do control of various, you know, control of big physics experiments. It's basically like a lightweight network message passing interface. So that's it. Just let me say, Python played a huge role in detection of gravitational waves in the analysis, in the control of the instruments, and on behalf of the entire field, I thank all of you. So thank you very much. Stay tuned for more exciting news from Rigo. Yes, thank you very much, James. I think it was not amazing like this view into one of the biggest discoveries ever in physics. Amazing. Amazing talk. Thank you very much, James. I think you have a gift, haven't you? Probably we can take a few questions. Thank you. Questions? Oh, yeah, I'm happy to take questions. Yeah, we probably have like five minutes for questions. Can you please come here? Please come to the front here. So I've read, is it true that when the first events came out, it was so huge that you thought it was a fake, some kind of elaborate prank. Are you asking what I personally thought? Because that's funny, because I thought exactly that. But I mean, many people in the collaboration, well, we of course behaved immediately like it was a real event. But I was very, I don't know, it's too crazy. It's too perfect, it's too loud. It's not what we expected to see as our first event. So it took some of us, it took me, you know, like a month to be convinced that it was real. But, you know, we did a lot of analysis. I mean, we made the detection in September and we did not announce until February. Yeah, exactly, they were real morse. We weren't just like sitting with our thumbs up our asses that whole time. We were analyzing the crap out of the data, trying to figure out, is this real? Are we completely convinced that this is a real gravitational wave? And then when we were ready and we had a really nice paper, then we decided to announce. You'd be the most editable cheat in the ever somebody's metal of crack. Yeah, that would be a lot, yeah. I mean, we thought a lot about that, like could it be malicious? Could somebody have, you know, somebody who's trying to get tenure, you know, snuck in. But it's too hard. I mean, this is one of the, I mean, this is not the reason we made two detectors. But we make two detectors far away so that it's really difficult for any sort of terrestrial signal to look the same in both detectors. Basically impossible. I have a question about the actual utility of those detections. Like, well, we know that you can now study some really radical events. But I've been reading about it and it's like, okay, sorry. Are those signals, are we capable of using those signals to see behind things that we cannot see actually? Well, I mean, so certainly now that we've made the detections, so this is why I titled the talk, the dawn of gravitational wave astronomy, because the point is not just to make the one detection to prove that the gravitational waves exist. That's very cool. But what we want to do is we want to see them routinely. That's why we continue to work on the detectors. We continue to try to make them better. 
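The two-detector coincidence argument in that answer is the same idea behind the time-shifted background curves described earlier. A toy, hedged illustration of the counting (made-up trigger times, not LIGO's pipeline):

```python
import numpy as np

# Real signals must appear in both detectors within the ~10 ms light-travel
# time.  Shifting one detector's trigger times by whole seconds destroys any
# true coincidences, so the surviving coincidences measure the accidental
# (noise) background.
rng = np.random.default_rng(0)
h1 = np.sort(rng.uniform(0, 1e6, 5000))      # made-up trigger times [s]
l1 = np.sort(rng.uniform(0, 1e6, 5000))

def n_coinc(a, b, window=0.010):
    """Count triggers in `a` with a partner in `b` closer than `window` seconds."""
    idx = np.searchsorted(b, a)
    near_lo = np.abs(b[np.clip(idx - 1, 0, b.size - 1)] - a) < window
    near_hi = np.abs(b[np.clip(idx, 0, b.size - 1)] - a) < window
    return int(np.sum(near_lo | near_hi))

zero_lag = n_coinc(h1, l1)
background = [n_coinc(h1, np.sort((l1 + s) % 1e6)) for s in range(1, 101)]
print(zero_lag, "coincidences vs. background mean", np.mean(background))
```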
So if we can see these events regularly, we will start to learn about what produced the events. I mean, you saw how much we learned just from this one event, right? We learned what are the masses of the black holes, how far away they are. Just those things alone are very important information to astronomers because they tell us, you know, the fact that we saw what appears to be three binary black hole mergers in our first observing run and not any neutron star mergers, which is something we thought we would see, that tells us a lot about what's going on in the universe. So astronomically speaking, there's a lot of information that we can get from each of these signals and from the ensemble of all of the signals that we detect. Hi, Python 2 or Python 3? Yeah, no, we're stuck. This is 2.7. 2.7. We're slow to upgrade. We try to keep, you know, we try to make things be, you know, we're trying to do science, so we're not trying to go to the cutting edge all the time. I mean, obviously, it would be, we will go to Python 3 eventually. I would like to go, but you know, our system administrators are more conservative than me. Do you have kind of a model or a measure how likely it would be to have this big event? How often would it repeat this big event? How likely that you just detect this one? The next one would be probably in a year or 10 years or something? So, I mean, we had two and a half events in four months. But do you have any models that the universe has this size and that about probably so many black holes? Sure, we had models and we had models, but they had very big error bars because we don't know, we've never seen gravitational waves, particularly for the black holes. The error bars for the black holes were there could be thousands a year or there could be zero. So, they're like really huge error bars. We had no idea. And then we see two and a half in four months, so that's very promising. We will definitely see more as we continue to observe. So, the first detection was 1.3 billion light years away, three or 30 times the luminosity of the universe. What was the minimum safe distance for that? That's a good question. I think that that has been right. I don't have the answer to that off the top of my head, but I think that it's something like if you were within, you know, less than a thousand kilometers from the event, you would actually feel the gravitational wave in your body. So, you know, but remember, these gravitational waves are tiny. It's just like tiny fraction of a proton size that's moving the instrument. So, it's obviously impossible for humans to feel that unless you get really close to the black hole. But then at that point, I don't think I would want to be there. I mean, it would be really cool, but it would be the last thing you see, probably. That's a big relief. Thank you. There's no life left in that galaxy. I wouldn't worry too much about it. I wouldn't worry too much. Okay, so first congratulations. It's really great to see finally some detection, confirmed detection in physics. So, my question is how is the mass of the black hole model dependent? In a way, you have to assume, I guess, general relativity. You have to assume some lack of physics. You have to assume some merging of black holes. I mean, all things that we don't really know that you kind of deduce all together through one observation. So, how can this change? So, what we deduce from the observation comes purely from general relativity. 
And black holes are actually very simple objects in some sense, because they're just purely mathematical. You know, general relativity predicts that there would be these black holes if you have this very extreme curvature. And those, you know, they don't have many properties. They have a mass. They could have a charge. They, almost in the real universe, would not have a charge. And they have a spin. They could be spinning. And these black holes were actually, we actually measured the spin of them as well. So, there's really only these three parameters. I mean, there's the inclination of the spin relative to us. So, we can use our knowledge of general relativity and these sort of simple black holes to make predictions of the signals. And then, that is what we, when we look at the data, we sort of reverse engineer what the signal looks like to extract what the parameters are. Okay, so you don't have to assume us a model for the evolution of the universe, for example. No, we don't. We don't. Well, we do, we like to do a Bayesian analysis. So, we have what we call priors, where we make some assumption about what the distribution is. But we like to keep those assumptions to be very broad. So, we don't want to bias our search. And so, we like to assume that, you know, for the placement of the black holes in the universe, we just assume that they're basically isotropic. Because we don't know. We don't know, we don't have, we don't yet have any reason to believe that they should preferentially be at some point. But the second black hole, the Boxing Day event, was also at a very similar distance. Basically, 1.3 billion light years. So, that's very interesting. We have two detections, and they're both from basically the same distance away. They're also kind of similar masses. They're, you know, 10, 20 times, 30 times the mass of the sun. So, you know, this is what we're learning from these detections. Okay, we have, like, time for three more short questions. So, Danielle? Very short, sir. Fabian and Fabio. Has your adoption of open source software also helped engender attitudes to opening the data and the output of your science? Oh, yeah. I would, I would, I would say, I would flip it, basically, and say that scientists are, by their nature, open. I mean, we, we, it's a little bit tricky because in academia, you know, you have to, you have to, people get a little bit protective because they need their results so that they can get publications, so they can get tenure. But, you know, we, we also want to share. And so, the, the, I think that, you know, we open source all of our software. It's not just that we use open source software, but all the software that we write is also open source. So, all of the, all of the, the Pi CBC, Pi Laul, all of our algorithms, those are on GitHub, this automation platform, that's freely available, you know, other scientific collaborations can use them if they want. So, we're, we're firmly, firmly invested in open source. And obviously, we get so much benefit out of it. We, you know, I mean, if we, that's one thing that sucks about MATLAB, right? It's proprietary. We have to pay thousands of dollars, many, many thousands of dollars a year for licenses. They've changed the APIs for no, apparently no reason just because they think they should or make people want to buy a new version. They make two releases a year and, you know, you have to pay more. I mean, it's really annoying. It's really annoying to work with MATLAB, honestly. 
And so, Python is just been huge because of the open source enables us to be more flexible. Okay. Very quickly, the things that you think are really missing or the main pain points on the Python ecosystem or specifically on the Pi data, you know, ecosystem that you could use and help. What can we do to improve? Oh. I don't know. Just keep doing what you're doing. I don't have anything in particular. I mean, there's certainly, like I mentioned, the controls analysis stuff. I mean, there's lots of cool signal analysis stuff in PsiPi. We would like, you know, to see more expansion of the capabilities there. You know, we do a lot of very high performance computing stuff. We have, you know, big computer clusters where we do massively parallel analysis of the data. I don't know. I mean, it's, I mean, that's one of the things that's so great about Python is whenever, you know, I want to do something and I do a search, I can almost always find a quick, easy way to do it. So parallelism or... Yeah. The parallelism is a little bit, you know, I mean, it's kind of specialized because we need to parallelize across computers in a cluster. So it's not the same, you know, I mean, obviously there are packages that help do that. We use this platform called Condor. I don't know if people have heard about that, but it's a job scheduling system for running analysis jobs on computer clusters. We don't, our cluster analysis is not terribly sophisticated. It's not like these big numerical simulations where you have to do a lot of sharing of memory between processes spread across many nodes. We basically just, you know, give each computer node and each process running on each computer a little chunk of the data and say, you look for a gravitational wave here, you look for a gravitational wave here, and we just, you know, massively parallel. So it's pretty straightforward what we have to do. The reconstruction of the information about the event, that's more complicated, right? Because we have, you know, this one small stretch of data where we know that there's an event and we want to extract all the information from it. So we have to run a lot of models over it and, you know, do these integrals over all of these, you know, all of these different tests. So that's a little bit trickier. Okay, last question. Okay, I have a question about the visualization of the event that you had there. It looked like in your model that there are two black holes which are rotating around each other about 30 times in five seconds. Is that something that actually happens? Those are slowed down, right? So they're actually, it's actually faster than, those are slower than real time. Yeah, they're very fast. It's incredible. I mean, so, I mean, one of the plots that I showed actually has, let me just go really quick. Sorry to show one thing. Okay, look over here. This is the speed of the black hole, the velocity of the black hole over the speed of light. So at the very end, the black holes, I mean, these are macroscopic, I mean, they're, you know, they're big. They're 30 solar mass things is moving at half the speed of light. I mean, it is, it is, it's nuts. I mean, what's going on in that region of space is, whoo, that's crazy. Okay, whoo. Thank you very much, James. Thank you. Thank you. Thank you. Thank you. Thank you.
Jameson Rollins - LIGO: The Dawn of Gravitational Wave Astronomy Scientists have been searching for the elusive gravitational wave for more than half a century. Hear how they finally found them, and the role that Python played in the discovery. ----- Scientists have been searching for the elusive gravitational wave for more than half a century. On September 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) finally observed the gravitational wave signature from the merger of two black holes. This detection marks the dawn of a new age of _gravitational wave astronomy_, where we routinely hear the sounds emanating from deep within the most energetic events in the Universe. This talk will cover the events leading up to one of the most important discoveries of the last century, and the myriad of ways in which Python enabled the effort.
10.5446/21161 (DOI)
Good afternoon, everyone. It's my pleasure to introduce Javier Arias, who's a senior developer at Telefonica in Barcelona here in Spain. And he's going to be talking about machine learning for dummies using Python, of course. Do you hear me well, there in the back? Yeah, it's okay. If not, please let me know in case I lower my voice. Let me take a quick photo of the audience, because this is also for my mama. Okay. Okay, this could be a day in your life. This could be today. It's about time to leave the office and your phone tells you the best route to go home. The funny thing is that you never told your phone where home is or what the appropriate time to leave the office is. In fact, on Fridays the phone will tell you the best route to your parents' home, because on Fridays you visit your parents. So you go to a parking lot, and you happen to be a happy Tesla owner. Not that every one of us can be, but let's imagine it for a minute. In the latest version of the car's firmware there is an autopilot feature. With autopilot, the car is able to keep the lane by moving the steering wheel, and to keep the speed and the distance with the rest of the cars. And not only that, the car is able to learn not only from its own experiences, but from the experiences of Teslas from around the world. So you get home and you want to play some music. But you don't feel like choosing any music, so you trust Spotify to play some music for you that you don't know but that you will actually like. While listening to the music, you check your photos, which are very well organized in categories, like here with architecture and arts. But you never tagged them. It was Flickr that was able to look inside your photos, see what's in them and tag them accordingly. These things are happening daily to thousands or even millions of people around the world, and all of them have something in common. Do you know what it is? It's machine learning. Machine learning is already in your life. It's everywhere around us, and it will be very important in the next years. So what's this presentation about? I try to explain a little bit why machine learning is so important for us as users, as engineers, for companies and for the rest of the world. I also try to explain my journey with machine learning. My name is Javi Arias and I am a backend engineer. Six months ago I didn't know anything about machine learning, and I'm not an expert; it would take years of study and practice to become one. But you can get started pretty quickly and do very interesting and fun things, because there are many technologies and many resources around that are free and open source, and of course many of them use Python and Python libraries. And I also try to explain some very basic machine learning concepts, and we'll see a couple of code samples. So as many of you know, machine learning will be very important during the next years. But one of the first questions that came to my mind when I started with it is: is machine learning really intelligent? Are these algorithms already better than us? And the response is that in some concrete questions, in some concrete aspects, machines and algorithms are already better than us. One example of that is image recognition. ImageNet is hosting a yearly challenge on machine learning, on image recognition, and the performance of the winning algorithms has been improving a lot during the last years, thanks to the adoption of deep learning and this kind of technologies.
In 2015, the winning algorithm performed better than humans at recognizing things inside images. Something similar happened with chess about 20 years ago, when IBM's Deep Blue defeated the world champion, Garry Kasparov. I remember vividly reading about this in the newspaper — it was maybe 20 years ago; that's why some colleagues tell me that I'm a senior developer — but it was quite a milestone at the time. But there are other games that are much more complex than chess. For example, the game of Go. The game of Go is so complex that the number of possible moves is bigger than the number of atoms in the universe. For such a complex game, it's very important to play with intuition, and for that reason many people thought that algorithms could never beat the champions. Until this spring, when AlphaGo by Google defeated the world champion, Lee Sedol. These things — algorithms being better than us at concrete tasks — are happening more and more frequently. And for that reason, many people are making apocalyptic predictions about the moment when machines will be more intelligent than us; it's the singularity point. But, okay, I don't want to be so apocalyptic. I want to talk to you about my journey. As I said, six months ago I didn't know anything about machine learning, and I thought that it would take weeks or months of study just to get started. But that's not true. If you are a dummy like me and you don't know anything about machine learning, you can get started, and in a little bit of time you can get very useful and very surprising things — at least that was my experience. When I started, I decided to study not by using traditional courses or books, but by using massive open online courses. These courses have videos and exercises, and there are forums where you can discuss with other people the things you are doing. There are many different providers, so you have to do your research here. What I chose was the Udacity Introduction to Machine Learning, but it could be any other one that fits you. It's very well organized and it uses Python and scikit-learn, and this was a plus for me. It was also very important for me that you can do it at your own pace, because I can't stick to deadlines, so this was a plus for me. And one thing that caught my attention was the subtitle, which is Pattern Recognition for Fun and Profit. I can imagine the profit, but what about the fun? Well, the course starts like this — a couple of friends sharing a bottle of wine, not bad — and it ends like this, with the same friends and more wine. So it seems that machine learning can also be fun after all. So what is machine learning? I want to explain very basic concepts and some insights that many times are not well explained, or just overlooked, when you start with machine learning. I think these are things that are important, at least for me, and I want to share them with you. Of course, this is not a proper course, and you should read books, do courses and those kinds of things. But when we work in machine learning, we want to solve a complex problem. So we have something here in the middle, and you have some inputs, which are the data — in machine learning these are called features — and we have to make a prediction, which is the outcome of this thing here in the middle. The first approach, the classical approach, is programming.
Probably all of us here know how to program, so you have to understand your problem and you have to explain it step by step, in baby steps to the computer, so that it can follow your steps to achieve the solution. But the problem is that for very complex domains, such as image recognition, medicine, these kind of things, this doesn't scale, because let's imagine that we have to solve something for medicine. Maybe you have to code thousands and thousands of rules, and these rules are not exact, because medicine is not an exact science, so maybe you have conflicts and you have to solve them, and it's very difficult to solve. But if you use machine learning, the approach is totally different. Instead of explaining the computer, what are the steps to solve the problem, you show the computer some real-world data, some examples of data, and you let the algorithm learn and take its own conclusions from the data. This has huge implications, and it's that we can teach computers to do things that we don't know how to do, and I'll show you that in some minutes. So, I'll try to explain from a user point of view, I'm not going to enter into mathematical details, because I studied mathematics something like 23 years ago, and I couldn't remember, but I'll try to explain from the point of view of users how machine learning works at a high level. And we will solve an example for character recognition. So, we will have thousands of images like this, each one is containing a character, a letter, and we will have the labels telling us that the first letter is an F, the second an E, and so on. So, the first step in machine learning is getting our data. We have that blue image that corresponds to an F, the yellow image, which is a G, and so on. The second step is to choose an algorithm. Lorena in the talk before this was talking about naive base, but there are many, many tens or maybe hundreds of algorithms out there. There is support vector machines or came in decision trees, neural networks, many of them, and you have to choose. And there are different algorithms that can be a good fit for your project. And also, these algorithms can be configured. So, you have to choose a combination of algorithms and configuration. So, this is part of the machine learning. And then you train your algorithm. For training the algorithm, you start by showing some images and you say, okay, this blue image is an F, and this yellow image is a G, and so on. In some respect, it's like showing a baby to read. It's like teaching a baby to read. The first step is to get predictions. So, you show new data to the algorithm and it will tell you, okay, I think this is a D. But the tricky question here is, is that letter really a D? Is there algorithm predicting correctly the letters inside? We will answer this question later. So, we have a lot of tools and we have different languages such as MATLAB and R, but here we want to talk about Python, of course. And I don't have to show here the beauty of Python, but it's a very good fit and it's following its own philosophy of batteries included. There are many, many libraries with very good quality that will help you to solve problems very easily. And we will solve the character recognition problem using Python. And SkaLearn is one of the most popular libraries out there. It's open source with Python and the documentation is wonderful, not just for the library, but just as a reference of algorithms and methodologies and everything. 
And scikit-learn gives support to the full life cycle of machine learning. So, we will solve this problem, and we will do the following steps. We will get the features with labels — we already have the data. We will choose and configure an algorithm — we will be using logistic regression with no configuration. We will train the algorithm, then we will do predictions, and then we will validate them. So, let's get started with the example. We have this big dataset here; it's thousands of images with their labels. We want to separate them into two different datasets. One dataset is for training, the other is for testing, for validating our results, and we will use the train_test_split function from scikit-learn. We will split our dataset and labels, and we will give a size for our training dataset — size matters, but we will not explain why here. And there is a random state that happens to be a constant. This surprised me in the very beginning: why the hell are we giving a random state that happens to be a constant? And the answer is that train_test_split will separate our big dataset into a random set of images for training and for testing. But by passing this constant, the selection of images for each dataset will always be the same, so that we can compare different configurations of the algorithms, or different algorithms. Then we initialize our algorithm, logistic regression, no configuration, and we do the fit. The fit is the training, and we pass the training dataset and the labels. This is the part where we teach the baby to read. But it's very easy; we don't have to iterate and do things ourselves, scikit-learn does it for us. To do predictions, we just call the predict function on our classifier, and that's all. But the question is whether our predictions were good or bad. To answer this question we will use accuracy. There are many different measures in machine learning, but we will use accuracy; it's the simplest one, and we don't have time for more. So what we do is make predictions on our test dataset, and we get some test predictions. Then we compare the test labels, which are the ground truth, the things we know are true, to the test predictions. This way we know the percentage of images that we have been predicting correctly. With this example I just gave you, we got 89% accuracy. This means that, out of the thousands of test images, we were predicting almost 90% of them correctly, in just five lines of code. And this is what I mentioned before: I never did image recognition or image processing. It would take weeks or months to implement such a thing without machine learning, and we did it here in 20 minutes together. Of course we can improve the results. We can play with the training data, as Lorena from the previous talk mentioned before. We can change the algorithm and the configurations. And this is very easy using scikit-learn, because the API for the different algorithms is almost the same. You can also give scikit-learn different configurations for one algorithm, and it will test all the different configurations and give you the best one. So we have already seen this very simple example with classical machine learning. Currently I'm doing another course at Udacity, on deep learning. I just did the very first lesson and the first couple of exercises. And I want to present them to you, because I think it's a piece of code that you can more or less understand. I'm not going to explain neural networks, and please don't ask about it, because I don't know anything about them.
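Before moving on to neural networks, here is the whole scikit-learn workflow just described — split, fit, predict, score — collected into one runnable sketch. scikit-learn's bundled handwritten-digit images stand in for the letter dataset used in the talk, so the numbers are illustrative rather than the speaker's:

```python
# Minimal end-to-end version of the workflow described above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                       # small images flattened to feature vectors
X, y = digits.data, digits.target            # features and ground-truth labels

# Fixed random_state: the split is random, but always the same random split,
# so different algorithms/configurations can be compared fairly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = LogisticRegression(max_iter=1000)      # choose + (minimally) configure the algorithm
clf.fit(X_train, y_train)                    # training: "teaching the baby to read"

predictions = clf.predict(X_test)            # predict on data the model has never seen
print("accuracy:", accuracy_score(y_test, predictions))
```

For trying many configurations automatically, scikit-learn's GridSearchCV covers the "test all the different configurations and give you the best one" step mentioned above.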
Even though I'm not going to explain neural networks, I think you can get an intuition of how we are going to structure the code. So this is the course I'm still doing, and I'm going to run through it. We will use TensorFlow, which is a library by Google, who is also the author of the course. And it seems that deep learning is not so fun, because there is no wine in the course for the moment. But I'm still in the first two lessons, so I will keep my hopes up. So we will be using TensorFlow. I'll jump directly to the code. The difference between scikit-learn and TensorFlow is that scikit-learn is using an imperative API, and in TensorFlow what you do is describe a set of mathematical operations in a kind of graph, and then you execute that graph with your data. So you have to know what the mathematical operations are to implement your neural network. So please don't ask me questions about this. But this is the simplest possible neural network in the world. This is a two-layer neural network. Here on the left we have an image, X, and there on the right we have a prediction, Y. The prediction is a matrix with a set of probabilities for each letter, so we will pick the letter with the highest probability. And the implementation is very easy. The image is a matrix, and we will multiply it by a matrix of weights, and we will add another matrix, which is the bias, and we will apply a ReLU function, which is a kind of filter. This is what I understood from the first lesson. And you chain them, you are making a pipe: the result from each one is going to be piped to the next. Then we go to the second layer, which is, again, a matrix multiplication and an addition of a bias, so we get another matrix of weights, and then we apply a softmax, which is a function that transforms a set of weights into a set of probabilities. So the implementation is very simple. If you remember, what we do is a multiplication of the matrices of our test dataset with the first matrix of weights for the first layer, and we add the biases. This is the layer one logits, and then we apply the ReLU on the layer one logits, and we have the output of the first layer. Then we repeat: we do the matrix multiplication and then the addition, and then we apply the softmax over the output of the previous operation, and this is the implementation of a simple neural network. This is the prediction part of the neural network. You have to train it so that you have these different matrices with the correct values for your problem, and that's much more complex; maybe next year I will explain that. So this is what I wanted to share with you. I'll do a very quick summary. Machine learning is here, it's already in our lives, and it will become very important for everybody. So if you are a dummy like me in machine learning, don't be afraid. You can do very fun things with the resources and tools that are out there, with very good quality, free and open source to use. So if you want to use them and do interesting things, just do it, for your own profit and of course for fun. Thank you. Thank you. Thank you. Compared to regression analysis, neural networks are known to take a lot of time. What is your experience with the performance — is Python not as fast as, say, C++? Thank you for the question.
So, as I understand the question, and correct me if I'm wrong, the question is how the performance compares between the first example and the second, and also how that compares with the performance of other languages, such as C++. The response is not easy for me because I'm not sure what to answer, but in my experience, the performance, at least for these neural networks, is more than enough, and there have been many advances in the last years, with the adoption of GPUs and different algorithms, for example the ReLU function we are using here instead of sigmoid functions, which have made it possible to train very complex networks. In any case, I don't know how it compares with C++ or others, but what is sure is that if you want to train very complex models, you will need very, very expensive hardware, but you can play with it with just a small laptop, in my experience. Thank you. I also have a small question. There are a lot of tools with different levels of support which provide basically an API for machine learning, like API.ai, Watson from IBM, something from Google. My question is, have you used them, and is it worth it to play around with them? For example, for text recognition, they do a lot of interesting stuff, or at least they claim to. Or is it better to just develop something on your own? Thank you. I don't have a lot of experience with those systems; some weeks ago I did a small prototype of a chatbot using natural language processing libraries, from Microsoft in that case. My experience is that these models are already very well trained for very specific purposes. If you have the time and money, and it's your core business, probably it's better for you to train your own models and have specialized people. If it's some side thing, not your main core business, and you don't have the money, probably these already built models are more than good enough for you, but for very specific things you won't have already trained models. But that's my experience. Thank you. Questions? Great. So at 5.15, we've got some lightning talks. Otherwise, tomorrow at 9 a.m., there will be a fantastic keynote by Paul Hildebrandt, who will be talking about the use of Python at Disney, and I'm told it's a great talk, so I encourage you all to come. Let's thank the speakers again. Thank you.
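To make the two-layer network walked through in the talk more concrete, here is a minimal sketch written against the graph-style TensorFlow 1.x API the Udacity course used at the time. The layer sizes, variable names and the 28x28 input are illustrative assumptions, not the course's exact code.

```python
# Forward pass only: matmul + bias, ReLU, matmul + bias, softmax.
import tensorflow as tf  # TensorFlow 1.x style

image_size, hidden_units, num_labels = 28 * 28, 1024, 10

x = tf.placeholder(tf.float32, shape=(None, image_size))
w1 = tf.Variable(tf.truncated_normal([image_size, hidden_units]))
b1 = tf.Variable(tf.zeros([hidden_units]))
w2 = tf.Variable(tf.truncated_normal([hidden_units, num_labels]))
b2 = tf.Variable(tf.zeros([num_labels]))

layer1_logits = tf.matmul(x, w1) + b1        # first matrix multiply plus bias
layer1_output = tf.nn.relu(layer1_logits)    # the ReLU "filter"
logits = tf.matmul(layer1_output, w2) + b2   # second layer
predictions = tf.nn.softmax(logits)          # turn the scores into probabilities
```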
Javier Arias Losada - Machine Learning for dummies with Python Machine Learning is the next big thing. If you are a dummy in terms of Machine Learning, but want to get started with it... there are options. Still, thanks to the Web, Python and OpenSource libraries, we can overcome this situation and do some interesting stuff with Machine Learning. ----- Have you heard that Machine Learning is the next big thing? Are you a dummy in terms of Machine Learning, and think that is a topic for mathematicians with black-magic skills? If your response to both questions is 'Yes', we are in the same position. Still, thanks to the Web, Python and OpenSource libraries, we can overcome this situation and do some interesting stuff with Machine Learning.
10.5446/21162 (DOI)
Okay, so let's welcome Joaquin Berenguer with a talk about the Internet of Things. Yeah. Thank you very much. Good afternoon. I hope you like the presentation. We are going to talk, as he said, about the Internet of Things, to give you a better idea of what's going on there. Okay. We have some big business in front of us. As you can see here, by 2020, more than 25 million apps are going to be there. Well, it's difficult to have so many applications in four years, but okay, that's what they say. But in case that's not true, what is going to be true is that the revenue opportunity there is huge. And the amount of data that is going to be managed is going to be huge as well. As you can see here, the reports that all these sensors are going to send to be stored will be huge, because the number of sensors around is also going to be huge. So new analytic engines are going to be needed, because to give good feedback to the control, the sensors need to send the right information to the data source. In order to go ahead with this, I have implemented a server that manages these kinds of devices. We'll see the environment, the architecture, the server functionality, some security that is needed, what types of IoT devices could be used here, what kind of access from mobile and desktop devices could be made, what events, alarms, sensors, and actuators are, and what I think about the future of this. The environment is Python 3.5.1; for the devices that are accessing the server, there is 3.4. The server is based on threading and queues. They share nothing; they communicate with each other using queues. The database being used is MySQL, and the MySQL connector comes from Oracle, and there are some personal libraries that are going to be used as well for MySQL and for managing USB or general functions. Actually, everything is running on Ubuntu 16. This is the architecture that is actually up and running. When any of these devices in this part are powered on, what they need to do is register themselves with the server. So they send a message to the server: well, here I am, my name is such-and-such or whatever. If this is okay, if this is a valid device registered in the MySQL database, then the server adds a new thread for this device. If not, the thread is terminated and the communication is rejected. That is what any of these devices, no matter where they are, need to do over Wi-Fi. We will see later what kind of devices these are. On the other hand, Kivy, as is in the abstract that you have in the schedule, or a Flask application, which is better, are using Socket.IO. That means, because we need a persistent connection, we are going to use Socket.IO WebSockets, and any web browser that supports WebSockets is going to be valid. So any of these devices are also going to connect to the MySQL database using a password. When that's done, they are ready to send messages to any of these devices below. In order to have an idea of what's going on, we have here a small application that supports that. And then we have here, online, the kinds of devices that are connected to the server. We could use this one. We can see all the functions that are really possible to have on those devices. We could say, okay, we want to know the temperature, pressure, humidity on that device. We send it to the server and it gives us back the measurements this device is taking.
So this is the way it's happening. So we have connected using your user and password. We have a persistent connection that will be valid for the rest of the presentation, and all these devices are ready to answer any message that we are going to send. As I have said, the frontend is Flask with Flask-SocketIO. The messages to and from the modules are executed using Flask-SocketIO 1.4. To send messages from the devices to the application, to the Flask application, we are using the Socket.IO client in Python, as we see here. And how does this work? This is the part in the web browser. We define a number of messages, for connection, for example. And this JavaScript is sent, and at the end of the day, what we are going to execute is a function in Python that receives the message, which is the message containing origin, destination and what type of function we want to execute. And once it's in the Python environment, okay, we execute everything that we want, just defining the name of the message and the namespace. And on the server side, what we have is that the server is waiting for connections from the IoT devices; once one connects, what we start is a socket thread and a queue thread, and those are blocked waiting for messages. They are doing nothing. In the future, I think it could be better; at the moment it is running on threads, taking into account that those threads correspond to one customer and every customer will have their own server. So this could be enough, but I think I will try to improve it as well. So the parameters that define a device, at the end of the day, are: we need to know the kind of device, the ID of the device, the serial number, the server name, and the server port. These five are stored in each of the IoT devices; the first thing they are going to do is, using the server name and port, connect to the port of that server, and the server, using the type of device, ID, and serial number, will know if it is a valid IoT device to be connected. Three types of devices appear here. We are going to have the MKR1000, as we'll see later, which uses flash memory, or eMMC with the BeagleBone, or an SD card with the Raspberry Pi. The serial number, by design, is never sent through the network in order to have more security. We will see later how this is going to happen. For different servers, the only thing that we have to do is change the port, and we will have another server useful for other devices that are going to be connected. Actually, the server name is resolved using dynamic DNS. The message format always has three mandatory fields, which are origin, destination and message type, as we have seen here in the application. The origin is this web browser, the destination is this connected module, and the type of message that we want to send is this one. The rest of the fields depend on the message. We will have parameters there with variable length, and we'll finish with a hash. The hash is using SHA-256. We are using hashlib SHA-256 in Python, and for the C implementation for the MKR1000, because it needs to be in C, we are using the Atmel documentation, and there is SHA-256 there. Devices are grouped by customer. Again, IoT devices from one customer can talk with the others around them. Inside the server, what we have is an origin thread that is sending to the queue of the destination. From the queue of the destination it goes to the device. The device executes the function and comes back with the results, which go to the origin queue and from there to the destination. We are sending this from this origin to the destination.
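To make the message flow a bit more concrete, here is a minimal sketch of a Flask-SocketIO handler that takes a message from the browser and puts it on the queue of the destination device. The event name, namespace, device IDs and message fields are assumptions for illustration, not the project's exact code.

```python
import queue
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

# One queue per connected IoT device; each device also has a thread
# blocked on its queue that forwards messages over the device's socket.
device_queues = {"sensor-01": queue.Queue()}

@socketio.on("iot_message", namespace="/iot")
def handle_iot_message(msg):
    # msg is expected to carry at least origin, destination and message type
    destination = msg["destination"]
    if destination in device_queues:
        device_queues[destination].put(msg)   # the device thread picks it up
    else:
        emit("error", {"reason": "unknown device", "destination": destination})

if __name__ == "__main__":
    socketio.run(app)
```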
We could, for example, read the gyroscope of the device, and then it accumulates here what's going on. But the system is always the same: using queues and threads and sending the message. We are going to define two types of devices. Low consumption: we have here, for example, this device. This is the size of this device. It's nearly nothing. This is the battery that we could attach to it. With this, you could have it on your person and with that you will have the possibility to do anything that you want, because you could connect this device to your phone as an access point and from there the information goes anywhere. We have here Wi-Fi and we have, oh, okay. Right. So with this, we have low consumption. Two, three weeks we could have with this. We have a LiPo battery. The Wi-Fi range that these devices have is more or less the same as a laptop. If a laptop reaches 100 meters, this could be the same or more. You have here the calculations if you would like more information. For the Wi-Fi, the normal thing is that this is a client, a station, but it could also be used as a range extender. These devices could also be used as an access point, and that means that we could connect one with the other and extend the range, because we could resend the message to the following one. These are the more advanced modules that have been used. All of them are embedded Linux. Some of them are Debian, and here we are also using Python. The Raspberry Pi is about 45. We could use OpenCV. In this case it's more difficult, because the documentation is not good enough and the support you could have with the Raspberry Pi is limited. Media and education is where we could apply it. This one is good; it also has Ubuntu inside, an embedded Ubuntu board. We use Python 3.4 and it's good for those who already have things done with Arduino. For me the favourite is this one, the BeagleBone Green Wireless. It's about 44. There is a lot of documentation, a Texas Instruments CPU with Debian or Ubuntu, whichever you want, and this is good for industrial environments and also commercial products, because you have all kinds of protocols and communication. It's this one here. For the mobile and desktop devices, the only thing that is needed is a web browser that supports WebSockets, because we need, as I have said, a persistent connection. We have that using Flask and Socket.IO. Next, alarms and events. We need alarms so that the devices send us messages directly if a temperature is above or below a number. But also we could schedule events that in our case will appear in the text area here below. We use this event and we send these messages; what usually should happen is that the device is sending us events that will appear here below in this area. So everything, every few seconds in this demo, is going to appear in this text area below, but it is also registered in the database. We are producing information from every one of the devices that we have in front of us, and they are producing information that we are going to analyze or treat in some manner. If we send the same event again, it stops sending us more events. That's how we generate alarms and events. Sensors and actuators: we can see here any kind of sensors, it depends on you which ones, because on top of this any kind of application and sensors could be used. In the case of actuators, what we call an actuator is everything that goes from the device out to the environment.
As an example, we could again go to the application; well, I have here also this board, a Raspberry Pi unit about five or six centimeters by six centimeters, and we have here 64, let's say, LEDs or lamps or whatever, because the only thing we want to do is drive them. We could write a pixel: we say, okay, we want to write pixel number, I don't know, 26, and we send it, and here we have the color. We could say, okay, blue or whatever. When we have sent this pixel, pixel number whatever, it is on; we could send letters or we could send messages as well. Any kind of possibility is on top of that: green or... This is what we could do with writing, and any kind of message that writes something goes in that direction. In the future, again, software is the most important part, either C or Python, or extensions of Python using C, and many applications are going to be developed around this. Products like Xively from LogMeIn, ThingSpeak, or the offerings from Microsoft are good examples of what's going on in this area. Hardware will continue to integrate more parts of the application, decreasing in price and size. Wearables are there and we have seen them. The industry will take advantage, substituting the current solutions that are on the market. That's all. Thank you. Does anyone have any questions? If nobody has a question, then let's thank him again. Thank you and goodbye.
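As an illustration of what such a "write pixel" message could look like, here is a small sketch that builds a message with the mandatory origin/destination/type fields, some parameters, and a SHA-256 hash computed over the payload plus the device serial number (which, as described above, never travels over the network). The field names and hashing scheme are assumptions, not the author's actual protocol.

```python
import hashlib
import json

SERIAL_NUMBER = "ABC123"   # stored on the device and in the database, never sent

def build_message(origin, destination, msg_type, params):
    payload = {"origin": origin, "destination": destination,
               "type": msg_type, "params": params}
    body = json.dumps(payload, sort_keys=True)
    # The shared serial number acts as a secret, so the server can verify
    # that the message really comes from a registered device.
    payload["hash"] = hashlib.sha256((body + SERIAL_NUMBER).encode()).hexdigest()
    return payload

msg = build_message("web-client", "led-matrix-01", "write_pixel",
                    {"pixel": 26, "color": "blue"})
print(msg)
```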
Joaquin Berenguer - Server for IoT devices and Mobile devices using Wifi Network, The server is developed in Python 3.4, using MySQL5.6 The mobile device application is developed using Kivy. The application in the IoT device is developed in C. The IoT device is a hardware device using ATSAMD21 from Atmel, and wifi is made using ESP8266. The security used is sha256, standard in Python. And the IoT device using the crypto device ATECC508A, that generate also sha256. ----- The server is developed in Python 3.4, the information is stored in a MySQL 5.6 database. All IoT devices, Mobile Devices and Windows or Linux Desktop are registered in the database. All type of messages that are understood by every type of device, is also registered. A map between which device could access which device is also stored in the database. With this info, any mobile registered could send a message to a device. The message arrives to the server that resend the message to the IoT device, receive the answer and resend to the Mobile device. The Mobile device and the IoT device, could be anywhere, as the server is public, have the registration of every device connected. The mobile device application is developed using Kivy. The application in the IoT device is developed in C. The IoT device is a hardware device using ATSAMD21 from Atmel, and wifi is made using ESP8266. The security used is sha256, standard in Python. And the IoT device using the crypto device ATECC508A, that generate also sha256. The server start a thread for every device connected, the communication between thread is made using queues. During the presentation, the server is going to be presented, and IoT device is shown, no demo is going to be made. A library to manage the database, is used for easy access to the database, and have database independence, also will be shown. Prerequites: Python 3.4, sha256, threading, queue, mysql.connector, relational database.
10.5446/21164 (DOI)
Before we kick off though, I want to thank my employer, Bynder, for making it possible for me to be here and talk about this to you guys. Yeah, so, shared nothing. Those are two words. What is it? In essence, shared nothing means nothing more than sharing no resources. So, all you do is make sure that everything that you run runs on its own CPU, memory or disks. You know, any program that you have does that and that alone. It became a very large thing back in the day. I think the first written thing about it is from 1983, which is older than I am. But the good thing about it is that you can make separate units of programs. So, you can separate your concerns in a way that anything you write does its own thing on its own hardware and doesn't do anything else, which is super useful. Also, it makes you able to scale things independently. So, for example, if you run a website, you can deploy your own code in some way. If you get a lot of requests on, for example, just your HTML pages, you can scale those servers up and leave the rest as it is. Or if you do a lot of processing on the back end, you can scale that up and leave the front end servers where they are. Also, it's very resilient against outages. A good example for this is Netflix, who made their own Chaos Monkey thing. Who here has ever used Chaos Monkey? Okay. So, if you've seen it, you are probably well aware of how it can kill your entire network and your entire application, but not if you set it up correctly. And last but not least, it is loosely coupled. So, every little thing can die on its own and then it can be restarted and something will take over its features as it is. More interesting though, shared nothing doesn't mean you share nothing. You still share lots of things, especially in the web sphere. If you have a web application, you will be sharing state. People will be logged in, will be authenticated, will be in the middle of a process. Workflows in your application will be shared all over the entire thing. And people don't generally do that in one request. So, every step you take, you got to go on further and further. And additionally to that, because you have all these separate units of things, everything depends on each other in some way. So, why would you do shared nothing? The easiest reason is, as I said, fault tolerance. Any time your application has a problem, it can deal with it itself. Your entire architecture is built to make sure that that thing can get back to where it was or get back to a state where it will actually work. It's scalable. As I said, you can scale independently. And it leads to much smaller and simpler applications. A very good example for this is that you can build an application that just does what it does. Welcome. But the best thing about smaller applications, as you're probably well aware, is you can test them better. You can manage them better. You can review the code better. You can do anything you want with the code that you have. And you can explain it better to other people. So, also, if you transfer the project or hire new people to work on it, it's just much easier to get to a certain point where it's useful. It does have some considerations. You're going to be simplifying your code base. You're going to make smaller, simpler applications. But if you oversimplify, you're going to make a problem larger instead of smaller. If you do crazy things like having an entire Python application where you make every function a Celery task.
Everything is going to be slow as balls because it's only communicating over the network with your back end and hoping to get an answer at some point. And then all your servers are doing nothing else but communicating with each other. So, that's not a good thing. Also, you need to know when you need to start. If you're just making a weblog, like any WordPress website or anything like that, if you're going to separate those things, you're wasting a lot of time. And you're not making anything more useful than it really is. Other than that, you also require a lot of infrastructure. Instead of just having one server that does what it is that you do, you're going to need a whole bunch of servers. You need something to manage your traffic. You're going to need something to relay messages. You're going to need something to do all the functions that you have. And as I said before, because there are simple units, simple smaller units, you're going to have, yeah, just a whole bunch of things that do their own thing. You cannot share the server with any other feature that you are building. So, your login page server, for example, cannot suddenly also do the authentication if you do it the wrong way. And apart from that, you're still sharing state, so you still require a source of truth. I know that databases are one of the origins of shared nothing, but it's also key, as you will be storing stuff in there and you'll not be able to scale that in the same way as you can do with your application. This goes for your databases, but also for your caching layers, for example. So, where would you be using shared nothing? I know two very good examples of a shared nothing website. One is Google. I'm pretty sure you all know what Google is. If you don't, please raise your hand. Or Bynder, which is where I work, obviously. Yeah, not at that scale. It just means that we do a lot of things. It's everywhere. It's all over the world. It's all working in cooperation with each other and, as I said, it's very scalable. Obviously, data warehousing is also an option by virtue of sharding your data. I'm going to skip that one, but I wanted to list it because this is where it all came from. So, what are you going to need to build it? It's an empty slide because I'm sharing nothing. Anyway, first you're going to ingest the traffic. You're going to get traffic from your clients, your users. They need to get to your application. So, it's basically your load balancer or any web server that you have. I want to make it more tangible, so we've got an example for this. We use nginx. When this gets the message, it just passes it on to the application server, which we call the frontend application. This does nothing more than get your request, hack it into pieces, and determine for itself what it wants to do and how it needs to do that. So, because I'm branding things anyway, we use Pyramid for this. Pyramid is pretty awesome, but you could also use PHP or Node.js if you're so inclined, or anything in Go if you're into those kinds of talks today. What it does, though, is make new tasks, and those tasks need to be executed, and the only way to get that done is to insert a message queue into the stack. So, there's a message queue. It does nothing more than get the message and give it to somebody else. So, for example, we use Celery for this. It's not necessarily just Celery. We run it on Redis, for example, but you could also be deploying it on RabbitMQ or other things. I heard somebody talk about RQ recently.
Maybe that's one, too. I haven't checked it out yet. And then in the end, you end up with a whole bunch of servers at the rear of your application stack and that's the backend processing. This can be literally anything that can read messages from your message queue. We use Python things. We use Go. We use Java. Anything that can read from the queue. So, this is a bit abstract, so I want to give you some examples of how we do things. Here is a login page. This is our application. It's fantastically beautiful. And you can log in. So, you don't need a lot to generate this, so this is a very simple thing. And the stack will basically look a bit like this. The traffic only goes to ingestion, then to the front end application, which decides, okay, I'm going to render this template and give it back. Nothing else is needed. Obviously, there is some backend stuff going on like storing your session key in the database or in the Redis thing and making sure all those things are right. But, you know, I'm just trying to keep it simple. I also have a less simple example. Literally. I uploaded a picture of my daughter to Bynder. And this is a slightly more complex operation. I mean, we've all probably made some web application where you upload a picture. But, in our case, we store it directly on Amazon S3. So, what we do is we ask the web server for a token so that we can upload it to S3. Then, when we're done, we tell the web server that we're done. And at that moment, it decides that it needs to check if the file is actually correct, or if the file is actually an image and not a video, or I would like to know all the information that's in the EXIF data that's in the picture. That's all backend processing. So, there we use the entire stack. So, in this case, the front end application would tell the backend stuff, please give me everything back from this image, identify it, make sure it's really an image, not a vulnerability or whatever. When that's done, it asks the server to generate a thumbnail. Then, it generates a web version. It generates any other type of image that we requested that it should do. And, of course, our application is fantastic. So, you can configure all those things. But, it is asynchronous. It goes back from backend processing to the message queue, then goes back to backend processing, and it goes back to the front end application. In the end, though, you get a fantastic picture of my daughter in the application, which is what everybody wants. Yep. So, what you end up with in the end, and this is the official picture that we share with people, is this crazy architecture. You can tell we're kind of deep into Amazon. And I don't mind, really. But, you can see that the entire stack is divided into a lot of things. There's a lot of things going on, and everything is separated. For example, we use just the normal load balancer of Amazon to relay the traffic. That's our ingestion. Then, we do some web application firewalling, which is also one unit doing its specific thing. That relays traffic to the web instances. That does some things, and then there's a little arrow going up to a message broker, and all the way in the back are all our processing things. We've only drawn six boxes, but there's a lot more. And, obviously, because we're pretty cool guys, we do this all with automated deployment for our DevOps people. That's actually a thing I forgot to mention. One other pro of doing this kind of system is that your DevOps people get a lot more Jenkins jobs to do, which they apparently like.
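To make the image-processing flow above more concrete, here is a minimal sketch of the backend steps as Celery tasks on a Redis broker. The task names, broker URL and Pillow calls are illustrative assumptions; this is not the speaker's actual code, just one way such tasks could look.

```python
# tasks.py -- a sketch of "validate, extract EXIF, make thumbnail" as Celery tasks.
from celery import Celery
from PIL import Image

app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def validate_image(path):
    """Check that the uploaded file really is an image (and not a video or worse)."""
    with Image.open(path) as img:
        img.verify()          # raises if the file is not a valid image
    return True

@app.task
def extract_exif(path):
    """Return the EXIF metadata embedded in the picture (JPEGs only here)."""
    with Image.open(path) as img:
        exif = img._getexif() or {}
    return {key: str(value) for key, value in exif.items()}

@app.task
def make_thumbnail(path, size=(200, 200)):
    """Generate a thumbnail next to the original file."""
    with Image.open(path) as img:
        img.thumbnail(size)
        thumb_path = path + ".thumb.jpg"
        img.convert("RGB").save(thumb_path, "JPEG")
    return thumb_path

# The frontend application never blocks on these; it just queues them:
#   validate_image.delay("/tmp/upload.jpg")
#   extract_exif.delay("/tmp/upload.jpg")
#   make_thumbnail.delay("/tmp/upload.jpg")
```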
I'm going faster than I thought. That's okay. So, in the future, because scaling is still a thing that you need to do, shared nothing remains extremely effective, especially with all the tools that Amazon, for example, or Google, or OpenStack deliver. It's super easy to deploy an entire shared nothing architecture anywhere in the world. However, there's a new thing coming up called serverless architecture, where you can essentially forget about a lot of things: instead of deploying your servers and managing your own hardware or worrying about CPU and memory, you can just deploy some code. That also works. I'm not a fan, though, because I like getting my fingers dirty with system operation things. So, I stick with shared nothing. However, serverless architecture, if you can get into it, is a fun thing to check out. Now, this guy just pointed the five/ten minute sign at me, but I'm actually at the last slide right now. Thanks. Short, to the point and very efficient. I love it. First question here. You mentioned that you use Celery to distribute the tasks that have to be done. How do you integrate asynchronous execution with serving requests, which need to maintain connections? You mean that when you do a Celery request, you've got to wait for the results to come back to you? Yep. We try to avoid that kind of system, purely because when you are doing that, you've got to make something block in your entire stack, and that's not the most efficient way to get there. So, what we try to do is tell the user, okay, it's coming, and then the front-end application starts polling, or it gets a notification saying, okay, this is done, please pick it up. So, you use no WebSockets? Sorry? Do you use no WebSockets or stuff like that? We could, to give the message back at some point, but I don't really like having an open connection to a server that can die at any second. So, I'd rather go for long polling or server push. More questions? Yes? Thanks for the talk. Maybe an answer for the previous question would be a push channel, a channel to push to, to communicate back with your client, or something, as an idea. And I have a question about the broker you are using for Celery. Can you give me a little bit of detail? About the different brokers or what have you found about it? So, you're asking what kind of different brokers there are, right? What are you using, or why are you using the broker you are using? Because I have a very similar architecture in my project, and I'm dealing with that right now. Okay. Not really sure. We use Redis because we like Redis a lot, and we already have it available anyway. But there are other brokers that make a lot of sense when you're starting to scale up. I try to remove layers as much as I can, but for example, a thing like RabbitMQ is very good to use if you want something with more control or if you want to say more things to the applications using it. More questions? How about up at the back? Exciting zone for questions usually. It's a little bit more contemplative standing a bit further back. It gives you more of an opportunity to formulate an insightful or possibly a stupid question. There's prizes either way. What about up the front? Okay. Question number two. There's definitely a prize for that. So you have mentioned that you use Amazon as a backend. Have you actually tried any other cloud provider? Like Google Cloud or Azure? Yes, we tried.
Internally, we're looking at getting into OpenStack just for our internal environments, but for our production stuff, AWS is very useful to us. We are in a deep strategic relationship with them, so we're not going away from there. But obviously, we look around. We see what's going on and we use some other providers that I'm not at liberty to name, to use certain things from. For example, your backups: you wouldn't keep those at the same location as your production clusters. So that's where we go elsewhere. Just out of academic interest, though, we try to play around and see if we can deploy on other things wherever we go. Any more questions? Fine. All right. Well, then that wraps us up. And nice and early. So you've got to wait. Wait, wait. There is one. Ah, brilliant. Here we go. Thank you, sir. Just again about Celery. What kind of guarantees does it provide? I mean, is it exactly once, that every message is delivered and executed exactly once, or at least once? I don't know the gritty details. But as far as I understand, when you get a message out of there, at some point you have to sign it off as well, that the task is complete. I think that's a thing that Celery manages for you. But I would assume that works. It shouldn't drop messages, at least. Any more questions? So I have one, I guess. Surprise. So, you know, you could go for an option like you've done. You can have a three-layer architecture where you go, oh, I don't want to spend too much time doing loads of work in my application server. I'm just going to farm off that job to a Celery queue. And then I can update the user when the bits are ready. You know, like, why bother? So, you know, that means you can scale your application server and your Celery server separately. But at the end of the day, you've still got to do that computation. So what are the real advantages of saying, okay, rather than having three application servers and 20 Celery servers, why don't I just have 23 application servers? I can still do all the client-side kind of things. And I can do that by using individual views. I can upload my image in one view. I can resize it in a different Ajax request. I can request a thumbnail in a third Ajax request. I can request the metadata in a fourth one and so on. The main answer to that one lies in the simpler applications kind of thing. So when you build a smaller, simpler application, there's a lot less to check. It's easier to verify. It's easier to test. It's easier to deploy. And the function that you built just has one exact responsibility. And that is doing what it is supposed to do. If you say, I'm going to put everything in the application server, you'd suddenly have a whole large code base with separate tracks to getting there, making sure that the code operates well with each other. You cannot build changes in one system that will have no effect on the other. I think it makes life simpler if you just pull them apart. And when you ask a thing, say, can you say A? And it says A. And it will never say B. Okay. Does this help? Actually, one more thing, to continue his answer: you can optimize services. So for example, if you have a CPU-heavy service, if you have everything in the same application, you would have to, for example, if you are using Python for everything, you either have to go with C binaries. But you could actually change the backend and say, this is the other service, it's written in assembly, and I don't have to maintain it, but it's super effective.
Any questions, rather than answers to questions, from the floor? Anything else? Rack your brains, everyone. Here's our speaker. He's spent loads of time preparing his slides. In that case, thank you.
John Kraal - High Availability Scaling with Share Nothing Architecture Scaling a project to a worldwide scale with the same performance and availability in every region using Python isn’t easy, but with the right mindset and tools it’s a very viable target. ----- We will discuss methods of delivering software, with automated scaling systems, building units out of your project to manage separately and how to reliably and securely distribute data to separate clusters, and how we have achieved this with the use of Celery, Redis, Databases, Protobuf and other modern tools, whilst making sure to highlight our pitfalls and successes
10.5446/21165 (DOI)
Welcome to Jose Manuel Ortega with ethical hacking with Python tools. Good morning, thank you for coming. Well, this talk is for commenting on the main tools that we have in the Python ecosystem for obtaining information from servers, that is, pulling information from the Internet. Well, this talk is based on my talks in Spain, where I have other presentations about scraping, mobile security and so on. These are the main points I will talk about. I will make an introduction to Python pentesting. I will comment on the main modules that we have, like sockets, requests, BeautifulSoup and Shodan, for obtaining information from public servers. Later, I will mention how to extract metadata from documents and images. And finally, more advanced tools like port scanning and how to connect with vulnerability scanners. And finally, I will show a little proof of concept where I have integrated all these modules in a Python pentesting tool. Well, a little introduction to Python. Python is very useful for making rapid prototypes and proofs of concept. And many of the tools that we have for testing the security of databases and web applications are made with Python. And the other main advantage that we have is that there is very good documentation on the Internet for all these tools that we will comment on. Well, for example, among the main tools that we have for testing applications there is, for example, SQLMap for testing SQL injection vulnerabilities, and the Social-Engineer Toolkit. These tools are made with Python, and we have another tool like Sparta. Sparta is a port scanner that uses python-nmap. Python-nmap is another tool that I will comment on later for checking the ports that are open on a specific domain, server or application. Basically, with this tool, we can check the services that are open and launch a brute force process over a specific service and so on. Another tool that we have that is interesting for analysis is theHarvester. It's a tool you can use for obtaining information about the domains, email accounts and subdomains for a specific URL or server. And another very well-known tool is the Web Application Attack and Audit Framework. This is another tool made with Python. This is very useful for auditing web security and detecting vulnerabilities, SQL injection, cross-site scripting and so on. And another tool that we have is, for example, Scapy for analyzing network packets; for example, if we want to detect some attacks, SQL injection, you can use this tool. Another tool that we have is fimap, for example, for detecting remote file inclusion vulnerabilities. All these tools I have commented on are made in Python, and well, I will show the main modules that we can use for developing our own tools for testing the security of servers and web applications. Well, the first proof of concept that we can see is sockets. With the socket module, basically what we want to do is build our own port scan. With the connect_ex method, we can check for a specific port if the port is open, filtered or closed in an easy way. This is the simplest program that we can write. Basically what we do is ask the user for the IP of the server and a starting port number and ending port number. And with a simple for loop, we check if the port is open, closed or filtered. The difference between closed and filtered is that the port is filtered when it's blocked by a firewall, for example.
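A minimal sketch of the socket-based port check just described. The target IP and port range are examples only, and connect_ex by itself only tells us whether the TCP connection succeeded.

```python
import socket

target = "192.168.1.10"          # example target
for port in range(20, 26):       # example port range
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    result = s.connect_ex((target, port))   # 0 means the connection succeeded
    print(port, "open" if result == 0 else "closed or filtered")
    s.close()
```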
The socket module also allows obtaining information, resolving the IP address from the domain and vice versa. With the methods gethostbyaddr and gethostbyname, we can obtain this information. With the socket module, we can also obtain the banners of the servers. The banner is information related to the name and version of the server, the web server, for example. In an easy way, we can check, for example, with this script where we pass as parameters the IP address and the port, and it returns information about the server. In this case, we see that it returns that the domain, EuroPython 2016, has an nginx server. Well, another module, very well known to all Python developers, is requests. Requests is a very useful module for testing web services, REST APIs and so on. And basically what we can do for testing the security of the site is check, for example, the headers of the request and the response in an easy way. Accessing the headers property and iterating over the items of the dictionary, we can obtain this information. For example, if we check the EuroPython site, we see that we obtain this information, the nginx server, and also we see other headers like cookies, the user agent and so on. Another interesting feature that we have with requests is, for example, that it works behind a proxy; for making requests, we can use the proxies dictionary where we indicate the HTTP or HTTPS proxy. And in an easy way, we can check the connection behind a proxy with requests. And another interesting feature is that with requests, if we have a server that supports basic or digest authentication, we can check the security of this server with the HTTPBasicAuth and HTTPDigestAuth helpers. Well, another tool that we have, for web scraping, for example, is BeautifulSoup. BeautifulSoup is basically a parser. Basically what it does is extract information from specific tags; for instance, we use requests for obtaining the page and with the find_all method we recover the information. In this case, we recover the links. And a more advanced program is one where we extract the internal and external links. Internal links basically consist of all links that begin with a slash. And for external links, we try to find all links that start with HTTP or HTTPS and do not contain the current domain. In this example, we can see that we extract the external and internal links. The external links go to an external page, that is, a domain that is not the domain that we are testing. And internal links go to pages in the same domain. Another interesting feature, for example, is if we want to extract images and PDF documents. We can use a specific parser. BeautifulSoup has two or three parsers. We have seen the lxml parser, and in this case we are using the HTML parser, where we are using regular expressions for extracting the elements that we want to extract, in this case images and PDFs. Another interesting tool that we have in the Python ecosystem is Scrapy. We can use Scrapy for developing our own web spiders, web crawlers. It's very useful for obtaining information from web services and URLs. And this tool makes asynchronous calls, following the event-driven development paradigm, for making these calls in asynchronous mode. Well, the same thing with Shodan. Shodan basically is a useful tool for obtaining information that is publicly available on the Internet.
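Before moving on to Shodan, here is a small sketch of the requests plus BeautifulSoup steps just described: printing the response headers and classifying links as internal or external. The URL is only an example, and the built-in html.parser is used here to stay dependency-free (lxml would work too).

```python
import requests
from bs4 import BeautifulSoup

response = requests.get("https://ep2016.europython.eu")
for header, value in response.headers.items():   # server, cookies, etc.
    print(header, ":", value)

soup = BeautifulSoup(response.text, "html.parser")
for link in soup.find_all("a", href=True):
    href = link["href"]
    kind = "internal" if href.startswith("/") else "external"
    print(kind, href)
```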
What Shodan does is obtain the banners of the servers, operating systems, the versions, the server types, and so on. And basically it provides a developer API, in this case for Python developers. And in an easy way, you can connect with this service through a Shodan API key: when you register on the site, you get an API key, and with this API key you can make the same searches that we can do on the web with the Python API. In this case, we can see that we look at the information for a specific host and we obtain the ports that are open; it returns the banners of the server, especially the information about the services that are open on each port, and so on. Also, when we search for a specific host, we obtain information about the host name, the ports that are open, and the services that are available on each port. This information that we see on the web we can also access with Python in an easy way, but accessing the information is a little tricky: it returns a dictionary, but it's not a flat key-value dictionary; certain positions contain a vector or an array, and you have to play with accessing the information. Finally, we can obtain the same information that we have seen on the Shodan web for a specific host from the Python API. Another easy module that we have in Python is builtwith. builtwith is a very easy module that only has one method, parse, and it obtains the information about the frameworks that the website is using, and the web server that the website is using. Well, for metadata analysis, basically what we can do with Python in an easy way is extract metadata information from PDFs with the PyPDF2 module; basically it's very easy, what we do is create a PdfFileReader object, and with the getDocumentInfo method we obtain the metadata of the PDF. The same we can do with images with the PIL ExifTags module, and in an easy way we can decode the tags that are available in an image. For example, we can obtain the GPS info, the date of the image, what the resolution of the image is, and so on. Well, another more advanced thing that we can do is port scanning. The nmap tool, which is multiplatform across operating systems, is very well known. With python-nmap, we can launch nmap from Python. Basically we have two modes, synchronous and asynchronous. For synchronous mode, we have to launch the port scanner: well, we instantiate the nmap PortScanner object, and we use the scan method, where we pass as parameters the IP address and the port list that we want to scan on this IP. This is an example where we define an NmapScanner class. We initialize the scanner by calling the PortScanner of nmap, and inside the nmap scan we go through checking the ports, calling the scan method. Internally, the scan method calls the nmap command that is installed in the operating system. This is a sample call where we pass as parameters the target and the port list, and we can see that nmap is executed with the default parameters on the specified port and IP. It returns whether it's open, and also, if a specific port is open, we can access the version of the service and so on. The other mode that we have within python-nmap is asynchronous mode.
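A minimal sketch of the synchronous python-nmap scan just described, before moving on to the asynchronous mode. The target and ports are examples, and nmap itself has to be installed on the system for this to run.

```python
import nmap

scanner = nmap.PortScanner()
scanner.scan("192.168.1.10", "21,22,80")        # runs the installed nmap binary
for host in scanner.all_hosts():
    for port in scanner[host]["tcp"]:
        info = scanner[host]["tcp"][port]        # state, service name, version...
        print(port, info["state"], info.get("name"), info.get("version"))
```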
Asynchronous mode allows launching a scan on a lot of ports simultaneously. We can define a callback function so that when the scan of a specific port is finished, this function is called for additional treatment of that port. For example, in this case we launch against a specific target on port 21, the FTP port, and we check, firstly, if it's open, and if it's open we can launch a specific process for detecting vulnerabilities on this port. For example, in this case we are testing the FTP port, launching scripts for checking anonymous FTP login, checking other types of vulnerability, whether the service is vulnerable to a specific backdoor, for example, or specific versions of this service. For checking these vulnerabilities, nmap comes by default with NSE scripts that are in the scripts folder of the installation, and it has a lot of scripts for checking vulnerabilities in a specific service, like FTP, HTTP and MySQL, for example. Basically, what these scripts provide are routines for finding potential vulnerabilities in a given target. And the idea is that we have to check if the port is open, and if the port is open we can launch the specific script for this service. In this case we are checking if the MySQL port is open, and if it's open we are launching scripts like mysql-audit, mysql-info, mysql-databases; these scripts provide more information about the service and check, for example, if the database is open, doesn't have security, or if you can see the users without authentication; these kinds of things you can see by launching the scripts. Well, for example, with Shodan we can check for anonymous FTP login. Basically, with this search we can check all servers or machines that allow this type of login; with anonymous login, we don't need to provide a user and password for accessing the FTP server. For checking this in Python we have the ftplib module, and in an easy way we can check if a specific server allows anonymous login. Well, for checking websites we have another tool called pywebfuzz. pywebfuzz provides resources for checking websites basically with predictable URLs, that is to say, we have a list of URLs, and for each URL, each resource, we will see an example. For example, in this example we obtain predictable URLs for a specific feature; for example, for logging in to a website we get predictable URLs like admin, login, the default web page and so on, and what we can do is test each predictable URL over the domain we are testing. And in an easy way we see that we can check: we make a request over the domain for each predictable URL to see if we can access it or not. Many times there are URLs that we don't see, they are not public, but after doing this type of testing we see that there are URLs that are filtered or are not protected, for example, and we can access and navigate and discover other vulnerabilities in the site. Well, the Heartbleed bug is another bug that we can test with Python. Heartbleed is a vulnerability in specific OpenSSL versions on servers, and this bug was discovered in 2014; well, it's a little old, but if we check this page, filippo.io/Heartbleed, we see that there are still servers that are vulnerable to this bug.
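Going back to the anonymous-FTP check mentioned above, a minimal sketch using ftplib from the standard library; the host is just an example.

```python
import ftplib

def anonymous_login(host):
    try:
        ftp = ftplib.FTP(host, timeout=5)
        ftp.login()            # no user/password means anonymous login
        print(ftp.getwelcome())
        ftp.quit()
        return True
    except ftplib.all_errors:
        return False

print(anonymous_login("192.168.1.10"))
```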
Basically, for testing if a specific server has this bug, what we have to do is use the socket module and send a specific request, a specific packet, and if the server responds with a specific Heartbleed payload, the server is vulnerable to this bug. In this case, we can see an example where we launch a test against a specific machine and it returns that this machine is vulnerable to this bug. Well, a more advanced tool that we can integrate with Python is, for example, Metasploit. Metasploit is a very useful tool for exploiting vulnerabilities in servers, in websites and so on, and Metasploit can be integrated: there is a Python msfrpc module for making calls to the Metasploit server from Python. Basically, what we have to do is start the Metasploit RPC service through a plugin; internally, Metasploit works with modules for testing, for exploiting vulnerabilities, and with the Metasploit API calls, in a specific format called MessagePack, we can launch a specific module. In this case, we are using the MessagePack-based login for launching, for testing this exploit from Python. Nexpose, Nessus and OpenVAS are security scanners for vulnerabilities and so on, and we can also integrate these tools from Python. For example, it's really easy to integrate from Python: if we have a Nexpose server where we have vulnerabilities and reports and so on, we can connect with this server from Python and, using BeautifulSoup, we can access this information. The information that the server returns is in XML format, and in an easy way with BeautifulSoup we can iterate over the vulnerabilities and sites that are deployed on the Nexpose server. Well, I will now show a pentesting tool, a quick proof of concept that has integrated all the modules that I have mentioned. Basically, what we can do with the pentesting tool is, for example, with a specific machine and a specific port, check if the port is open or not. In this case, we are checking port 21, the FTP port, to see the information that it returns. In this case, we can see that the port is open and we can obtain information about the specific version of the FTP server, the name and so on. Since the FTP port is open, we can check if the FTP server allows anonymous login. And then if we go to the 13th option, we see that it returns that port 21 is open and it asks whether you would like to connect with an anonymous user. We answer yes and we see that the login is successful, it returns 230, the version of the FTP server, the connection is OK and it shows the directory of the server. More things that we can do: we can obtain the headers of the server, the headers info. In this case, we obtain that it's running Apache with version 2.2.8, the version of PHP and so on. All this is in the headers info. More things that we can do: for example, check if the FTP server has a buffer overflow vulnerability. This is another vulnerability that a lot of servers have. We can check it with this option. First, we check if the port is open; the port is open, and we send a request to see if the server is vulnerable. We connect to this IP on port 4444 and we check that the server is vulnerable to this bug. Basically, all this testing is against a virtual machine that I have here locally. This is a virtual machine with a distribution that has a lot of vulnerabilities.
Many ports are open and there are a lot of vulnerabilities related to PHP, Apache, the server and so on. This is called Metasploitable Linux, if I remember correctly. More things that we can do: check domains and obtain metadata. For example, if the server has information related to mails, hosts, other servers or URLs that are exposed on the server, we can check this information. In this case, we see that we are testing the OPTIONS method that the server returns. We can obtain the emails that are public on the server. Also, we can check, for example, if the server is public, the Shodan information that it returns. For example, we are using the EuroPython site for checking all the information returned by the Shodan service. We launch the tool with the EuroPython 2016 domain as the target. It returns the IP address of the domain. For checking, for example, the host info from the Shodan service, we can use option 6 to connect to the Shodan service and obtain all the information. It's time for questions. It returns all the public information that we have seen in the presentation; we see that ports 80, 22 and 25 are open, and it returns more information for each service. We obtain information about the servers, the banners, the versions of the server. We have option 10 for scraping images. Finally, this project is available in my GitHub repository. If you want to check the tool, you can do it freely. In the GitHub repository, we have a small script for testing each functionality separately from the others. For example, if we want to launch an nmap scan, we have a specific script for this feature. Finally, the references and links are the main official pages of the tools that I have commented on: the Shodan documentation, requests, the python-nmap documentation, and on pythonsecurity.org there are more libraries available that I have commented on, and it's a very complete site for checking these kinds of tools. Finally, these are the main books that we can find for this topic, for pentesting and so on. Thank you. Any questions? Thank you.
Jose Manuel Ortega - Ethical hacking with Python tools Python, as well as offering an ecosystem of tools for testing security and application pentesting.Python offers a tool ecosystem for developing our own tools security for testing applications and the servers security,identifying information about servers and potential vulnerabilities. The ultimate objective is show a pentesting tool integrating some of the modules commented and try a demo showing info about our domain target and find vulnerabilities in it, ----- Nowdays, Python is the language more used for developing tools within the field of security. Many of the tools can be found today as port scanner, vulnerability analysis, brute force attacks and hacking of passwords are written in python. The goal of the talk would show the tools available within the Python API and third-party modules for developing our own pentesting and security tools and finally show a pentesting tool integrating some of the modules. The main topics of the talk could include: **1.Enter Python language as platform for developing security tools** Introduction about the main libraries we can use for introducing in development of security tools such as socket and requests. **2.Libraries for obtain servers information such as Shodan, pygeocoder,pythonwhois** Shodan is a search engine that lets you find specific computers (routers, servers, etc.) and get information about ports and services that are opened. **3.Analysis and metadata extraction in Python for images and documents** Show tools for scraping web data and obtain metadata information in documents and images **4.Port scanning with tools like python-nmap** With python-nmap module we can check ports open for a target ip or domain. **5.Check vulnerabilities in FTP and SSH servers** With libraries like ftplib and paramiko we can check if the server is vulnerable to ftp and ssh anonymous connections.
10.5446/21167 (DOI)
The title is Per Python ad Astra. Please welcome Juan Luis Cano. Well, as the previous speaker said, there's a Python library for everything in life, even for rocket science. Well, let me introduce myself first. My name is Juan Luis Cano. I'm an almost aerospace engineer studying in Madrid. I'm working in finance as a Python developer for BBVA. I'm a self-taught programmer because in university they used to teach us a little bit of MATLAB and we ran all of our algorithms in Excel, so it was not a great background to start with. I'm passionate about open source, open hardware, open science, and their relevance in the world that we live in now. I'm also the chair of the Python Spain nonprofit and organize many events like the Python Spain conference and the Python Madrid meetup, and I remind you that the Python Spain call for proposals is still open, in case I didn't make it clear yesterday in my quick lightning talk. And, well, space, fascinating. Well, I found something very amusing about space, and it is that almost nobody knows how it works and what's going on up there, and yet it's like the only field where adults are willing to accept their ignorance and ask all kinds of questions. And I say adults here because children have this amazing superpower of asking almost everything and this infinite curiosity that adults start losing with time. Well, so, wait a minute. Before explaining what exactly this astrodynamics thing is and giving you any Wikipedia definitions, let's start with a little video. You might recognize here Clark Kent from the Superman Returns movie, and he's watching live through his alien eyes and this ridiculous haircut and wondering, what if I use my super strength to put this baseball in orbit? And so it goes. Bye-bye. So, then the dog quickly runs to catch the ball and realizes the situation and turns his back to Clark like, seriously, and then the ball hits some random guy in New Zealand or something. So what's happening here? Well, Superman is super, super strong, so if he's launching the baseball very, very quickly, then it's going to reach a very long distance, and as the Earth is round... well, there are no flat earthers in this room, right? Because I'm going to disappoint you a lot. You might leave now. Okay, as the Earth is round, then the ground starts to curve under your feet, so the ball is hitting some point on the other side of the world. Eventually, it's going to reach New Zealand, the other side. And if you launch the baseball even quicker, then at some point the ground is curving so fast that you never touch it. And this is what we call orbital velocity, or orbital motion in free fall, because you don't actually need any propulsion or any means to increase your velocity, and you are just falling all the time. This example is not mine. It was devised by Newton in his masterpiece of the 17th century, the Principia Mathematica. And it's one of the earliest examples of a thought experiment, but obviously he didn't use Superman for the analogy. The title of the treatise is in Latin, and we will talk more about Latin at the end of the talk. Keep this in mind. Well, so with this in mind, what is astrodynamics exactly? It is a branch of celestial mechanics that studies the motion of human-made objects through space.
There are a couple of essential differences between studying the motion of the planets and the motion of human-made objects, because satellites, rockets and so on are so small that we have to take into account all the perturbations that might act on them. And also they have propulsion means, so they can act on their own trajectory and correct the velocity. And this complicates everything. Well, and this is where the introduction stops. I'm going to put in a little bit of math, but I'm going to try to keep it very simple. I don't need everybody in the audience to understand everything, but I just want you to keep in mind the ideas that are behind these kinds of problems. I'm going to talk about the basic problems that we solve in astrodynamics, and later on I'm going to say how I solve them in Python. Well, the first one is the two-body problem, which is just one body orbiting around another one. In the limiting case, we are considering that these masses have no radius, so they are just geometric points in space. And as we are usually considering the motion of a spacecraft around a planet or a moon or something, then we can assume that the second body is very, very small and doesn't have any effect on the orbit of the first one. That is the equation that controls everything. And the second one is the Kepler problem, which is like the initial value problem of the thing that I said before. I have some state at some moment in time, I have a position and a velocity, and after some time I want to know where my satellite, my spacecraft, whatever, is going to be. This is also called propagation. And these are the equations for the elliptical case that govern everything. And I want to put this here because that equation over there, the first one, if you remember your secondary school mathematics, you cannot solve that equation for E, for capital E. And some people say that this equation is so difficult to solve that it motivated 200 years of mathematicians to develop many different and innovative techniques to solve it, and we made huge progress in mathematics thanks to the structure of this equation. And the last one is the Lambert problem, which is a little bit different, but still based on the same thing. I have one position and I want to reach another position in a given time, so I want to know exactly what trajectory I have to follow. In the early design phases, like when we are designing a trajectory around the solar system because I have some mission, as I will say after this, we can assume that all the planets are like points and only consider the gravity of the Sun. So, to solve all these kinds of problems, I created poliastro, which is an astrodynamics library written in Python. It is released under a permissive license and it has physical units handling. It solves all the problems that I said before. It includes some basic 2D plotting, as we will see after this. And it would be impossible without the work of many, many people. I'm going to talk about a couple of the dependencies. The first one, in case you don't know it, is AstroPy, which is like a basic astronomy library written in Python. It is a joint effort of many, many developers around the world. It is meant to provide the very building blocks of any astronomy project that you might have. For instance, it has physical units, which is static typing for engineers, because if you mix meters with miles or something like that, then very bad things start to happen.
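As a small illustration of what that "static typing for engineers" looks like in practice, here is a minimal astropy.units sketch (the values are made up):

```python
from astropy import units as u

r = 7000 * u.km          # an orbital radius (made-up value)
v = 7.5 * u.km / u.s     # a velocity (made-up value)

print(r.to(u.m))         # explicit, safe conversion: 7000000.0 m
print((r / v).to(u.s))   # units propagate through arithmetic

# Mixing incompatible units fails loudly instead of silently corrupting results:
# r + 5 * u.s   # would raise astropy.units.UnitConversionError
```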
It also has handling of dates and times. If you think that handling time zones is a pain, then you'd better not get into astronomical time scales. It is a real mess. And it also handles the conversion between reference systems, so I can express one position with respect to the Sun, with respect to the planet, et cetera. The second one is jplephem, which is a library by Brandon Rhodes, who is one of my favorite Python developers. The thing is that NASA and the Jet Propulsion Laboratory provide planetary positions and velocities over a very broad range that spans hundreds or thousands of years. They provide them in a binary format, which is called SPK. With this library, I can take that data and know exactly where a planet is going to be in the year 3000. What happens with the basic algorithms? This involves integrating the equations and stuff like that. When I started working on this, I said, okay, let's see what other people have done on this before me. I found a lot of Fortran, MATLAB and Java algorithms that were okay, because they worked and they had very good performance. But the code was a bit poorly written. There were no tests whatsoever. They were very difficult to distribute because they were in a works-on-my-computer state, released to the internet in a zip file. Wrapping those Fortran or C++ or whatever algorithms from Python is possible, but it might be a challenge. I ended up with a thing that was only known to work on my computer. Then some years after that, I discovered Numba, which is a project by Continuum Analytics, and it's free. It's meant to accelerate numerical Python code that uses a lot of number crunching, numerical computations, NumPy arrays. It supports a subset of the language and compiles with LLVM, which is the compiler toolset that is getting very famous now. And it also supports GPUs. So I tried to rewrite all the algorithms that were included in thousands of lines of Fortran only in Python and see how it went. And these are the results of a paper that I presented to the European Space Agency some months ago. And as you can see here, the top line was the previous version compiled with the Intel Fortran compiler, which in theory is one of the best ones. That is the reference for all the performance measures. With gfortran, it was a bit slower, like I lost 30% of the performance. And then you can see the bottom line is the Python code, which is like two orders of magnitude slower than Fortran, which is the expected result. And then you have here this Python plus Numba result that is visibly slower than Fortran, but still more or less within the same order of magnitude. So I said, well, I'm going to throw thousands of lines of Fortran, a lot of pain, into the trash bin. And in return, I'm going to lose 70% of performance that in any case I can optimize later, or wait for the technology to catch up. So this is more or less what I did with this Fortran code. Yeah, I was very happy to throw all this away. Because now the people that know Python, of whom there are many, many more than people who know Fortran, can easily contribute to my library. I can understand the code 10 months after writing it. And the distribution is much easier because I don't need to force everybody on Windows to have a Fortran compiler. And in any case, who knows what that is anyway.
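To give a flavour of the kind of rewrite described here, this is a minimal sketch of a Numba-accelerated solver for Kepler's equation; it is not the poliastro implementation, just an illustration of the @njit pattern with made-up input values:

```python
import numpy as np
from numba import njit

@njit
def kepler_E(M, ecc, tol=1e-10, maxiter=50):
    """Solve Kepler's equation M = E - ecc*sin(E) for E using Newton's method."""
    E = M if ecc < 0.8 else np.pi   # a common initial guess
    for _ in range(maxiter):
        f = E - ecc * np.sin(E) - M
        E -= f / (1.0 - ecc * np.cos(E))
        if abs(f) < tol:
            break
    return E

# The first call triggers JIT compilation; later calls run at near-native speed.
print(kepler_E(1.047, 0.3))
```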
So to give a practical example of this, as this talk was submitted to the hot topics call for papers, I wanted to bring something really, really hot, which is the arrival of the Juno mission at Jupiter the other day. If I can press this link. Thank you. Okay, wait a minute. The Juno spacecraft was a mission that NASA launched in 2011, as you can see here. And it arrived at Jupiter two weeks ago. So it's been quite a long trip. And the trajectory was pretty involved, as you can see. There you have the orbit of the Earth in August 2011. And the first thing is launching this Juno spacecraft into a very wide orbit that even crosses the orbit of Mars. And it doesn't use any fuel in all these arcs here. And what it's going to do next is perform a maneuver over here, at the point that is farthest away from the sun, to correct the trajectory and try to encounter the Earth at a different point. So exactly at that point, without losing any fuel, it's using the gravity of the Earth to change the trajectory and go to the orbit of Jupiter. Well, the video goes blah blah blah. And when we arrive at the end, I remind you that it was launched in 2011. And in July 2016, this arrived at Jupiter. And this is like cosmic billiards, because the planning of this trajectory involves a lot of man hours and you have to take into account the positions of all the planets, and for me it's so beautiful. And so what we can do now is reproduce exactly this orbit with poliastro. So if I go to this wonderful IPython notebook, let's do this a bit quickly. Okay, so here I'm importing a lot of modules from poliastro, which include the definitions of the planets of the solar system, the Sun, and some objects that provide an API. And here, for instance, what I'm doing is downloading those files from NASA that I told you about before to compute all the positions of the planets. I already have them on my computer. And here are some data that I got from the internet, like the date of launch, the velocity of the initial maneuver, the date of the flyby of the Earth and the date of arrival. So the first thing that I'm going to do is recover the position and the velocity of the Earth on the date of the launch. And I have here a couple of vectors. And as you can see, this is handling physical units using AstroPy. So if I use these high level functions that I'm providing with poliastro, there is no risk of mixing physical units. If I provide a vector in kilometers and another one in meters, then everything is going to be in order. And if I provide some incorrect unit, then it's going to warn me. So I create some state which is going to hold some variables that we need later. And I do the same thing computing the position and the velocity of the Earth on the day of the flyby. Okay. So then I'm going to use these maneuver objects to say, okay, now I'm on the Earth on the day of the launch. And I'm going to do the first impulse to get into the first orbit. So if I apply the maneuver and I look at the period of the orbit, this means the time that it takes to complete one orbit. Then we see that it's above two years. The period of the orbit of the Earth is obviously one year. So now this one takes two. So I plot this thing, and then I have the position, the orbit of the Earth and the first orbit of the spacecraft.
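A rough sketch of the kind of calls used in the notebook is shown below. The names follow the present-day poliastro API (Orbit, Maneuver); the 2016 release exposed a State class instead, and the altitude and delta-v values here are made up, so treat this as an approximation of the workflow rather than the actual Juno notebook:

```python
from astropy import units as u
from astropy.time import Time

from poliastro.bodies import Earth
from poliastro.maneuver import Maneuver
from poliastro.twobody import Orbit

date_launch = Time("2011-08-05 16:25", scale="utc")

# A parking orbit around Earth at the launch epoch (the altitude is made up).
parking = Orbit.circular(Earth, alt=200 * u.km, epoch=date_launch)

# A single hypothetical injection impulse, expressed as a delta-v vector.
burn = Maneuver.impulse([0, 5, 0] * u.km / u.s)
cruise = parking.apply_maneuver(burn)

print(cruise.period.to(u.day))        # period of the post-burn orbit
later = cruise.propagate(30 * u.day)  # Kepler propagation to a later epoch
print(later.r, later.v)               # position and velocity, with units attached
```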
If I go on doing this, propagating and computing some more velocities and data that I need, then I have not only the position of the Earth and the first orbit, but also the point where I'm correcting the orbit to encounter the Earth one year later. So if I go on using these functions, which you can check — I'm going to upload all the materials — and plot this. As you can see, the API is pretty simple. Then I have this complete plot of all the segments of the orbit. You can see here the orbit of the Earth, the first segment, then the correction. This is the point of the flyby and then this is the last arc until I get to Jupiter. I wanted to stop here because there are some limitations in the API of poliastro, because for instance I'm plotting all the segments that I don't travel through, for instance like this one, so there is a little bit of noise in this plot. And also the three-dimensional API doesn't exist yet, so pull requests are welcome. So going back to my presentation. Well, the conclusion of this is that Python not only works as a language, it can also be fast enough, using some tricks, for some purposes. And we can optimize it later and improve the readability and everything. The ecosystem of libraries that we have for solving these kinds of problems is amazing and people are putting a lot of work into this, and it powers a lot of different projects. There are several things missing in poliastro, as I told you before. And the good thing is that open development — developing everything on GitHub, putting up good documentation, writing tutorials — is key for encouraging collaboration and making this as easy to develop as possible. Before finishing my talk, I wanted to explain the title, because it's a Latin catchphrase that is used as the motto of the Royal Air Force, which was per ardua ad astra, through struggle to space. This open source thing is many times a struggle. Maybe you have felt it in the past, especially pushing it to businesses and companies. So I wanted to title it per Python ad Astra to reflect that fact. And also I wanted to put up again the picture of the International Space Station, which is a collaboration between the United States, Russia, China, Europe, and many other countries, and for me it means that even through political differences and historical differences, we can collaborate to build great things. So thank you very much. Keep dreaming. Don't lose your curiosity. And thank you. APPLAUSE Thank you for a very nice talk. Do you have any questions? Yes? This was awesome. Thank you. And now I want to say, I have a question. I read once that when we went to the moon, if you looked into the source code, they were using six decimal places for pi, which was funny because usually we try to use like a lot, and no, you don't need that many, and you can put people there. So could we use this thing to send people to the moon? No, and I'm going to explain why. Well, the first thing is that, contrary to popular belief, you don't need that many decimal places for pi, and if you use like 10, then you can approximate the circumference of the universe to the size of a human hair or something like that. It's ridiculous. The thing is that with poliastro, I'm taking into account only this problem, that is, only assuming that my body is very small, and also with the Lambert problem that is calculated directly from point A to point B. And you see, I'm only taking into account one body of gravitational attraction.
And when you are going from the Earth to the moon, you cannot do that, because the moon is very big, it's very close to the Earth, and in all the trajectory you have to take into account both bodies. So for now, we cannot use it to go to the moon, but we can go to Mars. The moon is very boring, there's nothing there. Very well, do we have other questions? Yes? You recreated Juno's trajectory, which is great. Sorry, can you repeat that again? You recreated Juno's trajectory... About Juno's trajectory, yes. I don't think I have a picture here. Yes. Here, this one. I was wondering how much time did it take to recreate it? Sorry, I didn't understand. How much time did it take to recreate it? To recreate it? Yes, to do this... I did it in real time, right now. Like, I just did it. I didn't have anything... Well, I calculated the notebook an hour before, but I restarted it, so I'm computing everything on the fly. Like, all the algorithms, like going from point A to point B, they are extremely fast now. And the complete library, like, in real life, has taken like five years or something, or six years. Other questions? Yes? Hi, good talk. My question is, if going from point A to point B is super fast, what would be a challenge, computation-wise or whatever, for this library? Yeah, the thing is that, well, I didn't say it, I think, but for many practical problems you have to compute these solutions thousands of times. For instance, when you want to optimize an orbit and say, okay, I'm going to try this billiards thing — for instance, there was a contest some months ago, and there were solutions like, I don't know, do one flyby of the Earth, then Venus, then Mars, and then Jupiter, or there are many combinations. Well, you can imagine that there are many, many combinations, and you have to compute these solutions thousands of times. So even if this is very fast to do once, if you start adding up and computing this a lot of times, then it's critical to have good performance. Other questions? I have a question myself, because I didn't exactly know what you were presenting, but I tested poliastro because I'm doing calculations with orbital dynamics, but with satellites, low Earth satellites. Okay. Are you going to add the J2 term someday? No. No, and I'll tell you why, because this is going to be optimized for interplanetary trajectories. So for low Earth orbits, you have to take many things into account, like the fact that the Earth is not a sphere but something like a pear, very strange, and also the radiation pressure of the Sun, because the Sun pushes you when you are in orbit, and you can actually feel the light displacing you. So I don't think I'm going to add those, but we have a parallel project with which hopefully we will try to be more suitable for near Earth objects. So do you know any Python library that can be used for low Earth orbit? Well, you have, for example, the library from Brandon Rhodes. Yes. Yes, at least you can compute the SGP4 propagation model, which takes into account the atmospheric drag and stuff like that. So it's pretty accurate for most things, like for calculating when some piece of space debris is going to hit us on our heads. Other questions? Okay, if this is not the case, please. Thank you for getting out of school, guys.
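For the low-Earth-orbit question at the end, the package by Brandon Rhodes mentioned in the answer is sgp4; a minimal sketch of its classic API looks like this (the TLE lines are placeholders and must be replaced with a real two-line element set, for example from CelesTrak, before it will run):

```python
from sgp4.earth_gravity import wgs72
from sgp4.io import twoline2rv

# Placeholders: paste a current two-line element set (e.g. for the ISS).
line1 = "1 25544U 98067A   ..."
line2 = "2 25544  51.6400 ..."

satellite = twoline2rv(line1, line2, wgs72)

# Position (km) and velocity (km/s) in the TEME frame at the given UTC instant.
position, velocity = satellite.propagate(2016, 7, 20, 12, 0, 0)
print(position, velocity)
```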
Juan Luis Cano - Per Python ad Astra In the intersection of mechanics, mathematics and "cool stuff that travels through space" lies Astrodynamics, a beautiful branch of physics that studies the motion of spacecraft. In this talk we will describe poliastro, a pure Python library we can use to compute orbital maneuvers, plot trajectories and much more. The role of JIT compiling (using numba) to drop the previously used FORTRAN algorithms will also be discussed, as well as the importance of open source in scientific discoveries. ----- In the intersection of mechanics, mathematics and "cool stuff that travels through space" lies Astrodynamics, a beautiful branch of physics that studies the motion of spacecraft. Rocket launches have never been so popular thanks to companies like Space X, more and more investors pay attention to aerospace startups and amazing missions explore our planet and our Solar System every day. In this talk we will describe poliastro, a pure Python library we can use to compute orbital maneuvers, plot trajectories and much more. The role of JIT compiling (using numba) to drop the previously used FORTRAN algorithms will also be discussed, as well as the importance of open source in scientific discoveries.
10.5446/21168 (DOI)
Welcome to Salting things up in the DevOps world. I'm here with Juan Manuel Santos, a salty guy. Give him some claps. APPLAUSE Right. Thank you all for coming. For those of you who were here last year, I gave a similar talk. This one is going to be a little more in-depth. So let's get started. Let's see. Okay. Let's get the boring stuff out of the way. My name is Juan Manuel Santos. I work as a team leader and as a support engineer at Red Hat. I'm also one of the organizers of Sysarmy, which is the Argentinian system administrators' community, and Nerdearla, which is a local co-working slash tech conference event that we do every year in Buenos Aires since 2014. I've been using Salt for a couple of years now, mainly with no regrets or with all regrets, whichever you want to choose. Let me get this disclaimer in first. So let's get this out of the way too. I am only a humble user of Salt. I have tinkered a bit with the code, I have submitted an ugly patch, but not much more. And yeah, I only had three days to prepare this, so who doesn't like pressure, right? My thanks go to the EuroPython team for managing to squeeze this talk in. So why Salt? As you may or may not know, Salt is a configuration management system. In case you don't know what that is, think Puppet, Chef, Ansible, but only better. And why do I say better? It's because it's written in Python, and it leverages YAML and Jinja. Now I know some people in the room might not like YAML, but you can also use JSON if you want. It is relatively easy to understand, and I said relatively because it has some complex things, but what it lacks in simplicity of reading and understanding, it makes up for in being extremely powerful and giving you a huge amount of control over what you can do with it. Some of this will be seen in the next few minutes. One more detail that frequently gets lost in translation: Salt can work without an agent, in case you don't have root access or you're not allowed to run the agent on your machines, via SSH, much like Ansible does. So, previously at EuroPython, as I said, last year I gave a talk; this was mainly an introduction, covering the basic mechanics, terms and concepts behind Salt. As a quick recap, Salt has a master/minion architecture, where the master is the one that gives out the orders, and the minions are ordered to do minion stuff. It does so by defining states and highstates. The states represent the state a system should be in, and the whole collection of states that should be applied to a system is called a highstate. Another core concept is that of matching, which means targeting your minions to determine which states apply to which minions. And finally, there are the concepts of grains and pillar, grains being information sent from the minions to the master, and pillar being information sent from the master to the minions. Sadly, and I have to say this, still no Python 3 support. Salt is still on Python 2. It's getting there, though. There's a big issue, hopefully we'll get there. As usual, it's not because of Salt, it's because of Salt's dependencies. But anyway, moving on. Two more concepts that didn't make it into last year's presentation are those of the mine and the syndic. Now, the mine essentially gets data from the minions sent to the master at a regular interval. Now, even though this is done at a regular interval, this is not useful for metrics, because only the most recent data that you collect is maintained.
Another thing that might confuse you is that all the data is made available to all the minions. So when you query it, you might get the same data back from all the minions at the same time, which can be quite confusing. In fact, you might be wondering, isn't this like grains? Isn't that what grains are supposed to do, get data from the minions sent to the master? Kind of. The thing is, the mine data is updated more often; the grains are mainly static, they're only updated if you purposefully update them, which is not something that you would usually do. Also, if minions need data from other, slower minions, the mine acts as a kind of cache. So there's that too. And, okay. There are two ways of defining which mine functions you want to collect from the minions. In the case of normal operation of Salt, you would do so either in the pillar or in the minion's configuration file. In the special case of not using the agent, as I mentioned before, you have three ways. Since you don't have the minion's configuration file, you can use the roster, the pillar, or the master configuration file. And so, a quick example of what the Salt mine would be, so that you don't get too confused — though I promise you, you will get confused. It looks like this. So let's say we first want to target all the minions in our web servers group. We are going to be applying a mine function to gather the IP addresses of the first network interface every five minutes. This we can later use, for example, in an HAProxy configuration to populate the server list. Now, I know that you might be getting baffled by all the Jinja here. Try not to think about it. The important thing to understand here is that should we add a new host to the web server group, within five minutes we can have its IP address up in the HAProxy configuration file. This is all thanks to the mine, whose update interval we can configure. Now, before we continue, since we already mentioned that Salt has a master/minion architecture, there's an inherent topology to it. So let's talk a bit about that. The most common one would be one to many, meaning one master, many minions. But of course, this is boring, this might not scale. This also kills a cat during a lunar eclipse. So what are the alternatives? How much can we toy around with this? Could we have, say, more masters? Could we have a multi-master topology? And I don't know if there are any information security guys here, but if there are, you're going to love this question. Could we implement segregation? Meaning, could we segregate parts of the infrastructure, split them so they don't communicate with each other, but there's still a functioning Salt infrastructure? And coincidentally, now I'm wearing the Red Hat apparel. Let's answer those questions with another question. So what if we try more power? So to solve this, there's something called the syndic node. The syndic node is an intermediate node type which acts as a pass-through. The aim of it is to control a given set of lower-level minions, which means that in the case of the syndic node, we're going to be having two daemons, the syndic and the master. Optionally, you can run a minion too. So the way it works is something like this. The main master, which we're now going to call the master of masters — you're going to see why, even though it's already a funny name — sends an order to the minions and to the syndic node.
The syndic node relays those orders to the local master that is running on the same machine, and then that master gets the orders and relays them to the lower minions. So now, our syndic node is actually called the master of minions. And, well, this of course works the other way around. When some of the lower-level minions reply to any orders, they go first to the lower-level master, then to the syndic, and then up to the main master. So if we have the master, which now is our master of masters, it can have as many minions as we would like connected to it, then we can have a syndic node, for example, a master of minions node, which can also have any given number of minions connected to it. But the good thing about this is that we can even nest levels of syndics, one over the other, and have as many minions as we like. So the topology here is kind of up to you. The only places where you're going to have to ensure connectivity is where the lines are. So how do we actually do this? The configuration is quite simple. On the syndic node, we're going to be setting the syndic_master directive. This should point to our main master. We also have to define an ID here, because the syndic node takes the ID from here. Then on the master node, of course, we have to tell it that we are now ordering other masters, that we are now in control of syndic nodes. In the case of the lower-level minions, they should have the IP address of the syndic node in their configuration file. Just a few more steps. We run the syndic node, of course, and on the main master, we're going to have to accept the key, because essentially there's a new key that gets generated. So now you might be getting the idea that the point behind this talk is to make you think of the possibilities. You could have different syndics per environment — development, QA, production — and you could also have different syndics to comply with some security standards that you might have, or that you might want to come up with. Just to mention it, we can even do multi-master with this. We can have syndics and many masters, main masters. We will not cover it here, but just know that this is possible. So that's it for the mine and the syndic node. Now we're on to more heavy metal stuff. Our first stop here is going to be the event system. So what do you think an event system does? Of course, it keeps track of events, but that's not the only thing it does. The important thing is that events can be acted upon. And this system is also the base of the rest of the systems that we're going to see in this talk. In essence, this is mainly a ZeroMQ pub interface. The important thing to understand here is that every event has a tag, which allows for quickly filtering and identifying an event, and it also has an amount of arbitrary data inside of it which tells us information about the event. So with just a simple command, run on the master, we can already start watching for events, start watching what's going on. We can also use this other command to send a random event that we are just making up. You can see that this would be the tag, and that would be the data of the event. The data is mainly a JSON string. In Python, it would be a dictionary, because in fact you can also send events from Python code, from pure Python code. And if we did things right, after sending the event, this should show up if we were watching the event bus attentively. We can see that there's our tag and there's our data. Okay.
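As a side note on that remark about sending events from pure Python code, a minimal sketch might look like the following; it assumes a running minion with the Salt libraries installed and uses the stock event.send execution module through salt.client.Caller, and the tag and payload are made up:

```python
# Runs on a minion; the tag namespace and data are hypothetical.
import salt.client

caller = salt.client.Caller()
caller.cmd(
    "event.send",
    "myco/myapp/deploy/finished",            # the event tag
    {"version": "1.2.3", "success": True},   # becomes the event data on the bus
)
```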
Now, another interesting bit, a distinction that I didn't get to make last year. There are two kinds of modules, Salt modules actually. The first kind is the execution modules and the other is the runner modules. The execution modules are the main kind of module that you see in Salt; they mean something that is going to be run on the minions, whereas the runner modules are going to be run on the master. And these runner modules can be synchronous or asynchronous. They are added via the runner_dirs configuration in the master file. And that's the best part. What do we put inside that directory? Pure Python code. So runner modules are essentially... essentially Python code. And an addendum to this — we just talked about events — any print statements that we put inside our runner modules will be converted to events. So if we do this inside a runner module, we will get something like this. See that? Okay, the tag is not quite nice, but there's the data. So even though you can write runner modules, and you're certainly welcome to do so, it is tempting, but there's actually no need. I mean, there's already a full list of runner modules available in Salt, in the documentation. So feel free to check those out (a minimal sketch of a custom one appears a bit further down). Now, wouldn't it be nice to live in a place like that? Sadly, we're not talking about those kinds of beacons, but kind of. Salt beacons are like those concrete towers with the light bulb on top. They're also a kind of signal. Or something like that. I mean, they use the event system to monitor things that are happening outside of Salt. And when something happens to those things, a notification is sent, which is actually an event. Those are configured via the minion's configuration file, because we're actually interested in the minions at this point. Any system administrators in the room, anyone? Does any of this ring a bell? Something that does notifications? inotify, maybe? Okay. Yeah, I mean, inotify, which is a file system monitoring API to track changes on files and directories. It kind of looks like this. So in fact, there is an inotify beacon, which is used to monitor changes to a certain kind of file, to a certain file, at a given time interval. And there you have it. Any time the resolv.conf file changes, we now get an event. There are also other types of beacons, for example, a process. We can monitor whether or not a certain process, with a process name we specify, is running. If it's not running and it starts to run, we get an event. If it's running and it stops, we get an event. So kind of nice, right? There are actually several beacon types — memory, disk, system load, network settings, the works. There are really a lot in there, and growing. You can also write your own, of course; I'm just going to leave you the documentation here so you can check it out later. Now, this is where things are going to get a little bit more interesting. Yeah, like that. It would be nice if the reactor were actually like this. Believe me, it's actually close. So what is the Salt reactor? As its name implies, the main job of the Salt reactor is to react — but not react in a JavaScript way, thankfully; to react in a Salt way, or salty way. In other words, the reactor is the component that is responsible for triggering actions in response to events. So now you see why we saw the event bus earlier. Of course, we need the event system first, but what is an action? Since we're in Salt, an action is essentially a state that we define.
And what is actually going to happen in reality goes something like this. Something is going to happen — a thing, right — there's going to be an event, maybe because this something was being monitored by a beacon or something else. And the event is going to be picked up by the reactor, and the reactor is going to translate that event into an action, or actually a state. Reactors are defined in the master's configuration file; it's a component of the Salt master engine. As we said, the reactor will be making these associations. The associations — if you remember what an event was, you remember that it had a tag — so the association is made via the tag. We put a tag in the configuration file and we define which states are going to cover that action. The syntax here is quite clear. Do note that there's an asterisk there. We can use wildcards because some events are fired by more than just one minion and have the minion ID in the tag. So for example, this first one here is the event of a minion starting up. So if we want to match all the minions starting up, we can just put the wildcard in the right place. So this whole slide is actually the main reason I'm here. It's the one thing I spent the most time on while working with Salt. So I ask you to please bear with me. There are a few caveats. The state system that we just saw here — those are states living inside the reactor — is actually rather limited. And you can easily skip this while you're reading the documentation and trying out your reactor states. Trying to run things that would normally work in the rest of Salt, in the rest of the states that you have, might not work here. You will find that things are missing. And for starters, forget about grains and pillar. Grains and pillar are not available in the reactor. If you try to use those, you get unexpected results. Also, reactor states are actually processed sequentially. They're first rendered and the data is then sent to a worker pool. But since they're first processed sequentially, you're going to want to make your states as simple and small and as fast as possible. So after long hours of fighting with the reactor and tearing out the little hair I have left on my head, this is the short version. Do not handle logic in your reactor states. This might be a bit too confusing, because what's the point then? But I'm going to explain it in a bit more detail. You should use your reactor states for matching, deciding which states apply to which minions based on an event, and then just call your normal Salt states that you have lying around. Do not try to add logic here. You're going to spend a very long time and you won't be happy about it. So, I don't know if this is actually true, it's what it looks like from the outside, but it appears there's a disconnect, because we're talking about two different engines, even if it's under the same daemon. I like to think it's because of Python namespaces, but I could be wrong. So, too long; didn't read: do not handle logic there. So as we said, with the reactor we are associating events to states, so if we have our custom events and we have our custom reactor state file, the idea is to keep it as simple as this. And if you really have to do complex things and ensure that many, many things are done when a given event is fired, just put those inside the long-running and complex state. So once the reactor parses this and sends it to the worker pool, this will be running in the main Salt namespace, so to speak.
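To make the earlier point about runner modules concrete: since they are plain Python files placed in a directory listed under runner_dirs in the master configuration, a minimal hypothetical one could look like this (the module name, path, function and data are all made up):

```python
# Hypothetical file /srv/runners/deploytools.py, with "runner_dirs: [/srv/runners]"
# set in the master configuration. Runner functions execute on the master and are
# invoked as: salt-run deploytools.mark production
import time

def mark(environment="staging"):
    """Record a deployment marker for the given environment."""
    stamp = {"environment": environment, "timestamp": time.time()}
    # print output inside a runner is turned into an event on the bus
    print("deployment marker: {0}".format(stamp))
    return stamp
```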
Okay, so what can we use a reactor for? One good example is auto-accepting all the keys of all the minions in our environment. You know, it's quite a hassle: every time you start a minion, you have to go to the master to accept the key, and so on and so forth. So as you might have guessed, whenever a minion tries to authenticate, an event is fired, and whenever a minion finishes starting up, there's another event. So for the purposes of this example, we are going to assume that all minions whose names start with "nice" are going to have their keys auto-accepted. So first of all, in the state that's going to be dealing with authentication, we'll first want to remove the keys coming from the minions that have failed to authenticate. The next step is going to be to trigger a minion restart. Now I know this is ugly, this is just for the purposes of the example. Every time I read SSH in the middle of another language, another configuration management system, I kind of creep out a bit, but this is just an example. What we want to do is have the minion re-authenticate, generate the new key, so to speak. So reaching the end of our big state, if we are in pending status — authentication pending status — and the name starts with "nice", we accept the key. And as for the last state, when the minion finishes starting up — this is actually a good practice that you can implement — whenever a minion finishes starting up, we apply a highstate to that minion. This is something nice to ensure that all your minions are consistent, at least when starting up. Now note here that we've been hard-coding the "nice", and maybe some other things around it. It's because, as we said before, we don't have the grains, we don't have the pillar. We don't have a safe way to store information and make it available to the reactor. So keep that in mind whenever you use the reactor. And our last component today is going to be the API. Of course, Salt has a REST API. The main idea behind it is to send commands to our running master. The API supports both encryption and authentication. The authentication, which is something that you might not use very often in Salt — well, Salt has an external authentication system. It allows for authentication against LDAP, against PAM. It also has access control in it. So it's really outside the scope of this talk, it's a very big thing to talk about, but it is worth mentioning that it actually exists. And all the things that are managed by the API are controlled by another daemon, the salt-api daemon. So if it's a REST API, we can, of course, use anything that can make HTTP requests and get information from it, or send information to it. In this very short example, we are making a request to a certain URI for minions, and if we pass the correct minion ID, we're going to start getting data about that minion. In this case, for the sake of simplicity, we're not using authentication here. Now there are several API endpoints available, already bundled with the salt-api. They're pretty much self-explanatory, but let me draw your attention to one in particular, the slash hook. This is a special endpoint. It's a generic webhook entry point. And its whole reason for existing is that any POST requests that are made here will generate events on the master side, on the event bus. And the POST data that we send to it is going to become the data of our generated event.
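As an illustration of that generic webhook, a hypothetical client could post to it with nothing more than the requests library; the hostname, port (8000 is the rest_cherrypy default), certificate path and tag suffix here are placeholders:

```python
import requests

resp = requests.post(
    "https://salt-master.example.com:8000/hook/myapp/deploy",
    json={"build": 1234, "status": "success"},
    verify="/etc/ssl/certs/salt-api.pem",   # or False while testing, at your own risk
)
print(resp.status_code, resp.text)
# On the master this shows up as an event tagged roughly salt/netapi/hook/myapp/deploy,
# which a reactor can then map to a state.
```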
Another important thing: because this is a special endpoint, it's the only one where Salt allows you to explicitly disable authentication, in this particular part. Another thing is, if you disable authentication, it does not mean that you can do whatever you like. You're expected to implement some kind of security. Why would you disable authentication? Well, I like to think of apps that can barely perform an HTTP request, that can barely understand a URL, so they can only do a request with a special hard-coded token that you specify. So that's why we have that there. Now, after all the rush that we've just been through, how about we put them all together? That would be nice, right, friends? Now, I know you might be a bit overwhelmed by now. You've seen a lot of information, and I think that you might be a little bit confused. But I assure you, we can do pretty interesting stuff with all that we saw: the events, the beacons, the reactor, and the API. Now, for a more graphical understanding of how all this connects together, we first have the beacons and the API. The main interesting point about these two is that they're related to elements outside Salt. The beacons monitor things outside Salt, and the API — it's an API, so anybody can make a request to it. So they're both related to elements from the outside. Now, both of these will be generating events in our event system, in our event bus. Those events can later be picked up by the reactor, given what we define inside the reactor, which can then be translated into Salt states. Now, with the great possibility of managing your entire DevOps and your entire workflow infrastructure comes great power. There's a deliberate reordering of the phrase here, because if you configure Salt properly, you're going to have full control of everything in your infrastructure, in your workflow — everything — from within Salt. So as such, you're expected to know what you're doing, and you should always rely on a sensible way of doing things. For example, beware of the security risks. You might be tempted to, you know, give way too much power to Salt, and that's a good thing, but beware of somebody trying to do an ugly thing with it. So, to finish this off, let's take a minute to talk about what you can do with all of this. I'll just be naming a couple of examples off the top of my head, and I'll leave you to think of the rest. That's because that's what Salt is. Salt is kind of like a batteries-included approach that gives you the space to create your own solutions, much like Python is, which is why I love Salt. So just to name an example, let's talk about self-healing. Anybody know what self-healing is, what it consists of? Anybody heard the term? Okay, so in more humane words, self-healing is the ability we give our applications or systems to repair themselves whenever something bad has happened, whenever they encounter an adverse situation, on their own. That's the thing, that's why it's called self-healing. Now, all this might just be a REST API call away, because if in your application you can identify that the bad thing that has happened can be corrected by something that can be automated, you can do it with an API call, because Salt can have control of that.
Or another example, and I think many of you have encountered this: let's say half your team refuses to use Jenkins or the CI tool that you're using. Fear not, because you can leave them with whatever they are using and integrate the rest of the push-build-test-deploy endless CI cycle with Salt. You can manage it with Salt too. Another example: if we were talking about scaling, both up or down or sideways, growing, shrinking, you can prepare for it with Salt. And you can also trust Salt to do some provisioning. We haven't covered it here, but Salt also has a salt-cloud daemon to provision cloud instances. And last but not least, with a good beacon setup, you can make sure that your environments are consistent. If you have things that aren't supposed to change, and you suspect that somebody tends to do nasty things, with the beacons you can react immediately upon any changes that your team didn't want. And so these are mainly all the examples that I could think of with the short time that I was given, as I said before. I really do hope that you can leverage what you saw here to come up with your own solutions, because I'm sure that your problems might be worse than what I simply presented here. So, as for the docs, and as for last year, all the documentation is in the official SaltStack documentation. I really encourage you to give it a read. If you have any particular questions, there's also the possibility of bothering the guys at the #salt channel on Freenode IRC. I do that a lot. And we're reaching the end, so now we have time for some questions, so feel free to shoot away. APPLAUSE Can you compare Salt with Ansible? I'm going to be honest with you, I haven't used Ansible. I know that it maybe has a more basic approach. What I've been told by the people that have tried both is that Ansible lacks some components that Salt has, like the reactor, for example. So it goes along those lines. I was wondering how one could use Salt as a deployment tool. Is it feasible to deploy a complete application with it, or is it just well suited to set up the system, and then you need to revert to a proper deployment tool to deploy your application? For application deployments, right? Yeah, web applications, so set up a database, do something, put some basic data in it, deploy your Django application, set up the web server, and things like that. Yeah, I mean, maybe that was covered in the previous talk, which was more basic, but no, no, no, that's a very good question. And yeah, you can do it. You might have to take a slightly more manual approach in order to tailor it to your environment, but you can certainly do it. And if you're thinking of doing some bare-metal provisioning, you can also do that, not exactly with Salt on its own, but Salt has Foreman integration. Foreman is provisioning software that was mainly written for Puppet, but now has Salt integration, so you can do the whole cycle from it. Hello. I understood that the communication channel is SSH. Is that correct? Salt has a way of working with SSH, but it's not the main way that... What I want to ask is, how do you handle minions running Windows? That's a very good question, actually. It is possible, but I'm sorry, I've never had to do it. I have been playing around with that a little bit and found a way to insert an SSH daemon in Windows using Cygwin, which has an SSH. It seems to be working. I'm just curious if there are other options. Yeah. As long as you are aware of any limitations that you might have, it should work.
The rest of the system is shared and it's the same. Thanks for the great talk. I've been using Salt for three months. It's really cool. There's also this thing where engines are getting some additional events. Currently, I'm looking for a way to make pillars dynamic. I want pillars to get information from Consul or from etcd during deployment, to get some key information from an external key-value store. Is it possible? I would have to look that up. Not entirely sure, but everything appears to be extensible in Salt, so I don't see why not. Yeah. Maybe it is. Hi. How do you upgrade Salt without SSH? Or is there any good approach to do that? Upgrading Salt without SSH, you mean the master? Master and minion. I don't think I understood where you were going with the question. How do you upgrade Salt and install a new version of Salt on the current machine? You need to have access to the system. You need to have access to install the new version and you also have to restart the agent. Actually, that's one thing that is still not handled very well in Salt: restarting minions whenever there's an upgrade, because you can't do it from inside the master, because you're going to be losing communication for a bit. Yeah. It's kind of a tricky spot still. Regarding the question about dynamic pillar, I think it's possible. Salt has a mechanism to get a pillar from external services. You can implement a Python module to... There is a plugin called reclass, which uses that to make the pillar more usable. In fact, my question is, how do you test your states? We have had several breakages in production due to human error. I know. I don't have a production system; I use Salt for personal uses. I don't have the luxury of working with that kind of Salt environment. But I know where you're going with it. It would be nice if you could have a development environment or QA to try things out. Because, yeah, once you've made a change to a state and Salt doesn't like it, it will blow up. So it's kind of tricky. You have to keep looking at the logs and be very careful about what you changed. You're bound to have the last change that you made cause a problem, if you see a problem. Hi. Is it... Yeah, it's working. How do you handle provisioning new servers and how do you handle your inventory of servers? Well, from the... Okay, let's answer the first question. Provisioning is done in two parts. We're talking about bare metal provisioning. You have to use something like Foreman that allows you to boot a system and then apply Salt states to it. So Salt is like Puppet in that way and it's not like Ansible in that way. It doesn't have the ability to provision a system from bare metal, from the ground up. When the system is already installed and has a minion running, you can do whatever you want with it. As for the catalog, the inventory: from the perspective of a master, all the master sees are minions. So it is up to you to group them using node groups or grains or whatever you deem to be necessary. You would basically be setting your categories on your own, building node groups, setting grains on certain minions to identify them from the perspective of the master, but essentially there is no built-in distinction. In fact, when we talked about the syndic node, the master of masters will see all minions connected to it, even those from lower-level syndic nodes. So this is in response to the question about testing the Salt states. We do this.
We use Vagrant on our local machines with a masterless minion setup and then spin up a number of VMs and actually test the states, at least to some minimal degree, so that we can catch human error like that, because we have the same problem. We deploy across hundreds of machines simultaneously, and one error can really mess up your day. So I've tried with Vagrant locally. It works pretty well because you can spin up different kinds of VMs. We use FreeBSD or Ubuntu or CentOS and you can simulate a lot of those environments easily. Interesting. Thank you. Hi, thanks for the talk. Thank you. I'm seeing a pattern here. I'm seeing that most of the questions we are asking about Salt, the things we think Salt can do, can be done perfectly using Ansible, like initial system configuration, or maybe someone asked about service management. I know you already said you haven't used Ansible, but have you heard of someone using Ansible and Salt together? No, pretty much when somebody chooses a configuration management system, they like to stick to it. It has to do with the learning curve and all that stuff, so it would be harder. From the very few things that I've seen of Ansible, it's quite different from Salt; even though both are written in Python, even though both use YAML, they're quite different. So every organization wants to choose one technology and stick to it. But they seem to like to use it. That's interesting. Ansible could be used for the bare metal provisioning part and Salt maybe for the rest, or Salt for the reactors. You could certainly mix those two. My approach is to use Fabric to do just a basic provisioning: to create a virtual machine, to install the Salt master on it and to configure it, and then to apply the high state. Are we talking about the Fabric Python module? Yes, right. The one built on Paramiko? It uses SSH to... Yeah, yeah, right. It uses SSH to do a basic provisioning, to start a virtual machine, to put the Salt master on it and then use the Salt master and the full power of Salt. Yeah, you can also do that. We have time for one more question. Or not. Thank you guys. Thank you guys.
Juan Manuel Santos - Salting things up in the DevOps' World: things just got real SaltStack is a thriving configuration management system written in Python that leverages YAML and Jinja2 which, by now, probably needs no introduction. This talk will explore Salt beyond the minimum required setup, targeting developers/sysadmins already using Salt, and those considering making the switch from other systems but wishing to dive deeper first. Attendees should be familiar with configuration management systems and practices, and comfortable using and reading YAML and Jinja syntax. ----- There is much more to Salt than the basics. This talk will go beyond the minimum required setup and will take a look at Salt under the hood, which will appeal not only to system administrators, but will also be more interesting to developers and to the DevOps community in general as the talk progresses. Topics include: * Introduction and basics review (master/minions, matching, grains, pillar) * Salt Mine * Syndic node * State modules vs. runner modules * The Reactor * The Event System * Salt Beacons * Salt API Attendees should be familiar with configuration management systems and practices, and also feel comfortable using and reading YAML and Jinja syntax. This talk is targeted to developers or sysadmins already using Salt, and to those who are considering switching to it from other systems but wish to dive deeper before making that decision. After the talk, attendees will have a better grasp of the more advanced possibilities that Salt brings, and be ready to apply them to their use cases.
10.5446/21169 (DOI)
So without further ado, please give a big hand to our next speaker, who will be talking about encrypted email. Hi, I guess you can hear me, right? Welcome. Hello, good morning. I guess you are not in the wrong talk. We are here to talk about encrypted, privacy-oriented services, especially email. I'm going to confess something: I just finished the slides ten minutes ago. So this is my practice run. A little bit of history. We are in a very interesting place. We are going to talk about old protocols. I'd like to introduce some history about the place we are in. This was a shipbuilding company, a shipyard, that started in 1900 and closed in 1984. When I discovered that on Wikipedia, I said, well, this is a really good omen to talk about privacy. In the riots with the police, one person, one worker, was killed here. By the way, the slides are in here. Many of these things are linked, because I don't want to get too technical. I am here more interested in talking about tools. This is the Carola crane out there. We have the whole year: we have IRC, we have mailing lists to discuss the technical details. Actually, I'm merging two talks. Holger Krekel — I guess some of you know him; I just met him two months ago — was going to give another talk about moving towards secure email. For personal reasons, he couldn't make it. I decided to put together some of the conversations we were having and try to merge the talks. I'm intending to do two different parts. One more about high-level philosophical questions, if you want, and strategy, because we are a community that builds tools. And the other part is about actual tools. So I work in something called the LEAP Encryption Access Project. We gathered four years ago and decided to make encryption accessible. My role in the team is this. I'm probably not the best person to be here giving this talk, but I was just passing through Europe and nobody else could come. So forgive me for my ignorance. This is my first talk in a kind of serious manner. I started doing Python 10 years ago, but this is the first time I'm actually trying to present something to the world. So, what do we want to do? We want to make privacy usable at all levels. And the motto is: we kind of feel that we have to defend the right to whisper. Because privacy is about the right to whisper. Some of the really smart guys that started this project are coming from these kinds of collectives. Does someone here have a Riseup account? Good. Riseup is a tech collective that gives support to activists. It's like the Gmail for social movements. And this is a problem, because when we start centralizing things, we have a single point of failure. But we are a non-profit. We are something more than a non-profit. We are kind of a distributed network of people that think alike, that wanted to do something in some specific way, and we just look for the way of getting money to do it — by using grants, by using research projects. But we are more people than the people being paid by the particular project. And this is very interesting, because it frees you from the startup mindset. So, since I knew I was going to be very nervous, I probably took the tips for speakers too literally. But I kind of found it fun. So, I'm going to present an adventure in which we meet the non-heroes, I'd say the anti-heroes, that go on a quest, find some weapons — you can guess which kind of weapons we use — we meet some allies on the road, and — probably this is the only important thing from this talk — the monsters we are finding.
Because we are kind of learning along the way. And the adventures yet to come. And my goal here is to convince you that this is important and interesting, and we'd like to have your feedback. Disclaimer: LEAP is a highly opinionated project, with a highly opinionated team that builds highly opinionated tools, and this talk is given by a highly opinionated person. So don't take me too seriously. When I say something is bullshit, just take it as a shortcut for "this is what I think". But I like to, yeah, you know. So, now you know the team. I'd like to mention that we are not just coders; we tend to forget about the other people in the teams that make this possible. So kudos to the other people who are not sysadmins or coders. We have one woman whose only job is trying to get money through research funding so the rest of us can keep coding, and that's much appreciated. So, the quest. I already said that. I guess if you are here it's because you are interested in privacy, so probably it's obvious in this context that privacy is not only for privacy-minded persons. We cannot treat privacy as something optional: privacy in communications is a fundamental human right, and it's about the right to whisper. Privacy, as the cypherpunks said, is the right to choose who I communicate with. And we think that we need to be able to choose who we communicate with when talking with our friends. By the way, this is a very interesting link; in case you don't know it, just click on it and read something tonight. This is the typical thing: like, okay, we need to do privacy-oriented tools for journalists, because they have to keep secrets, their sources and so on. Our saints, the whistleblowers, are kind of appreciated in the community, and everybody understands that they need privacy and secrecy. You probably work in a startup environment. If you are in China doing some wonderful research for selling a big thing, probably you want secrecy communicating with your CEO, to avoid all the Chinese industrial espionage. Or maybe you are thinking about changing jobs and you want to communicate with another CEO and be able to discuss your salary privately. How many people here know this guy? This guy was the one that hacked Hacking Team. Okay. Here you have the whole tutorial about how he did this thing. Let's say I'm interested in interviewing Phineas Fisher. He's probably the most wanted hacker right now that is not in jail, so probably the only way to communicate with him is going to be GPG. How many people here actively use OpenPGP-encrypted email? Good. Now I understand a bit more where we are. Probably I need secrecy to communicate with my lawyer, with my package maintainers. Thank you, guys. But yeah, seriously, when I'm traveling in India, I really, really, really would like to, or need to, have my mom be able to understand what PGP-signing a mail means. Because if not, whenever I get kidnapped (because I'm a little white guy with a credit card in my pocket), somebody is going to scam her for money. So in general, the point is that the whole society would need to, if not understand, at least be able to use the magical trickery that cryptography gives us. And we have fucking failed to do that for the last 30 years. But you get the point by now: we need our friends, our users, the whole society to use this. Without privacy, the whole society cannot work. You probably, yeah, I think you probably remember the crypto wars some years ago. Now we have a much more interesting movie.
For those who don't go to the countryside, this is a silo. A silo is something where you put the grain, and from there you get the cookies in the supermarket. So we are now in the silo world, and this is a very interesting moment to be in. This is from a Tim Berners-Lee article some years ago. You can see that the cool things weren't so cool at the end; some of them survived, some of them died. Ha, ha. Smart guys. In Dante Alighieri's Divine Comedy there is a very big explanation about the layers of hell. Well, it's an interesting thing with historical value. We now have a special place in hell, in the technological hell we all live in, for the people who use GPG. And I'm not trying to be smart or metaphorical here; this is fucking real. In my surveillance device, I need to have at least four, five different apps to communicate with different kinds of friends. I don't know if this is the right order, but you get the idea. Some people think that Signal is totally secure. Thanks, Moxie. We can discuss federation. Some people think that WhatsApp is secure because it has end-to-end encryption. Some people, I don't know why, think Telegram is a cool thing to have; it's a kind of open-source-ish thing. But you know what? This is complete bullshit. This is my rant minute. It's unacceptable that if I want to get a Raspberry Pi from some nice guys out there, I have to get a Twitter account. No, no, no. Twitter is not a tool for communication. Twitter is not a fucking protocol; it's a fucking company. You get the point. This guy, Michael Hayden, former CIA director. The most important fact in the last years has probably been, for my biased view, this one: metadata kills people. And it is not a bunch of nerds who say that. This is the important thing: you were called paranoid five years ago; now, it's not that we think they do it, it's that they fucking say it. So we have a nice pun on the concept of the killer app: metadata is actually killing people. But in some sense, we all want to have killer applications, killer libraries, killer operating systems, whatever. And we are all here selling things or recruiting people. And the key, and this is from a Schneier book, the key to being in this place is that the things we do in the cloud, the internet or whatever, are convenient and free. Free as in beer, mostly. We believe kind of in free as in freedom, but whatever, the whole open source thing. And this is a race, my friends. If we want users to use things, we have to make convenient things and kind of free things. So it's like fighting the enemy with their own weapons. And this is the holy grail for encryption and privacy and all that: we are all kind of looking for the thing that does the right thing, in the right manner, without the user needing to do a fucking PhD to use your tool. And your tool might be many things. Your tool might be infrastructure, for sysadmins. How many people here maintain mail servers? So you know the pain, my friends. Things need to be usable also for developers; I'm really amazed by how much I'm learning about how to make properly usable interfaces for libraries. And at the same time, in the bottom layer of hell, we have the end users, because we are highly opinionated and we tell them what they should think. So this is what the LEAP project and its many branches and heads try to do: to attack the hard problems and the interesting problems at many levels, making things so simple that you cannot screw them up.
I'm not going to talk about the sysadmin part, because that is mostly written in Ruby. No, just because. This is called the LEAP platform, and it's for sysadmins to install systems with properly configured defaults and so on. We also do VPN, but I'm going to focus on mail here. We're kind of presenting some libraries; I'll get there in the second part of the talk. And we have some desktop applications for users. Intermission. Usually people will walk out of the talk at this point saying, ah, but the user doesn't care. The user doesn't care because we don't make them think that it is possible; we are kind of shaping their view of the world and of what is possible for them and what is not. We also think that the user is not going to pay, but probably the problem is not in the user; probably the problem is ourselves. I think it was Tankred who wrote a very nice postmortem on the Whiteout thing with secure email, and they were basically putting numbers on how hard it is to monetize the market for privacy. But it exists. People are willing to pay. After Snowden, governments, universities, whole sets of huge amounts of people, were willing to put money on secure email. We can discuss what security means, because they probably want to keep the private keys of their users, their citizens, or whatever. But the need was there and the tools were ready. So there is another thing. Come on guys, commoditization happens at many layers. We can put the value on the services and let people earn money through billing for a fucking mail service, and it can be as little as 50 cents a month. If you get 50 cents a month, or one euro for two months, from 100,000 people, you get some nice cash for some developers to code on a beach. So probably the model we want to go for is to cooperate in a way that we are not only catching our own fish but building the tools for everyone to fish and be happy. So in the end we want crypto, but we want roses too. This is my fundamental truth: I have been working with email for four years and I fucking don't know what email is or why people don't use it. This kind of puts things in perspective. We think that Twitter or Facebook are the big silos, but they are just a tiny spot there on the whole volume of email communication, and that is only a small subset of spoken language. So come on, it's not going to die anytime soon. I'm going to skip this. If you are the kind of person who doesn't take this as a fundamental truth, you have statistics and surveys and you can see the data. So, the weapons. We kind of brought some weapons from our previous experience; some people have been like 20 years in this business. And for the client parts and the synchronization parts we chose Python, because it was kind of obvious. We were very happy four years ago. We were told that all the hard work is done, the crypto is done, you need some glue code, blah blah blah. The shoulders of giants. It is really true, and we have the crypto there. And crypto is very effective. It works. And we know that it works because in the leaks about the NSA we know that there are two things that they really get mad at: strong crypto and Tor. So it works. It works for a bunch of nerds. But we cannot explain to people all the things that are needed to properly use it, or to properly be at a key-signing party. This is what Snowden made to get Greenwald to have a fucking GPG key. It's an ugly 10-minute video showing how to use GPG on Windows. It doesn't work. When you have a nerd doing the usability studies and doing things for the public, it doesn't work.
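What the project keeps coming back to is doing the OpenPGP plumbing for the user, in the background. As a rough, hedged illustration of what that plumbing can look like from Python, here is the third-party python-gnupg wrapper driving a throwaway keyring; this is not LEAP's actual key manager, and the names, paths and passphrase are made up for the demo.

```python
# A rough sketch (not LEAP's actual key manager) of doing the OpenPGP plumbing
# from Python with the third-party python-gnupg wrapper, so the user never has
# to run gpg by hand. Paths, names and the passphrase are made up for the demo.
import os
import gnupg

os.makedirs("/tmp/demo-keyring", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/demo-keyring")   # throwaway keyring

# Generate a key pair in the background, on behalf of the user.
key = gpg.gen_key(gpg.gen_key_input(
    name_email="alice@example.org",
    key_type="RSA",
    key_length=2048,
    passphrase="correct horse battery staple",
))

# Encrypt (and sign) a message for that key.
encrypted = gpg.encrypt(
    "meet me at the Carola crane at noon",
    recipients=[key.fingerprint],
    sign=key.fingerprint,
    passphrase="correct horse battery staple",
)
print(encrypted.ok)
print(str(encrypted)[:60])   # "-----BEGIN PGP MESSAGE-----", etc.
```

Depending on your GnuPG version you may need extra pinentry options, but the point stands: the user never has to see any of this.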
And at the end, this is how we verify things, because we are fucking lazy. So what if I told you that we don't really need the users to understand the RSA concept? It is awesome, but we don't need it. We probably can have just some layers that do the magic underneath. A very good study 16 years ago showed that the mental models that we use to study usability in crypto are not valid. So we probably kind of have the criminals we deserve. So our plan four years ago, in the happy moment of the relationship with the whole project, was this. Very simple. Three points, glue code, everything nice. Oh boy, how wrong we were. So the thing here is getting GPG management easy, in a background manner, and putting it in the cloud, because users have multiple devices and they want their GPG keys to be there. Put it in the cloud. But at the same time, we want to put the keys in the cloud in a manner that the FBI cannot get them when they seize a server. And then the last part: okay, we just use the normal mail clients. Simple, right? So we went on a quest, and four years later we have 10 Python packages that do some of this shit. This is a very good book. It says 2,000 programmers, three years, forty-seven hundred bugs. We now have around eight thousand bugs in the fucking issue tracker. And we thought our project was simpler. So, we do the key management. I'm not going to talk too much about it here; the logic there is probably 20 lines. Just fetching keys from key servers (they are kind of broken, so we need to figure out a new model for sharing keys) and trawling the web of trust and all that. But yeah, key manager: discovering keys and doing the right thing with them, trying to establish trust relationships between old keys and new keys, and trying to get scores for how good a key is depending on its source. And we want to split the common parts of it out of LEAP. So the nice part is: what do you use for local storage, to have your secrets always stored locally, in the client and in the server? It was there, it was done. The only thing you have to do is hack some setup.py script to do bindings for SQLCipher. That's transparent AES-256 encryption on top of SQLite. Fine. The spaghetti is there. It works. And we have to merge the Python 3 port because we are fucking lazy, but it is there, and it is usable for many other projects. So, the big, important part of the talk. It's something called Soledad, which is basically the idea that we manage the keys and we put them on a magical library that does the synchronization of data that has been locally encrypted, in a way that the server can never tamper with it or infer anything useful about it. The design documents are there and the code is there. Security goals: encryption on the client side, and the local storage has to be resistant to online and offline attacks and to tampering on the server, because we have to assume the server is malicious. Sync goals: consistency; we don't want a single point of failure; the data has to be multi-platform. We failed to even think about mobile; we are on the desktop part, and those things are in the far future for now. Well, not so much for this one. And for usability, we need something that is available, so the user can always get their data. The user needs to be able to recover the secrets if they forget the password. And we want to have something general, because we also want to extend this to things like having a pocket application or a to-do application or whatever.
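For the local storage piece mentioned above, SQLCipher really does behave like ordinary SQLite with transparent AES-256 underneath. A minimal sketch of what using it looks like, here through the pysqlcipher3 binding, which is just one of several ways to get the bindings and not necessarily the one LEAP ships:

```python
# A minimal sketch of SQLCipher-backed local storage: transparent AES-256 on
# top of SQLite, via the pysqlcipher3 binding. Table names and the passphrase
# are illustrative only.
from pysqlcipher3 import dbapi2 as sqlcipher

conn = sqlcipher.connect("secrets.db")
conn.execute("PRAGMA key = 'a passphrase the user never has to remember'")
conn.execute("CREATE TABLE IF NOT EXISTS docs (doc_id TEXT PRIMARY KEY, content TEXT)")
conn.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)", ("doc-1", '{"subject": "hi"}'))
conn.commit()

# Everything written to secrets.db is encrypted on disk; without the key
# the file is just noise.
for row in conn.execute("SELECT doc_id, content FROM docs"):
    print(row)
conn.close()
```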
So probably some of this sounds similar to what the Ubuntu One guys were doing. So we said, hey, super nice. They had a library that was basically an abstraction layer to put JSON documents on a storage backend and sync it. And so we started using it and doing hacks on top of it. Now we kind of have a fork, although if the project comes back to life, we can probably use the whole thing again. So we put CouchDB on the server and we put SQLCipher on the client. We have another database for metadata, and a pool to do things with the key manager and GnuPG. The password never arrives at the server, because we do something very smart: it's a zero-knowledge thing called SRP. We derive keys to get stronger keys from smaller inputs, and we basically make encrypted blocks and put them on the store. So these are the secrets. Each block is just a JSON document with the ciphertext of the original thing you put in there. You encrypt things using keys, only that. The thing for mail is that we have the whole mechanism for mail to arrive from the traditional SMTP world, put it in your inbox, decrypt it, split it into pieces and put them in the storage. So you can process your mail on one device and have your already-seen inbox on many other devices, along with your GPG keys. And you have a very simple REST API to sync. Allies. We kind of rely on Thunderbird; we wrote a Thunderbird plugin. We have a desktop client that exposes IMAP and SMTP proxies locally. Thanks to all this, the server part was kind of easy and nice to do. Thanks to all these people. And we kind of started collaborating with ThoughtWorks, because they said, oh, this is very nice encrypted replication of data, so we can put it on a server. This is our client, and this would be the mail user agent. This is the server part with the encrypted blobs. And what the Pixelated project is doing is putting all this on a server and serving a Python user agent that does the webmail. So we can put our client on their server, but we also close the loop and take the webmail and put it in our local client. And you can do the two things: the corporate mode, in which the private keys are on the server, or you can use it by shipping it inside a desktop-only application. It looks like this, and people are really excited about this kind of Gmail-ish thing that does all the right magic in the background. Monsters. My biggest regret is not having dealt with complexity before, and that probably comes from our relative inexperience with big projects and Python and packaging and so on. We started having too many packages. It's fucking unacceptable. Newcomers find it very difficult to understand where each thing is. When you start overloading inheritance, things get crazy very quickly. We also get some complaints about the whole Twisted deferred thing from newcomers, which is kind of a religious war; but right now it is very nicely isolated, so people can just use the REST API and forget about the things that are happening in the background. Another thing that has delayed us a lot is trying to get the client-server thing, your local daemon, mixing together the Qt paradigm with its event loop, the Twisted IMAP server, and some other things. Simplify. We are at that point now, trying to simplify it. The thing works. The thing has tests. People have contributed. We have a big company like ThoughtWorks contributing code, but we need to lower the barrier to making a significant contribution to the project in general. And some adventures are ahead.
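To make the Soledad idea above a bit more concrete (derive strong keys from a small secret, encrypt documents on the client, hand the server only opaque blocks), here is a deliberately simplified, conceptual sketch. It is not Soledad's actual format or code, just the shape of the scheme, using scrypt from hashlib and AES-GCM from the cryptography package.

```python
# A simplified, conceptual sketch of the "encrypted block" idea: derive a
# strong key from a low-entropy secret, encrypt a JSON document on the client,
# and store only ciphertext. This is NOT Soledad's actual wire format.
import os, json, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # scrypt stretches a small input into a strong 32-byte key
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

def encrypt_doc(key: bytes, doc: dict) -> dict:
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(doc).encode(), None)
    # the "block" the server sees: a document id plus opaque bytes
    return {"doc_id": doc["doc_id"], "nonce": nonce.hex(), "blob": ciphertext.hex()}

def decrypt_doc(key: bytes, block: dict) -> dict:
    plaintext = AESGCM(key).decrypt(bytes.fromhex(block["nonce"]),
                                    bytes.fromhex(block["blob"]), None)
    return json.loads(plaintext)

salt = os.urandom(16)
key = derive_key(b"user passphrase", salt)
block = encrypt_doc(key, {"doc_id": "mail-42", "subject": "hello", "body": "..."})
print(decrypt_doc(key, block)["subject"])
```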
Part of the team now is working on a couple of European Union research grants that have to do with key server validation, key exchange, and trying to build some bridges across different privacy projects. We want to share some of the knowledge and even code. And Panoramix is another cool project, about doing mix networks for privacy. We are one of the first clients to implement a new draft, a new standard proposal, which is called Memory Hole, that tries to attack the design error in email that all the data, all the headers, go in clear text. Last week we were at the OpenPGP summit, and it seems that Thunderbird has already implemented this. So the whole idea is that you take all the headers, you put them inside the encrypted and signed mail, and you replace the outer headers with dummy stubs. So you have a very nice and simple way of protecting the mail while in transit. There are also some nice proposals to do forward secrecy in OpenPGP, doing something quite similar to what the Signal protocol is using, this ratcheting mechanism, so that if an attacker can get some of your keys, they cannot recover the whole history that is stored there, trying to break the reconstruction of the whole communications. This is probably going to be something really, really exciting in the next year. And in Soledad, we are at a point where we are not finding any important bugs now, but we need to do some things for scale. One of the main things we are going to do in the next months is trying to break the atomicity of the syncs, because right now everything is in the same pool, and that's kind of shitty. We want to be able to sync all the keys first on a new device, and then probably all the headers, and then probably the attachments on demand. We have to deal with eventual consistency in a nice manner. There are a lot of things that need to be done, and that's it. This thing is my fingerprint, and for the young people, this is not a Twitter handle; this is something called IRC. We are there, and I'd be very, very happy to talk to you guys and learn all the possible things you can communicate to me. So thanks for an absolutely fascinating talk. We have managed to leave a full ten minutes for questions. In fact, almost eleven, which I think is a great idea. I have a million questions, but you don't want to hear my voice, so I hope you have the same questions in your heads that I have in mind. Who would like to have the first one? Thank you very much for the talk. Do you have any idea how to deal with a big amount of encrypted email, how to search in it? What to do with stored-away encrypted email when the keys expire? What to do in the long run? What we are doing now is, well, it's different in this case. In the original Bitmask client, what we do is that we don't store the encrypted blocks anymore. We use the very nice and old code in the standard library, it's like ten-year-old code, that parses the MIME tree into pieces, and we store all the metadata in different documents in the document store, and we delete the original encrypted block. So we can do efficient search, mainly by headers; we can build indexes for searching for the main things in headers. The Pixelated project is using a different approach. They are using Whoosh, I think it's pronounced like this, and they do full-text search on the whole body of the mail.
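As a rough sketch of that full-text-search approach (assuming the library is indeed Whoosh), building and querying a local index looks roughly like this; encrypting the resulting index directory and keeping that key inside Soledad, as described next, is a separate step not shown here.

```python
# A minimal sketch of local full-text indexing with Whoosh. The encryption of
# the index directory, and storing that key in Soledad, is a separate step.
import os
from whoosh.index import create_in
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(msgid=ID(stored=True, unique=True), body=TEXT)
os.makedirs("mail_index", exist_ok=True)
ix = create_in("mail_index", schema)

writer = ix.writer()
writer.add_document(msgid="1", body="the quarterly report is attached")
writer.add_document(msgid="2", body="let's meet at the conference centre")
writer.commit()

with ix.searcher() as searcher:
    query = QueryParser("body", ix.schema).parse("conference")
    for hit in searcher.search(query):
        print(hit["msgid"])   # prints 2
```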
And what they do is they build the index for doing the full-text search locally, and they encrypt the index and store the key for this block inside Soledad. So locally you have a quite nice and efficient index to do any kind of search. We probably could do the same with notmuch, just encrypting the notmuch blocks and storing the keys for the encryption inside the metadata database. More questions. I came across an encrypted email client called Mailpile, which sounds kind of simpler but less ambitious. Can you compare the two? Because I felt you were trying to get done in time, which was great, but when you started saying here, let's have a client in the server and a server in the client, I was lost. I probably need better diagrams for that. We are really similar to Mailpile. Actually, the Pixelated project started by considering using Mailpile for the frontend. The thing is that I personally found Mailpile kind of monolithic, and we tried to decouple the things to play nice with the whole provider infrastructure. So Mailpile is probably doing this whole thing in the client: they do their web server thingy, they do all the handling for GPG and they do the storage. It's basically the same thing. It's a webmail with encrypted local storage. They don't attack, as far as I remember, the replicability problem. So we kind of started from the upper layer, and this is a very hard constraint. I got Mailpile running with my agent, so at the end, I don't care. We kind of focus on this layer, and the user agent should be pluggable. Mailpile is really, really nice, and for the amount of funding they got, they are really far along in terms of features. So I really would like to plug it into the whole Soledad encrypted storage. Good. Yes, the main difference is replication. That's great. Okay, any more questions? Yes. So one of the things I've been struggling with is: can we really escape it? I mean, even if we use something encrypted, people will communicate with us via Gmail. Even if we use encryption, if it's on the phone, I mean that's either Android... I mean it's either Google or Apple. So can we really escape it? Or is the only way to communicate securely just, you know, low tech, just meeting people? I don't have a good answer for that. It is totally true. Our code now, haha, I kind of screwed it up. We had code on GitHub and we are moving to a GitLab instance, in which a requirement is that you have an email that is not from one of the big mail providers. As you say, it doesn't make any sense if you are hosting a mailing list server with some pretensions of privacy and only one person in the mailing list has a Gmail account: end of the game. Have you read Moxie Marlinspike's recent rant about the end of federation? It's a very interesting discussion going on in the community. Because he thinks that having central control allows you to reach a lot of people, like when deploying something. My personal impression is that that is not the main goal. Like, federation doesn't really need to mean that you are sharing... isn't it the open source problem, or the Creative Commons problem, in an abstract way? We are open, but the big corporations are less open. So they are open to our things, our code, our conversations or whatever: they can do data crunching, they can do data mining on it, they can make money out of it, and they return zero. I really started being scared about the federation thing when Google closed down the XMPP endpoints.
Because that means they are fucking going to kill all the interoperability. And for me, mail is important because it is like the last common language. It is the only universal identity anchor. If we lose that, we are screwed. More and more, with Facebook, it is going to move towards GSM identity pieces. So I don't know, hard question. I want to believe. You said GSM identity pieces? What was that? What I mean is that right now, for all the social networking silos, mail is the identity provider. It is your reset for everything. But more and more, if you look at the peripheries of the capitalist system, they are starting... new people, younger people, don't have an email. They only open an email account to open their service account, and then they forget the password for the email. And more and more, I am seeing this trend of the GSM SIM, the chip, being your identity anchor. They can build on top of it. So are you saying that email is the only right way and there is no future for protocols like Signal or something like that? I am not saying that. Mail sucks. Mail actually is mostly spam, and we are going to have a big problem with spam if we encrypt all the metadata, and for work and for university and whatever. But this is the pragmatic approach. I am really excited about things like Pond. It is a project I think by Adam Langley, maybe I am mistaken, but it is like an experiment towards a new messaging protocol with security considerations from the beginning. But right now, it is something that 10 nerds are using. My point with this being strategically important is that mail is going to be there for a while, and we cannot wait. There are situations in the world, like, I don't know, you are running an abortion network in Malaysia. You cannot wait 10 years until the hackers come up with the right tool. So, considering all this crap, I have been talking about a transitional strategy until we kind of have some decent protocol in place. It has to be open and federated. And the thing you were talking about was Pond. P-O-N-D. Thanks. Oh, questions. What's the time? We've got time for about one more. Yes. Hello, hi. Hello, hi. I think one of the things you mentioned in your talk was really important, and it is around the question of why normal people either don't do it or can't do it. And it is around usability. And I think in the open source world we have lots and lots of people who are technically very good. We are enthusiastic about pushing the boundaries in terms of protocols and cryptographic correctness and all the rest of it. And whether it is cryptography and email, or whether it is something like LibreOffice, I think there is always a stumbling block, which is the usability for normal people. And it is a shame. I mean, I am older than most people here. I have seen over the years great ideas which are intrinsically excellent fail because Granny Smith didn't understand the word or the icon looked wrong, something that was relatively easy to fix. So my question is: to what extent are you doing the user research, the user testing with normal people, to say, actually we don't need to put the effort into our key replication or whatever it is, because the thing that is stopping people is something else completely, as observed by a more disciplined understanding of how users interact with what you are building? Let me search for one little thing. I didn't have time to get into it.
This is absolutely important, and usually it is not hard-coded into the processes of the groups, because we have the, I'd say, engineering bias. We think we know. We think we are gods. We think the users are fucking stupid. And that's a very wrong... I'm generalizing, just trying to be funny, but that's a very wrong approach, and we don't realize it. I've been talking mostly about mail, but in the first part of the project we were trying to solve another, different problem, which was secure VPN. The idea was having a Trojan horse, because users are not going to install a desktop client for email, but they probably will install a desktop client if it is the only way to get VPN. So we spent... now we have our regrets, I have to say, but we spent some time trying to solve the other problem, about VPN, cross-platform and so on. And if you don't know this blog, just subscribe to it. This is Gus Andrews, who gave some workshops about usability studies in a very scientific way and came up with a long list of very interesting things that need to be changed, because our mental model, basically, for how the user understands and reacts to the application was not optimized. We fucking need more of these things: earlier, during, afterwards. I should also mention that one of the challenges right now is to close the feedback loop with these kinds of things in a faster way. Okay, that's it. Thank you very much. Huge hand for this wonderful...
Kali Kaneko - Against the silos: usable encrypted email & the quest for privacy-aware services At the LEAP Encryption Access Project we aim to make secure communications both easy to use and easy to provide. We bring some tales (and some, hopefully, tools) from the quest for user-friendly crypto software. How to make people love the email experience in the 21st century, without risking their privacy. How to encrypt data locally, sync it to servers that you can lose, and still be sexy. ----- Technologies that allow for privacy in the communications, allowing the escape from the pervasive massive surveillance, have been there for some years now, but yet its use by the general public is far from widespread. The challenge, in our view, can be defined by one of making usable crypto. Usable for the end user, usable for the sysadmin and for the fellow application developer. In the quest for massive adoption of encryption technologies, we've been forging several python packages to solve different problems, always standing in the shoulders of giants. We bring some tales from the trenches to share, from our humble experience trying to deploy clients and servers to provide Secured Encrypted Internet Tunnels and Encrypted Email. This includes interesting challenges dealing with key management, automatic and secure software updates, and processing of email while using stock cloud providers, while still being resistant to hostile environments. We'll show a webmail email user agent based on this architecture, a promising future for decentralization and privacy. We'll also talk about how to store locally encrypted data, and will present Soledad (Synchronization of Locally Encrypted Data Across Devices). Soledad is a library with server and client components that allows the development of different applications based on client-side, end-to-end and cloud-syncable encryption of private data. We'll play with some toy apps to showcase its features and potential.
10.5446/21173 (DOI)
Okay, let's welcome Larry Hastings, aka DS Dad, for the removal of the GIL. Alright, yes, this is a talk about the Gilectomy, which is a project to remove the GIL from CPython. Let me preface my comments by saying this talk is going to be exceedingly technical. I'm just going to go right into the heart of the matter, and so it's kind of designed for people who are already core developers, who are familiar with the internals of CPython. I'm hoping that you'll understand multithreading pretty well. I'm hoping that you'll have at least a vague understanding of how CPython works internally, the concept of objects and the reference counts on objects. Also, if you don't understand this stuff, a good thing to do would be to watch my talk from last year. I didn't give it here, but I gave it at another conference. No, I did give it here, actually. It's called Python's Infamous GIL. It's on YouTube, and it'd be really good if you could go back in time and watch it before you came in the door. Anyway, so let's talk about the GIL. The GIL was added in 1992, and barring the addition of a condition variable to enforce fairness, it has remained essentially unchanged in the 24 years since then. Now I want to make it clear, the GIL is a wonderful design. It really solved the problem. It was a fabulous design for 1992, and it really still holds up today in a lot of ways, but there are some ramifications from this design. So first of all, the GIL is very simple, so it's really easy to get right. C extension developers don't have any trouble understanding how to use the GIL. Internally we never have any problem about owning the GIL or not owning the GIL when we're supposed to or not supposed to. Since there's only one GIL, we can't have deadlocks; with only one lock, you can't have a deadlock between more than one lock. And since you only ever run with a single thread, there is almost no overhead from the GIL; we only pay for it when we switch threads. So your code goes real fast. The GIL adds very little overhead to your code. Now if you're single-threaded, your code is going to run really fast. This is a really great design for single-threaded code. If you're IO-bound and multi-threaded, this works great, and this was actually the original design for what threading was for, back when almost all computers were single-processor. The problem is when you have a CPU-bound program and you want to run on multiple cores simultaneously, because you just can't. And that is the pain point of the GIL. So again, in 1992, all the computers around us were single-core, even the big servers. But the world has changed since 1992. These days we have these wonderful laptops which are multi-core. Even our phones are multi-core, and our wristwatches and our eyeglasses have all gone multi-core. I have a workstation at home that has 28 cores in it; if you count hyper-threading, it has 56 cores. We live in a deeply multi-core world, and Python is kind of ill-prepared to take advantage of that. I want to point out this comment is still in the CPython source code today: CPython has only rudimentary thread support. I suggest that maybe it is time to consider adding more sophisticated threading support to CPython. After all, the goal of a programming language should be to expose all of the various things that your computer can do to you, to take advantage of all of the different resources your computer offers.
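As a quick illustration of the pain point just described (this example is mine, not from the talk, but it uses the same kind of deliberately bad recursive Fibonacci that shows up later as the Gilectomy benchmark): CPU-bound threads under the GIL take roughly as long as running the work sequentially, because only one of them executes bytecode at a time.

```python
# A quick illustration (not from the talk) of the GIL pain point described
# above: two CPU-bound threads take about as long as doing the work twice
# sequentially, because only one thread runs Python bytecode at a time.
import time
from threading import Thread

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def timed(label, func):
    start = time.perf_counter()
    func()
    print(label, round(time.perf_counter() - start, 2), "seconds")

def sequential():
    fib(28)
    fib(28)

def threaded():
    threads = [Thread(target=fib, args=(28,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

timed("sequential:", sequential)
timed("two threads:", threaded)   # roughly the same wall time under the GIL
```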
And Python can use all of them except for the multiple cores that you have. So it is kind of a sore point. Now there was an attempt back in the 90s, this thing called the free threading patch. This was an attempt in Python 1.4, done in 1999, to get rid of the GIL. It didn't require changing the API, so it didn't break C extensions, which is a good design. What it did is it moved most of the global variables inside the interpreter into a single structure and added a single mutex lock around incref and decref. I believe there was a Windows variant of it at the time that used InterlockedIncrement and InterlockedDecrement, which is a Win32 API that is equivalent to atomic increment and decrement. But the single mutex lock was a little on the slow side. Your program would run between four and seven times slower, which, let's be clear: what everyone wants is to get rid of the GIL because they want to use multiple cores because they want their program to go faster. So when I say, oh, we removed the GIL and it goes slower, nobody is excited. So this was not a very exciting patch at the time. If you want to read more about it, there was a lovely blog post by David Beazley a couple of years ago, who got it to run on modern hardware. It's called An Inside Look at the GIL Removal Patch of Lore. I looked at that too. But let's talk about what I'm doing now. So the Gilectomy: I have a plan to remove the GIL. And actually what I should say is that I have removed the GIL; I removed the GIL back in April. The problem is that it's terribly slow. But in order to remove the GIL, you kind of need to have a plan in place. There are a bunch of considerations you must account for in order to remove the GIL and have the project be successful and maybe get merged or used by people someday. So I say that there are four technical considerations you must address when you are going to remove the GIL. Those are: reference counting. Again, every object in the CPython runtime has a reference count that tracks the object's lifetime, and this is traditionally kind of unfriendly to multithreaded approaches. There are global and static variables. There aren't nearly as many as I thought there were, but there are a couple. There's some per-thread information, which I think all lives in one place now. There's a bunch of shared singleton objects, like all the small integers, like negative one through 16 or something like that, None, True, False, the empty tuple. For all of these, Python creates one of them, and every time you use an empty tuple, it uses the same empty tuple everywhere, because it's immutable. You need to address extensions. C extensions currently run in this wonderful world where they don't have to worry about locking, because the GIL protects them. They've never run in a world where they can be called from multiple threads at the same time. They've certainly never run in a world where multiple threads could run in the same function at the same time. And so there's a lot of code that depends on only a single thread running in the function, like: if static thing is equal to null, then initialize static thing. All that sort of code is just going to break when we go multi-core. And finally, we need to worry about the atomicity of operations in Python. The developers of the other Python implementations, PyPy and, more strongly, IronPython and Jython, discovered that a lot of Python code implicitly expects a lot of operations to be atomic in CPython.
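A small, hedged example of what that implicit expectation looks like in ordinary Python code (again, my illustration, not the speaker's): with the GIL, unsynchronized appends from many threads just work, and a gilectomized CPython has to keep that promise.

```python
# The kind of implicit atomicity Python code relies on: many threads appending
# to one shared list with no explicit lock. Under CPython's GIL each append is
# effectively atomic, so no updates are lost and no thread ever observes the
# list in a half-updated state.
from threading import Thread

shared = []

def worker(thread_id, count=10_000):
    for i in range(count):
        shared.append((thread_id, i))   # no lock: the GIL makes this safe today

threads = [Thread(target=worker, args=(t,)) for t in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))   # always 80000 under current CPython
```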
If you append to a list or if you set a value in a dict, another thread could be examining that object, and it must not see that dict or that list in an incomplete state. It needs to either see it before the append has happened or after the append has happened. So we need to guarantee that atomicity of operations: you can never see an object in an incomplete state from another thread. In addition to these four technical considerations, I say that there are three political considerations that we must address, because it's not simply a technical problem. There's a whole world of people using CPython out there, and there are demands that are going to be made on removing the GIL that are not strictly technical demands. I say that these are: we need to not hurt single-threaded performance. This was something that Guido established in a blog post, which I'll talk about in a minute. We need to not make single-threaded code slower, and we need to not make multi-threaded IO-bound code slower. That's a very high bar to meet. We need to not break C extensions. This is sort of my statement. CPython 3.0 broke every C extension out there, and it's been however many years, five years, six years, since CPython 3.0 came out, and there are still plenty of extensions out there that haven't upgraded to the new extension API. We need to try and avoid breaking C extensions as much as possible. And finally, don't make it too complicated. And of course, this is a judgment call. But one of the things that's really lovely about CPython is that it's pretty easy to work on. Internally, it's not all that complicated. It's conceptually very simple. The code is very clean. And it would be a shame if we broke that feature of the CPython source code in order to get rid of the GIL. So let's try and preserve that. Now there are a couple of approaches that people have talked about, ways to get rid of the GIL, that I don't think will work. And so just to sort of set the stage, I want to talk about those for a minute or two. There's what I call tracing garbage collection. This is also mark-and-sweep garbage collection. This would let us get rid of reference counting. And again, reference counting is traditionally very difficult to do in a multi-threaded environment, so this would be very favorable to multi-threading. Tracing garbage collection: it's not clear whether it would be faster or slower than reference counting. Conventional wisdom says that garbage collection and good reference counting implementations are about the same speed, and then people like to argue, but that's the internet for you. Where this falls down is that this is going to break every C extension out there. It's a very different world going to pure garbage collection as opposed to reference counting, and so C extensions are just not going to work anymore. So that breaks every C extension; I say we kind of can't afford to do it politically. And it also would be very complicated. It's a much more complicated API than reference counting. Reference counting is a relatively simple API, and still people mess it up. It can be a little obscure at times; it can be a little hard to figure out exactly what the right thing is to do with reference counting. But garbage collection, I think, is going to be that much worse. Even more so than tracing garbage collection, there's what's called software transactional memory.
Armin Rigo, who just showed up today, has been working on software transactional memory as a research project with the PyPy interpreter for a couple of years now. And it sounds like a fantastic technology. Is it going to be fast enough? Yes, absolutely. If software transactional memory works, it's going to be really fast. It's going to be really great. It's going to take wonderful advantage of multiple cores, and you're going to have very little locking involved. But it really falls down on the other two. It's going to break every C extension out there horribly. It's going to be incredibly complicated internally. Also, again, this is research-quality stuff right now. It's not clear to anybody when it's going to be ready for production, and I don't think that CPython is able to wait. So let's move on. Let's talk about my proposal and the specifics of my proposal. So again, I said there were those four technical considerations. The first is reference counting. What I say is: we keep reference counting. That way we don't break C extensions. So it's going to be the same API that we have now, Py_INCREF and Py_DECREF. The important thing is that the compile-time C API does not change. Now, like I said, I got rid of the GIL in April, and what I did is I switched to what's called atomic incref and decref. This is where the CPU itself provides you with an instruction that says: I can add one or subtract one to this memory address, and do it in such a way that it's not possible to have a race condition with another core. Works great. Costs us 30% of speed right off the top. So this is working great, and this means that our programs are correct, but it's awfully slow, and we're going to look for alternate approaches here. Global and static variables we kind of handle on a case-by-case basis. Again, all the per-thread stuff has already been moved into PyThreadState for me; I guess that was done a couple of years ago, I hadn't noticed. So that's ready to go. Shared singletons: they just remain shared. All those shared objects, like the small integers and None, excuse me, and True and False, those just get shared between threads. And that's the whole point of getting rid of the GIL and running multiple threads: Python programs don't change. C extension parallelism and reentrancy: there's just nothing for it. They're going to be running in a multi-core world, and they're going to be called multiple times from multiple threads simultaneously, and they just need to get with the program. So it's going to break the extensions all over the place. Atomicity of operations: we're just going to add a whole bunch of locks. Every object in CPython that is mutable will have a lock on it, and it will have to be locked while you're performing the mutating operation. So this is going to add a new locking API to CPython. There are going to be macros, PyLock and PyUnlock. These are going to turn into calls into the PyTypeObject, which is going to sprout two new members, ob_lock and ob_unlock, which I'm guessing will be exposed to Python programs as dunder lock and dunder unlock. All these functions only take one parameter, which is the object to lock or unlock, and they return void, because they always work. And for objects that are immutable, my claim is that ob_lock and ob_unlock can be null. So you either support locking or you don't, and if you don't support locking, you don't need the functions and you can just skip them. So what objects need locking? It's all mutable objects.
When I say all mutable objects, I mean C mutable, not just Python mutable. For example, consider the str object. From the Python perspective, strs are immutable, right? But internally, they have a couple of lazily computed fields, like the hash. The hash is initially initialized to negative one (which, by the way, if you ever looked at the hash function, it says it will never return negative one; negative one means uninitialized internally, so that's why negative one is an illegal hash value in Python). So it's initialized to negative one, and then the first time somebody says, give me the hash of this str object, it goes and computes it, stores it, and returns that. So that's mutable state. Now in the case of the hash, that's harmless: if we had two threads, they both saw the negative one, they both compute the hash, they both overwrite it. They're overwriting with the same number, so that's harmless. But there are two more fields, utf8 and wstr; both of these are also lazily computed. These are conversions to UTF-8 or UTF-16, respectively, and those allocate memory. And if there was a race where they both saw null and they both go off and allocate memory and they both overwrite, you're going to leak memory at that point. So we're going to have to put a lock around those. So the str object is currently not safe, and I haven't dealt with it yet. So right now we can leak memory inside of CPython; it's terrible. So every object is going to be locked inside of the Gilectomy, which means that we have to have as light a lock as possible. I would call this a user-space lock. Under Linux, we have this wonderful thing called the futex, which is literally a lock: you can declare any four-byte-aligned memory address to be a lock and you can wait on it. It's really more of a building block for writing your own mutexes and other synchronization objects. It's really great. Windows for 20 years has had what they call the critical section, which is user-space only until there is contention. And OS X has what they call the pthread mutex. A couple of people now have told me that the pthread mutex is guaranteed to be user-space only until there is contention. So we have the user-space locks that we need for all of the major platforms. I don't know about the other platforms, Solaris and FreeBSD and all those sorts of things; somebody else is going to do that work. But for all of the major platforms that Python runs on, we're going to have the support for user-space locks that we need. Or maybe they don't get a no-GIL Python. We'll see. Now, as for the political considerations, for my approach with the Gilectomy, I would say that, yes, it's not going to be any slower, and yes, it's not going to break C extensions. Now this may seem crazy, because I just told you a couple of minutes ago that I was going to break every C extension out there because of atomicity of operations, and I'm making it 30% slower by adding atomic incref and decref for reference counting. So how can those two statements be true at the same time? The answer is that I'd say we have to just have two builds. So we would have Python built with the GIL and without the GIL. You build it with the GIL and everything is the same as it is today, and that way all the C extensions continue to work. That would be the default build for everybody on every platform.
And then if you're some sort of futuristic person who wants to live in the multi-core world, you can build Python in the special no-GIL version, at which point PyLock and PyUnlock start to work. So these macros, Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS, and PyLock and PyUnlock, would either be no-ops or active depending on which build you were in. If you have a GIL, then begin-allow-threads and end-allow-threads do something and PyLock and PyUnlock are no-ops. If you don't have a GIL, then lock and unlock are going to do something, and begin- and end-allow-threads are probably no-ops, although I may hide some work in there. This also means that a C extension will never accidentally run with the wrong build, because we can have different entry points for each one. If you have a module just called module, then you have an entry point called init module. We could say, okay, if you run the without-GIL build version, then we're going to have a different entry point with a no-GIL in front or something, just to make them two different entry points. That way, no one will ever run in a no-GIL build accidentally, and it's strictly opt-in. No C extension will run in a no-GIL build until they're ready, until they declare that they have a no-GIL entry point. You might actually be able to build a single extension that worked in both, by the way. We could add (let's say those things are macros) actual C functions for them, and if you were very careful, you might be able to write a single .so that supported both modes. I don't know if that's interesting or not; it's just something that I'm mentioning. As long as we're effectively declaring a new C API, because this is really what this is at this point, it's kind of a new C API. It looks very similar to the existing C API, but it has this reference counting that works a little differently, and the atomicity of operations means you have to have locking all over the place, and you can't guarantee that you're going to only run on a single thread at a time. This might be a good time to start inflicting some best practices on people that currently are optional. It's actually true in CPython that you can declare your own type statically, and you can create an object with it, and you can pass it into the CPython runtime, and CPython has never seen this object or this type before, and it has to work. We could stop allowing that. There's a function you're supposed to call, called PyType_Ready, that's optional, and we could say, okay, now it's required. By the same token, there's a new PEP called PEP 489, this thing called multi-phase C extension initialization. I don't really understand it, but it has something to do with initializing C extensions, and I was like, well, that's very relevant. This might be a good time to say all these things that used to be optional are now required. If you're going to run in a no-GIL build, you have to call PyType_Ready, you have to use PEP 489, you probably have to use the limited C API, all those sorts of things. This Gilectomy idea does fall down on the don't-make-it-too-complicated consideration. It is getting a little complicated, because we're effectively talking about two different builds running at the same time from the same source base. A CPython core developer would have to read the code and say, oh, PyLock, that's only active in the no-GIL build; Py_BEGIN_ALLOW_THREADS, that's only active in the with-GIL build.
So they're going to have to read every bit of code twice to see how it's going to react with-GIL and without-GIL. You're also going to have to be very careful about where you lock, but ultimately, this is the price we're going to have to pay in order to get rid of the GIL. I don't see any simpler way of doing it. As I said, this is something I've been working on for a couple of months; I think I started in February. Initially, I was calling this whole thing Confuse-a-Cat, just to pick something from Monty Python. But then the name Gilectomy came up and, like, well, that was done. That was the name. Now, as I mentioned, back in 2007, Guido wrote a blog post called It Isn't Easy to Remove the GIL, where he talks about what would have to happen in order to remove the GIL. And I agree with everything in this post that he wrote. It's all really insightful, except for the title. Turns out, if you know where to start, you can remove the GIL in about a week. Here's how. Step one (step zero, really): atomic incref and decref. You switch Py_INCREF and Py_DECREF to use atomic increment and decrement. I only support 64-bit Linux right now, so I just went to GCC and used the intrinsics. So I only support GCC right now. Number one, you have to pick what kind of lock you're going to use. Again, on Linux I'm using futex-based locks. There's a paper from Ulrich Drepper called Futexes Are Tricky, where he walks you through how to write a mutex based on futexes, and I'm basically using his design. Step two, you need to throw locks around the entire dict object. You cannot run a CPython interpreter without having a working dict, and so the dict object needs to be safe. So you just need to go through every external entry point, any place where someone is calling into the dict object from outside, and make sure that it's locked properly and unlocked properly. Step three, same thing with the list. Again, CPython uses dicts and lists internally for a lot of operations, and you just can't have a working interpreter unless you've got both of those working. Step four, there are about 10 freelists inside of CPython, where when you allocate an object, it looks to see if there's a free one waiting, and if there is, it just uses that, and if there isn't, it has to go off to the allocator. These freelists make things go a little bit faster, but obviously they're not thread-safe yet, so you need to add a lock around them. You need to do that about 10 times. Step five, you need to disable the garbage collector and GC track and untrack. The garbage collector is completely broken in the Gilectomy right now; it's going to be quite a while before we get that working again. Which actually, by the way, makes my numbers look a little better than they really should, because there should be some garbage collection overhead that I don't have. But the garbage collector is just completely, totally broken in the Gilectomy, and it's just completely shut off right now. Step six, you need to actually murder the GIL. This was a pleasure when I got to do that part. There's just a structure; you just don't allocate that variable anymore, take all of the things that are switching the GIL and just stub them out or comment them out or whatever, and they all go away. Step seven: when you switch threads in CPython internally, there is a thread state that's stored in a global variable, and everyone just refers to that.
So whatever thread you're on, they just look in the same spot, and that's always the information about the current thread. And obviously, you can't do that anymore if you're running multiple threads simultaneously. So instead, every time that people refer to that, they're actually going through a macro; you just need to change the macro so that it pulls that thread state variable out of thread-local storage. That was actually pretty easy to get working, because everyone's using the macro, so everyone's really good about it. And finally, you need to fix some tests. Specifically, there were only a couple of tests that really broke when I did this. Mainly they were sensitive to testing exactly how big the dict object and list object were, and now that I've added this lock to them, they had gotten a little bit bigger, and I just needed to fix those. And actually, the entire Python regression test suite started to work, apart from the stuff that was actually using threads, and there were a couple of those. So back at the language summit, I announced that it was about three and a half times slower by wall time, and about 25 times slower by CPU time. What I mean by that is, I was running a test. I run the test the same way every time: I run seven threads, all running the same program, and I time it. And I did it with normal CPython, and I did it with the GIL removed, the Gilectomy CPython. And when I did that with seven cores, it was three and a half times slower if you just watch the clock on the wall. But if you count up how much CPU time was used, well, I was using seven cores as opposed to normal CPython just using one core, so you multiply that number by seven, and it was about 25. So 25 times slower to do the same amount of work, which is kind of depressing. So this is the official benchmark of the Gilectomy. This is what everyone has been running. It's a really bad Fibonacci generator. I'm showing you this just to impress on you how horrible the benchmarking is so far, how little code I can run through a multi-core CPython right now. But this does work, and I can run it on multiple cores simultaneously. It's not exercising very much code inside of CPython. It's looking up the fib function over and over and over in the module dict. So there's a single dict that's just getting slammed with lookup requests, and since it's locked, that means there's just some contention around that lock. We're performing a function call, which has always got some... it's a pretty heavyweight operation in CPython. We're running a little bit of bytecode. We're talking to the small integers like two and one and zero, and actually all the small integers, because of the way fib works; you use all those small integers a whole lot. Again, the small integers are shared between threads, and they all have reference counts, which means that we're changing those reference counts constantly from multiple threads, which is costing us a lot of performance, it turns out. And we're doing a little bit of math, and the math really isn't hurting us at all. So this is what it looks like. I got some flak for not labeling my axes, so there, I've labeled my axes. The vertical is time in seconds, the horizontal is the number of cores being used, and this is GIL versus Gilectomy. So having the GIL is the blue line. It's way faster to have the GIL right now. And with the Gilectomy, this shows you that it's taking... it seems to be curving off, so at some point it might actually go...
It might not be making it that much slower to add a core to it, but that's going to be a way, way out. There's also this dip around four. I don't know why it's there. I think it's just the way that the tests interleaved. I would say ignore it, assume it's not there. I had to show it because that's what my data actually showed. But more interesting, again, this was really wall time. I think CPU time is more interesting. So the amount of time that it took to compute these seven Fiminochi numbers, it was Fib of 30, I think. In CPython, it's next to nothing. You compare that with running it with the Golectomy, and you just goes crazy up. So obviously, it's incredibly slower. How much slower? This is a graph of how many times slower it is per core comparing normal CPython to the Golectomy version. And again, there's this dip around four. I would say ignore it. But what this is telling us is that it's about twice as slow with one core, and then it shoots up to about 10 times slower with two cores, and then it just keeps going up and up and up. I think seven cores is about 19 times slower here. So why is it so slow? First of all, the Golectomy isn't changing that much code, or at least not yet. So the first thing I would say is that I don't know for certain. It's kind of hard to measure at the Sprint set at Pycon a couple months, I guess early June at that point. There were some Intel guys who hung out with me, and they ran it under Btune, and they kind of confirmed some suspicions here. The second thing is actually lock contention, and that's what everyone was probably assuming was number one, but it's actually number two. Number one is synchronization in cache misses. This is what's really slamming the Golectomy. Something consider is that nothing inside of C Python is private. So like a normal multi-core program you might write, you might design around being multi-core and you'd have, okay, here's this thing that's thread local to this one and thread local to that one. There's almost nothing in C Python that's thread local. Everything is shared across all cores all the time, and all the cores want to talk to them simultaneously, and that's kind of the fundamental thing that's killing performance, is that we really don't have any thread specific stuff. So let's talk for a minute about why things are slow and fast. So oh, that disappeared. Okay, so this is cache. Your computers at this point have three levels of cache between them and the RAM that they're talking to, and if it's 1x to talk to level one cache, level two cache is about two times of slow, and level three cache is ten times of slow, and talking to RAM itself is about 15 times slower. So you want to be talking to cache. Every CPU user so fast that normal slow RAM can't keep up with them anymore, so we have all this elaborate caching going in between, and if we can keep the cache fed, we can keep the CPU fed, we can keep your program running. At the point that we break the cache, we're going to start slowing down your program a great deal, and that's really what's going on in the Galecmi is that the cache never gets to warm up. So let's just as an example, these are all new slides I made this morning. So let's talk about, let's say we have a program, we've got four cores, zero, one, two, and three, and we have the number two, and we're running the Galecmi version of CPython, and we're running our Fibonacci benchmark, which is using the number two a whole lot. So all of them currently have the number two in cache. 
So if they want to look at the number two, they can just look at it, and they've already got it accessed, they don't have to wait. But then let's say that one of these cores is going to actually do something with the number two, so it's going to py increment the number. So it's going to change the reference count. Number one is changing the reference count, it's incrementing it, and that means that the number two has changed, that memory has changed, which means that it must now invalidate the cache for all the other cores for that cache line. And that cache line is 64 bytes, which is more than enough to cover the entire long object, and so now none of the other cores have that number in cache anymore. And so the next time they want to talk to the number two, they have to go load it. Someone tells me that they can actually talk to the other core and maybe pull it, but it's still a lot slower than simply having it in cache ready to go. So this is happening constantly. Any time that you examine an object in CPython, you change its reference count. Any time you change its reference count, you are changing the memory. Any time you change the memory, you are invalidating the cache for all the other cores, which means that the more cores you add, the slower you go. And that's what I'm observing in my numbers. So there is a solution for this, or at least a combination of approaches for a solution. There is a technique called buffered reference counting. We're going to use this in combination with something else. So this is how it works now conceptually. These blue boxes at the bottom, these are supposed to be cores, and this lighter blue box with the O, that's representing an object O. So all of them are talking to this O directly. So right now, if you want to examine an object, you increment its reference count. When you increment its reference count, you just go and do it. You reach into the object, you change the number. That means that we have to synchronize that across cores. So we're using this atomic anchor and decker, which is slow. We'd like to do something a little bit faster. So why don't we change it so that we can use... If we could change it so that all changes to reference counts were done from a single thread, then we wouldn't have to use atomic anchor and decker anymore. We could just use what I would call unsynchronized anchor and decker. It'd be a lot faster. We can do that. So all we do is we change it so that instead of writing the reference count directly, we write into a law, a big memory buffer that just gets reference count changes in it. So every time you want to change the reference count on an object, you don't change it directly. Instead, you write in a log, you say, O, add one to the reference count. You just write that into the log, and you don't worry about it. And meanwhile, there's this other thread, this fourth blue box where I wrote commit. That's the commit thread. That's the guy who's going to actually make the reference count changes. So he walks the log and sees, oh, I should add one to the reference count for O, and he just goes and does it. But he's the only thread making reference count changes, so he can use unsynchronized anchor and decker. That's great. The problem is, all we've done is move the contention. Now instead of having contention around the reference counts, we have contention around this log. So we need to lock and unlock the log. We really haven't solved any problems. But we can fix that. So let's go to a single log for thread. 
Now when thread zero wants to increment the reference count on O, it writes into this reference count log. And then the commit thread comes along and makes that change. Now we have a single log for thread, and we have a single thread making the changes. There's no synchronization overhead hardly at all. We need to have a little bit when we swap these buffers around. That's great. Now we have an ordering problem. Let's say that thread one is running along, and let's say that our object O is stored in a list. And this is the only place where it's stored. And all the reference counts have settled out. So there's a reference count of one on O right now, and that's the reference where L, the list L is holding a reference to that object. So thread one comes along, and it says, oh, I'm going to iterate over the list and just print everything in it. And then thread zero comes along later, and it says, oh, I'm going to clear list L. This means that the reference count log for thread one is going to increment and then decrement. And then later, the reference count log for zero is just going to decrement. The problem is, what if we process the log for zero before one? We're going to decrement the reference count. I already told you the reference count was one, so it's going to drop to zero. We're going to deallocate the object. And now we're going to process the commit log for one later, and we're going to explode. We're referencing an uninitialized memory. It might have been freed. It might be another object. Some crazy things are going to happen. It's not a good idea. We can solve that, actually. By the way, I want to make it clear. If you were saying, well, what if you just swap those and you did zero in front of one? That's not a general solution, because you could have a mirrored thing across two threads. You have two lists, two objects, each thread increments over one of the lists, and then clears the other one. You can't solve that by reordering the operations here. What you can do is consider that any two operations of Inker and Decker, if you have two operations, one of them is Nicker and Nevecker, the other one is Nicker and Nevecker, can you swap them? And the answer is, in almost every case you can, if you have two Inkers, you can swap them. That's harmless. If you have a Decker followed by an Inker, you can swap them. That's harmless. The only time you have a problem is if you have an Inker followed by a Decker. If you swap those, you might have an incorrect program now. So with this observation, we don't have to preserve very strict ordering on the operation of Inkers and Deckers. So we can do this buffered reference counting a lot cheaper by just having two different logs for each thread. One is an Inker log and one is a Decker log. And all we need to do is be very careful that we process all of the Inkers before we process all of the Deckers, and now our programs can run correctly and we have almost no logging. So this solves the problem of having atomic Inker and Decker around reference counting. We still have the problem about clearing cache lines, so we can solve that too. There is a technique, Thomas Wooter's actually got this working in the Galactomy thread. It's not ready yet, I think. And he was taking in a kind of a different approach. We had this idea of having a different reference count for every object for every thread, and then there would be no contention. 
I'm not optimistic that that's actually going to work in the long term, but this is going to help for buffered reference counting. What we do is we take the object O and we break it into two pieces. We have the reference count separate from the object. And then we push them apart in memory so they're not next to each other. Now the reference count is going to be on a different cache line than the object. If we combine that with buffered reference counting, now we have a single thread. That's committing the changes, and it's making these changes to memory that is way far removed from the object itself, which means that we're not invalidating any of these cache lines anymore. At that point, I'm pretty optimistic that we can get a lot of this performance back. So remote object headers, Thomas said he had working, so I'm optimistic that that'll work when it comes time to work with it. I've been trying to do buffered reference counting, and fundamentally, C Python is allergic, as it turns out, to not having reference counts being accurate in real time. So it doesn't work right now, and I'm going to have to have my head down and debug it for a week, and I just haven't had the week to spare recently. Once I get that to work, I'm pretty optimistic that the Galactomy is going to get a lot faster. So we're going to go after that. There's an idea to make objects immortal, or specifically reference counts that are immortal. If we had an immortal reference count, then we're not changing the memory, which means we're not invalidating cache lines. That can make things faster. Unfortunately, it adds an if statement to basically every anchor in Decker. It's hard to tell without doing an experiment. Thread private locking, the idea here is that most objects never escape the thread in which they were created. So if you create a dict, and you only ever use that dict on the current thread, then you really don't need to do the expensive locking operations around it. It's only when the dict was ever used by a different thread that you would have to actually really lock it and unlock it. And so if we could lock objects in such a way that the locking was basically free when it was thread local, we could get a bunch of performance back. And I have an idea for how I think I can get that to work. I'm going to have to talk about garbage collection someday in the Galactomy branch. But again, it's going to be quite a ways away. But in order for it to be code that people can depend on, CPython is going to have to support garbage collection. I think there are a bunch of techniques for garbage collection that support lockless concurrent access. It's super advanced stuff. I completely don't understand it. Current CPython garbage collection is basically stop the world garbage collection. That seems like it's acceptable. And I think I can get that to work. So I think the initial approach is going to be stop the world. And then if we get this all to work and CPython has this Galactomy branch, that's actually a viable thing, then the super brain-y technologists can come along and fix my garbage collection. One idea, by the way, for making garbage collection not be so expensive, again, there's all this locking involved around it. I think we could do the same thing with buffer reference counting. We could also have buffered tracking and untracking of reference counted objects. Just track this object, untrack this object, write it down in a buffer and have a commit thread that commits them later. 
Finally, one guy, Eric Snow, I think, suggested that as a way of mitigating the breakage involved around C extensions, we could have the ability to auto lock C extension C called it. Where whenever you called into a C extension, there would be an implicit lock involved that only one would prevent more than one thread from running inside of the C extension at a time. And that can probably get a lot of C extensions up and running very quickly. Again, this is going to be way far down the line before we're going to be ready to look at things like that. So my final thought for you is the journey of 1,000 miles begins with a single step. The performance looks terrible right now, but this is simply, there's no way to get rid of the gill without starting to get rid of the gill, and this is what starting to get rid of the gill looks like. So I'm still optimistic, even though the numbers are terrible, I'm optimistic that in the long run this is going to work. Thank you. So this is, I think I have about five minutes left for questions. Thank you very much, DS Dad. And let's see, we have a question over here. You try also with other things other than Fibonacci, something more complex computationally, maybe. No, nothing complicated. So again, so as it stands, I've added locking around the dict object and the list object. So the dict is safe to use, the list is safe to use. Number like integers and floats are immutable, so those are safe to use. Anything that's mutable and not in the list that I just said isn't safe to use inside of the Galactomy right now. So if you try and do a computation with a set object, it's just going to blow up. So I haven't done any other programs because I didn't think they'd be all that interesting. And again, this is early days anyway. The really, my hope is that there's a lot of work to be done around the Galactomy adding safety to these other mutable objects like sets and byte arrays and all of these sorts of things. And once we got all of those objects to be safe, then we could run any C Python program and we could test that. So that's really where I've spent my time instead. Yes? You might correct me if I'm not right. The Stackless approach a couple of years ago, wasn't that also an approach to remove the Gil and can you compare that? No, Stackless never attempted to remove the Gil. Stackless, the original concept around Stackless was an original original like a long time ago. Stackless has been around for a long time. The original idea with Stackless was if you have a Python program, let's say that it's heavily recursive, you run out of Stack and then you get a Stack exception. I don't remember what the exception is. If we, because the way that function calls work in C Python is that they're actually implemented using C function calls. So every time you make a function call in Python, it turns into about four function calls in C and that's building up the C stack and then they eventually blow the C stack and you're out of memory. If we could separate those two so that whenever you made a function call in Python, all it did was use heap memory, then we could make function calls all the live long day and we never run out of Stack. And then all the context for a function call lives in the stack and now we can very easily switch between function call stacks, which means that we can have coroutines. And so that was kind of the direction that Stackless was going, was just separating the C stack and the Python stack and they haven't used that technique for a long time. 
They actually do these crazy stuff where they actually take the C stack and they copy it off memory and copy it over somewhere and then they use some language to change stacks like the stack pointer and the instruction pointer and jump into another coroutine. But Stackless is more about coroutines anyway. It's never been about removing the gil. So with the approach of, for example, async.co and Twisted and all those asynchronous networking frameworks that tend to handle their own, they don't use threads basically. So with the approach of a gilectomy-based C Python, so you would run like an async.co reactor in or async.co event loop like in each thread and then what sort of overhead would you be looking at just in theory for those reactors that never ever talk between threads? Well, the theory is that these would be completely divorced and adding more cores would make your program scale linearly. In practice, I don't think we're ever going to get there. So the answer to that question is the answer to all the other questions about performance which is that the gilectomy becomes interesting at the point at which you can add cores to a program and it gets faster rather than slower. And again, it's going to be a long time before we get there. In general, how does the gilectomy effect Twisted and other asynchronous programming things? I can only think that it would be good for them, just like every other program in particular. That sounds like a reasonably parallel program. These things should run in parallel and the reason that we run them on multiple cores, the reason that we don't run them on multiple cores right now is because of the gil, but they're already basically parallel operations anyway. You're going to have to eat the locking overhead, of course, but you're going to be able to have multiple programs, multiple threads running simultaneously on the same code base with the same local data store, all the local objects that are in CPython. So I think it's like, I'll put it this way, if it doesn't make your program faster, then switch to the single-treaded version and you'll be happy. Thank you. Okay, one more question and Larry will be on the core developers panel later. I'd be happy to answer questions. Go ahead. I'd be happy to answer questions about the gilectomy during the core developers panel, which starts at 3.45 today and I'm chairing, so I'm forced to attend and stay for the whole time. Thank you very much for the wonderful talk. Have you considered keeping C extension compatibility with, for example, like a global interpreter lock just for C extensions like with reader-reader locks? Well, I've considered it. It doesn't work. The problem is that if you had a global lock that you just used for C extensions, you have code that isn't paying any attention to it, it's going to be changing state. The C extension expected that the state doesn't change from underneath its feet because it's holding the lock right now, now your program is incorrect. So it's just not a, it's a non-starter. Okay, thank you very much, Larry. Let's give him a big hand. Thank you. Thank you.
Larry Hastings - The Gilectomy CPython's GIL means your Python code can only run on one CPU core at a time. Can we remove it? Yes, we can... in fact we already have! But is it worth the cost? ----- CPython's "Global Interpreter Lock", or "GIL", was added in 1992. It was an excellent design decision. But 24 years is a long time--today it prevents Python from capitalizing on multiple CPUs. Many people want us to remove the GIL. It turns out, removing the GIL isn't actually that hard. In fact, I already removed it, in my experimental "gilectomy" branch. But the GIL is one reason CPython is so fast! The "gilectomy" makes CPython shockingly slow. This talk will discuss the history of the GIL, how the GIL helps make CPython fast, how the "gilectomy" removed the GIL, and some ways we might be able to make the "gilectomy" version fast enough to be useful.
10.5446/21175 (DOI)
Okay, let's continue. I'm pleased to introduce the engineer, Liana Bagratze, who talks about Learn Python the fun way. Hello, everyone. Thank you all for coming to my talk this evening. Today I'm going to talk about one interesting type of tools that can be used in education. And I guess some of you definitely came here today to find out how to learn Python the fun way. Okay, let me get started then. First of all, a little bit about myself. My name is Liana Bagratze. I am a software developer from a beautiful Russian city called St. Petersburg. I work for JetBrains. And as part of my work, I am involved in development of PyCharm ADU, which is educational edition of PyCharm IDE, specially designed for Python learners and educators. Because of my work in PyCharm ADU project, I am in constant search for interesting tools and ideas for learning and teaching programming languages. And I'd like to say that Python community does awesome job trying to make Python available for anyone and providing resources for those who want to learn it. Please raise your hands. Those who somehow involved in Python in education. So that's you who do this awesome job. But no offense, but going through these resources could be a rather boring process for some people, especially for kids. But what if we could forget that we actually trying to learn something new and just have fun instead? What if we could continue improving our skills while doing something fun? And we could do this. Playing games has been proven to be one of the most effective ways for us to learn. First of all, it decreases the fear of failure. I'm sure that all of you have met people who thought that they were too stupid for programming. But I have never actually met anyone who thought that they were too stupid for Counter-Strike or Pokemon Go. And also, if you fail a test, it might have serious consequences and it might be rather disappointing. But if you fail a level in a game, there is nothing scary about it. You can just start again. In a game, we often get an instant reward. Once you have completed a level, you get an achievement. And it's nice to have a lot of achievements, isn't it? When we play a game, we usually try to win. And competition is the moving force of progress. And at last, games often provide good visualization, which makes it easier for us to master hard concepts. At this point, playing games seems to be a very good way for us to learn something new. And of course, I'm not the first person who had an idea to apply to programming. And today, I'm going to show you three projects that are my personal favorites, just to give you an idea of how it works. And I'm also going to tell you how you can help these projects other than donating. The first project is called Code Combat. In this game, you help a hero to achieve some goals on each level. It is insanely cool for children and people with no little programming experience. They say on their website that if you want to learn programming, you need to write a lot of code. And that's definitely true. But their job is to make sure that you're doing this with a smile on your face. I like these words, but let's take a look how they do that. So this is one of the levels in Code Combat. Our hero is stuck in a room with fireballs, and we need to survive. How can we accomplish it? Well, we have some equipment. It provides comments that our hero can do. For example, simple boots allow us to move to different directions. 
And later in the game, we will get some sort of advanced boots that will allow us to move to the specific coordinates. We also have a sword, but that's not very good sword, actually, because you need to hit an enemy twice to kill it. When you first see some level, they already provide you with a code sample. And your job is to modify it. Sometimes you need to add some lines, and sometimes you need to modify conditions or something else. You can also run it and see what goes wrong. Let me show how it works. At first we see that our hero dies because he moves only to the right. Then we fix our code and run it again. This time everything is okay. And yay, we get some XP and gems that can be spent to buy better equipment. I also like that they can in some way analyze your code. For example, in this case, I misspelled the name of one of my enemies, and they told me that I made a typo so as I don't get frustrated because I can't find such a stupid mistake in my code. If you are interested in this project, you're very likely, because there are plenty of ways for you to help it. This is by far the best contribution guide I've ever seen. It's a screen showed from code combats, and they represent their contributors as game characters. For example, they call coders arch majors. The project is 100% open source, but it's written in cofiscript, so it's not that easy for Python developers to contribute with code. But you can actually complete all the levels with cofiscript too, so you can learn cofiscript with the code combats help, and then pay back to it by contributing your code. But you can also help in other ways. First of all, you can help with translation. Code combats has translation to many languages, and this fact is pretty impressive because we don't have so many programming games that are translated into languages other than English. This fact, okay? So there is still a lot of work for you to do. A lot of levels hasn't been translated yet. But the best part is that you can actually create new levels yourself. It's not as easy as I'd like it to be. They still have special editor for creating levels, and you can express your creativity and add some bonuses to your karma. The next project is called Coding Game, and it is known for contests that they organize. In these contests, you can compete with other developers in some sort of turn-based games, where you need to write a successful strategy. If you manage to do this, the chance is high that you will get a good price, or even will be invited to a job interview. If anybody is interested, the next contest will be held in September, so you have enough time to prepare and sign up. But let's now take a look into their onboarding puzzle that explains to new players how the whole project works. Each puzzle has a goal. Sometimes they also give you nice synopsis with spaceships or something like that, and we also can see what our code actually does in this visualization window. The editor is prefueled with the code that retrieves all the needed information from the standard input, so you can concentrate on the code that really matters. Once you've done with the code, you can run provided test cases and see if your code works good or not. For these test cases, they show you all the input and the expected output, and you can also see what exactly goes on on visualization. I'd like now to show you how it works, but at first I want to explain what you need to do in this puzzle. 
You need to retrieve coordinates of your enemies from the standard input, and on each turn you decide what enemy you are going to shoot. But they actually already tell you that you need to select the closest enemy to survive. Okay, I've already rolled the correct solution, and let's run the provided test case. Yeah, it looks like we survived. Let's now spoil our code and let's select the closest enemy instead. Run it again. And we are dead, because the very first spaceship got to us and we died. This project wouldn't be so cool if there were no community around it. It's really nice that you can view other people's solutions and learn from them, but they also have so-called community puzzles that you can find in the community puzzles category, and you can write your own puzzles. Maybe you can come up with the idea how to introduce the algorithm for finding the array maximum for people with some interesting story. Okay, the last project is called Check.io. You can't have that impressive visualization, but it still has the huge content base and the friendly community. The tasks here are divided into islands, and each task actually is called a mission. Each mission has three different states. Once you've done with solution, you can publish your code and see what other people think about it. Maybe you will be even lucky enough to get your code reviewed. Let's now dive into the mission called Medium. Your job here is to write a function called Check.io that returns the median of a list of numbers. Once we realized what we need to do, we can try to solve this task. Why not? And again, we see yet another version of text editor. This time, the editor is rather limited. It has no code completion and instant error highlighting, which would be helpful. They also have little task description to window, but they can provide you nice hints. Check.io provides hints in the form of an actual conversation between two people where you ask questions and more experienced developer answers you. In some tasks, they also have nice visualization feature for you to test your code on the actual data. For example, in this case, you can change the data directly on this picture and they will show you with the orange line what median is and you can see if your code works. Let's now see how this visualization works. First, we will try to check the solution that always returns the first element of a list. Then we magically type the correct solution and try it on different input data. Okay, let's change the data. Yes, it seems like it works. As I've already said, Check.io's text editor is rather limited, but the good news is that there is a plugin for PyCharm that allows you to complete Check.io missions. In addition to the ability to write solutions in PyCharm and post them directly to Check.io without copy pasting it back to the browser, you can view other people's solutions directly in PyCharm, change them and play with them. At first, all the missions available at Check.io were created by the team, but now anyone can create their own missions. In order to do that, you need to clone a GitHub repository, write your mission there and suggest it back to the Check.io team. After review, your mission might become accepted and might become available for anyone. There is also initiative to translate Check.io languages other than English. You can also propose your translation in form of full request. 
In conclusion, I'd like to say that even though there are a lot of great resources for learning Python, there is one interesting direction in educational tools that can actually change the way we learn. I've seen three great projects that attempt to do that and I encourage you to try them and maybe to contribute to them. But I also want to know that it has been plenty of time since I proposed this talk and no great games have gained my attention since then. It means that there is a lot of space for creativity and for you to invent something cool in this area. For example, I definitely would enjoy playing a game that teaches me Django or Flask or NumPy. That's all. Thank you again for coming. And don't hesitate to find me at PyCharmboost. Thank you. Okay. Someone wants to ask questions. Questions. I don't know. Okay. One moment. Hello. Can you talk a bit more about Python as your idea because I was not aware about it and I don't know exactly what it does. I think you better come to me at PyCharmboost and I can make a demo for you. Okay, great. Thank you. Okay, another. The last. Thank you very much. That's very interesting. Can I ask where you're going next with this? Anything that you haven't told us about that you've got in your plans for the future? You mean according to PyCharmboost, do you end gamification or just... Anything that's in line with your aims that you talked about about making this more fun and more accessible, more easy to learn especially for... I think that I should do what I can do and I should implement these ideas into PyCharm IDEA. And I intend to do that. Okay, thank you, Diana, for two answers and for your interest in this conference. Thank you all again. And thank you very much for coming. Thank you.
Liana Bakradze - Learn Python The Fun Way Programming is one of the most important 21st-century skills and tons of different online and offline resources can help you to master it. On the other hand, playing games is really effective way for us to learn and it's also the most fun. But is it possible to learn real programming language like Python by playing a game? In this talk I'll show you some projects that allow you to achieve that. I also want to inspire you to help such projects and to suggest ideas how to do that. ----- Programming is one of the most important 21st-century skills. It doesn't only provide promising career opportunities but teaches how to reason logically, systematically and creatively. Code readability, rich standard library, straightforward syntax and other features make Python a great language for teaching beginners how to program. Python community is very supportive and friendly to newcomers and does awesome work to make Python available to everyone. Tons of different online and offline resources can help you to master Python programming. Problem solving is the classical way of learning how to code. But it can be boring for some people, especially for kids. On the other hand, playing games is really effective way for us to learn and it's also the most fun. You can find different games designed to teach basics of programming, but most of them use special visual environments and don't teach real text based languages. But is it possible to learn programming language like Python by playing a game? In this talk I'll show you a few projects for different age and levels that allow you to achieve that. I'll pay attention on methods that are used to teach programming. I also want to inspire you to help such projects and to suggest ideas how to do that.
10.5446/21176 (DOI)
Good afternoon everyone Welcome to this afternoon session. I'd like to introduce Lorena Mesa. She's a platform engineer at Sprout Social in Chicago and she's a Star Trek fan and she's gonna talk to us about spam and natural language processing Hello So real fun fact I have a loud voice is this too strong or should I be a little quieter? Louder? Oh, this is great. I can be loud. All right. Thank you so much for joining me tonight This afternoon. I should say the name of this talk is is that spam in my ham? Subtext a novices inquiry into classification So as my announcer ready said my name is Lorena Mesa and as you can see I'm a huge star truck fan So live long and prosper Apart from that I'm here from Chicago a little bit about me and why I wanted to chat on this topic I'm actually a career changer. So a few years ago. I came from being a data analyst in the social in the social science space specifically I worked at Obama for America doing data governance and I then switched into doing software engineering about three years ago Some big questions that were driving me at the time are Captured in this talk, but some other things I do. I love Django girls. I helped with the workshop yesterday It's a glorious glorious thing if you have the opportunity to mentor Please do if you would like to sign up for another one, please do it as well I piloted Chicago is a group that I founded in Chicago And I recently was voted to the board of directors for the Python software foundation, which is very exciting So I'm gonna chat a little bit about this great experience that we've all had before I Think we might have all had some kind of email at some point in time where we get Something that flutters into our inbox and it has language like de-junk and speed up your slow PC And of course we have we we would trust an email that comes from AOL underscore member info at emails Yes with a Z on it dot AOL dot com and of course I'm gonna trust anything that tells me this is free This is great. You really should do it So I think when we see emails like this, we know visually just by looking at it that it's a piece of spam We know it's junk. We don't care about it. We don't we ignore it So how do we move from saying I know it when I see it to saying I can programmatically detect what a piece of spam is by using Python So in today's chat, we're gonna be thinking about three questions One what is machine learning to how is classification a part of this world and three? How can I use Python to solve a classification problem like spam detection? This tech this chat is going to be really focused on a beginner understanding of machine learning So if you are looking for more intermediate and advanced talks, I definitely know this would be a great conference to check out some of that But we're gonna really be taking this from the lens of a beginner So machine learning if you were to follow the emojis on the left-hand side at the top left would be me Confused not sure what machine learning is. I'm like is it a robot is it Johnny five Johnny five being a superhero from a children's movie. I loved when I was a little kid Who's super quirky can arch their eyebrows and come save the day? Well, I don't really think machine learning is Johnny five. 
So let's go ahead and think a little bit about what machine learning is One of the things I like to do when I begin working in a new problem space is I try to find some language to actually gravitate myself to Understand what types of problems I will be solving If I were to look around for some language defining machine learning I might find something like this some discussion saying that there's pattern recognition computational learning Artificial intelligence what's going on? I don't know what that is But there is a part of this that does make sense to me the study of algorithms that can learn and make predictions on data I like data. I like algorithms. Tell me more So I think a better way we can think about machine learning is to borrow some language from Tom Mitchell the chair of machine learning department at Carnegie Melling He wrote machine learning which is kind of a quintessential text for folks who want to start learning about machine learning And he says we can think about machine learning in three kind of parts We can say a computer program is said to learn from experience E With respect to some task T and some performance measurement P if its performance on T as Measured by P improves with experience E Okay, so we have a task we have experience. We have a performance measurement. I can do this This makes sense to me So when I think about experience and how do I know what I know? Well, I'm a human and when I when I being a human the way that I know what I know comes from my memory I Have memories stored up that teach me things about what I like what I don't like what I should do what I shouldn't do So maybe as a kid and I was a very hyperactive child I would be running around like a maniac all the time because I had to be super fast But what happens when you run around as a little kid and you're growing in your body? You might be klutzy you might fall and skin your knee how many times you have to skin your knee and elbows for me It probably took quite some time for me to learn. I shouldn't run around like a like a maniac I should walk around like a normal person so I don't hurt myself that pain was a teaching experience for me Likewise when my grandmother was in the kitchen making tamales because I love tamales I would always trying to be sticking my hand on the stove and more than once I definitely burned my hand The idea of putting your hand on red hot coils not very smart So over time I learned to recognize that as a sign. I shouldn't do that So when we think of experience as a human we may think of our memories What does that mean in different problem spaces? If I were to ask the question, what is the historical experience of the stock market? Well, I could say if I want to understand what a piece of stock has done historically I might go look at what the records tell me about the price of that stock two years ago on July 17th One year ago on July 17th, and you know depending on how far back I want to do some analysis I have historical data that can tell me something about the historical performance of that stock So we have human memories. We have some memories there But maybe in other spaces again, we want to go to historical data that can teach us something So coming to machine learning and classification, what does experience actually mean? Let's frame this in Mitchell's framework. 
Our first problem is going to be identifying a task For us we want to classify a piece of data So our question is is an email spam or ham and the idea here of ham is just anything that's not spam It's cute. It rhymes. So spam or ham. That's our task Our experience we're gonna have a set of labeled training data Essentially, what does that mean? We have a collection of emails and we have a label that's that is saying that the emails either Ham or spam so we have a collection of emails that we already know is one thing or the other And then our performance measurement is the label correct So what we need to do is be able to verify if emails are indeed spam or ham So thinking about a classifier that we can use we can think of naive Bayes Now if Bayes is a type of probabilistic classifier I love this image because I really want to know who's the person that has a neon light of the Bayes theorem like in their office or in their front window. I don't know who that person is but I applaud you You are really great So now you Bayes comes to us from stats theory. It's based on the Bayes theorem. No surprise One of the key things with the Bayes theorem is when we talk about the likelihood of events The key thing here to note is that we treat events as independent of one another That's where the naive assumption comes from when we say we're going to be using a naive Bayes classifier So for those of us who may not remember exactly what it means when we talk about independent and dependent events Let's have a quick refresher So if I was going to ask you what's the probability of flipping a quarter six times in a row and getting heads How would you go about solving that problem? Well, let's think about it on the first flip. I have two outcomes I have heads or tails so the likelihood of getting heads is going to be 0.5 The second time I flip that 0.5 Third time and so forth is going to be 0.5 So the likelihood of flipping a quarter and receiving multiple heads in a row is going to be independent of one another So when we talk about independent events, we're trying to think of the outcomes in Contrast to dependent events Let's say we're talking about horse number five on the I guess your right hand side If my question was what's the likelihood that horse number five is going to win the big derby? One of the things I would say is well, we need to think about what are what are the weather conditions? Is it rainy? Is it sunny? Perhaps we want to think about the age of the horse the health of the horse There can be other things that are that are tied up in the likelihood of Horse number five winning so in this context the probability of horse number five winning is going to be Dependent on other things for example the weather so when we talk about naive bays We we our assumption is we have independent events So when we talk about emails, we're really going to be thinking about the words that make up the emails So let's think about these words if I was going to say what's the likelihood of the word messy appearing with the word Barcelona? We're going to assume that there's no relationship that's what naive base tells us to do even though in our heads we might think that there's a relationship or Back to some really spammy language. 
We love What's the relationship between by and now we're going to assume that there is no relationship that the likelihood of buy is not going to be impacting the likelihood of now appearing in a corpus of words for an email So naive base and spam classifiers again our question is what is the probability of an email being ham or spam? So these the base theorem here in the middle. We've got three things. We need to kind of think of One what's the likelihood of the predictors in the class? to the prior probability of the class and three the prior probability of the predictor All of these together will help us compute the the a posteriori probability of a class So when I say class our class is here ham spam Those are the only two classes we have our predictors are going to be the words in the email itself So for example if I'm looking at a piece of content and I say okay Well, what's the likelihood of a predictor being in the spam or ham class? We can say if I'm looking at the word free we can think of it as well 28 out of 50 spam emails Have the word free we will do this for each word in our Email and we will find the likelihoods of all the predictors and multiply them together We also then need to consider the prior probability of the class So given the entire collection of data we're looking at how many of them are of one class and how many of another So for spam if we have 150 emails we're working with we can say 50 of those documents are Spam so 50 out of 150 and then the prior probability of the predictor We're here saying well how many times has the word free appeared in all of our emails? Let's say it's 72 out of 150 and there you go So these the base theorem is basically frequency tables how many times has this thing appeared? How many times has it appeared? How many times has it appeared in the class how many times has this class appeared in the collection of things that we're looking at? Great we've we've made some calculations. We found we found some values between zero to one. How do we know which one to pick? Pretty easy whichever one has the higher maximum a posteriori probability So the reason why we would say a posteriori here is we're not looking at anything new We're looking at historical data things that have already happened Once we've made a calculation for Class ham and for class spam We simply just pick the larger of the two and we say this email is going to be Either ham or spam pretty simple So why naive bays? Well, I think just walking through this we can arrive at an answer. It's pretty straightforward It's as simple as frequency tables. I think we can all do this together It may seem a little bit daunting at first, but once you start realizing the application of it you can see that it's pretty straightforward So for the context of if you are starting to think about classifiers and problems you want to start looking at I would say this is a great one to start with the math is accessible And while you can use other algorithms, we will talk about some of the limitations in a moment This is a good one to start with So that's great, but how do I use python to detect spam? Okay, well, I cheated a little bit. I didn't do all my own data collection and munging and cleaning As fun as that is I instead went to find a data source out there that already was cleaned and labeled for me and where did I get it? I got it from Kegel in the classroom. So this is a this is a Website that has competitions. So the classroom component is more of their teaching problems. 
They have open competition problems as well But I loved that my data was cleaned and labeled and I could just get right to work building a thing So in our example here our training data has 2500 emails 71721 of them which are labeled one as ham and the balance labeled as spam, which is zero So the labels themselves are just in a csv. We have an id and we have the prediction zero or one Pretty straightforward and the that's a little grainy. I apologize, but the emails themselves are collections of text with some HTML in it So what are we going to use when we write our very very simplistic naive Bayes spam classifier? We're going to use these three things. We're going to use email. It's going to go ahead and parse our emails into message objects We're going to use LXML because as I said those emails have some html embedded in it And right now all I care about is the is the words themselves So I want to strip that stuff out and then we'll use nltk natural language toolkit and that's going to help us to filter out stop words So let's go ahead and get to it and train the spam filter So the training of the Python naive Bayes classifier when when I say train We're going to go through these steps. The first thing we're going to do is we're going to tokenize the text We will explain that in just a moment One thing I do want to say is when we look at all the corpus of words in an email I am not treating words like shop and shopping as the same word. You can actually do that That's called stemming. So that would be like a bonus feature. I encourage you to go try that on your own So I didn't do that for this example. So we're going to go ahead. We're going to tokenize our words which That we're going to do that for each email that we process. We want to then Keep track of the unique words that we see of all the documents that we process This will come into effect to help us with zero word frequencies We are going to then increment the word frequency for each category. So our category is here being ham or spam We're going to increment the category count Which again is that prior probability of the classes that we needed to take into account And then we're also just going to keep a track of how many words are in each category and It's good to know how many training examples we've actually processed. So that's the last step So training is pretty much Going to start with this Tokenizing text into a bag of words That's what it is. It's a bag of words. So essentially, uh, this is very simplistic I've kind of trimmed it down a little what we want to do is we want to pull out the words This is already after we've removed the html that's embedded and we're going to say hey for each word in our text Let's go ahead lowercase the word We're going to say if it's a word because why not? And we're going to say as long as this word isn't in our in the corpus of stop words for the English language Let's go ahead and keep it. So stop words are words like the and or words that have may appear May appear often but may not provide us a lot of that value and thinking about If this thing is going to be uh spam or not. So you can get that from nltk. I'm glad I didn't have to compile that We go ahead. We do this for each email and now we have a bag of words So remember that zero word frequency thing I was talking about So well, let's think about this. 
So I've done my training and I have a new email in this email that I'm looking at that I'm trying to classify I have the word free But problem I've never historically have seen the word free in the spam collection of emails that I've looked at So what's going to happen when I calculate the likelihood of all my predictors? I'm going to get zero So to offset that what we can do is we can add a small constant Like which laply smoothing permits us to do and that allows us to have a small offset so that it doesn't throw our math out the window So let's talk about classifying All right, so this is a giant wall of text But I just wanted to point out that it's quite literally iterations and countings Dictionaries that's all this is there is no black box magic here Uh, essentially what we what we do in the classifies we say for each category that we're going to To create this aposteria probability. We want to go ahead find the probability of all the predictors We want to then multiply that by the prior probability of the classes itself And we're going to pick the one that has the higher value and that's what we classify the email as Not very magical So in the get predictors probability if we see something we haven't seen before we're going to go ahead and Then add a value of one to that And this point right here about floating point underflow when you are doing computations where you really care about having very precise Uh decimal points you're going to need to use specific objects. You could use a log instead But in this case I use decimal objects And there is a note here which you probably can't read I will share these slides Which comes from the stand for natural language processing description about how to handle um doing a floating point Uh computation and they said use decimal. So that's what I went with So okay performance measurement I've classified I've picked a thing. How do I know how well I did? Okay, so I go ahead my detector says let's train and evaluate What I eventually come out with is I have 223 that are correct 27 incorrect My performance measurement is about 89 percent as a small footnote The idea of about 90 accuracy I believe is a benchmark We obviously can do better here and we'll talk about what doing better can mean in a moment So the idea of how to split up our training data, let's do a 90-10 split. It's Pretty much what I've seen as a standard I'm sure given different problem spaces you might want to chunk things up differently, but I went with a 90-10 split Essentially, all I did was say hey on 90 percent of my data. Let's go ahead classify Let's go ahead and train that is and then on 10 percent We're going to go ahead and classify And how do we know if the thing is incorrect or correct? Well, whatever Whatever label we ultimately assigned it check that labels dot csv see if it's correct see if it's incorrect And it's basically straight math. So that's how we got the 89 percent So some things to watch out for They're false positives. Oh, this is really fun So for example, uh, google does things really well, right? They do really good with spam filtering, but even they can have some flaws So I do actually like to sign up for pedagonia emails and this email was actually flagged a spam So we basically a false positive is when something is incorrectly identified, right? So you can run into this so we can say well when something is incorrect What's the problem is it that there are implementation because we're talking about naive bays is it too naive? 
One way we can also correct this we can tell google and say hey, this is actually not spam So I can actually validate the data and send it to them and they can put it into their Implementation and try to auto correct for that in the future So false positives are a thing to watch out for And some limitations with naive bays and some challenges Obviously this independence assumption is very very simplistic if I get a marketing email about Barcelona and they aren't talking about messy. I'm going to be very confused Uh, granted there are some talks about him being traded. So we shall see But obviously this independence assumption is quite simplistic. It is not the way that things work in the real world What are the side effects of that? Well, one of the things is then we're going to go ahead and overestimate the probability of the Of the label ultimately selected meaning we're going to create more binaries We're going to say it's either more to left or more to the right and how it aligns with the category label And also we can think about this remember how I said I cheated and I didn't go and label on my own data Well, here's the other thing human error this type of algorithm Classifiers are called supervised learning They they require historical labeled sets of of data to to go ahead and learn from in order to make predictions Well, human error can be prone in this data process What happens if let's say I'm a professor and I'm making use of all my student lackeys and some of them have been up all night And 10 of them looked at the same email and they all came up with different labels for it But it's in my training set that's going to be very inconsistent. So I need to think about that as well How is the labeling of the data happening? So as much as we don't like to think about data munging data cleaning data collection That's actually a really important part of the process when working with machine learning problem supervised machine learning problems So how can we improve our performance? Well We can do more and better feature extraction because while I would like to say that emails can only be identified by the Words in them. We know that's not true predicting sentiment of emails is very complicated Very difficult natural language processing is a huge field I'm not getting into that myself, but you know, we need to think of other ways we can identify spam So what are some things? Perhaps the subject is there something weird in the subject I can pay attention to What about the images is there an abundance of images in spammy emails or maybe there's none? I don't know How about the sender remember that like really cool email address with the z in it because clearly I would trust aol emails Whatever that was then again, I don't trust most aol stuff So that's another thing But you know some other ones if we were just going to think about what to go ahead and consider Other possible features we can think about capitalization Irregular punctuation things like that. Ultimately. We also want more data. So do you like data on star track and have more? Want to learn more go to kegel. They're super sweet I also would highly recommend sarah gido's introduction to machine learning with python She's a great data scientist at bit.ly and I've heard great things about this And also your local friendly python user group. We love talking. 
We love learning together talk to people here There's a great talk after this talking more about machine learning stay for it So thanks And if anything I hope what you may have learned is correlation may be causation or causation may be correlation I don't know so um we can implement a thing but the question then Comes to how do we interpret those results? And that's where I challenge you to go ahead and try some things out Thank you so much y'all Any questions I did such a great job no one has any questions All right cool Well if you do have questions, um, I'll be hanging out in this area out here for a few minutes But like I said, I do want to hear the next talk. So I'll be around my name's Lorena. Please reach out and say hi It's a pleasure to be here. Thank you so much for listening Thank you
Lorena Mesa - Is that spam in my ham? Beginning programmers or Python beginners may find it overwhelming to implement a machine learning algorithm. Increasingly machine learning is becoming more applicable to many areas. This talk introduces key concepts and ideas and uses Python to build a basic classifier - a common type of machine learning problem. Providing some jargon to help those that may be self-educated or currently learning ----- Supervised learning, machine learning, classifiers, big data! What in the world are all of these things? As a beginning programmer the questions described as "machine learning" questions can be mystifying at best. In this talk I will define the scope of a machine learning problem, identifying an email as ham or spam, from the perspective of a beginner (non master of all things "machine learning") and show how Python can help us simply learn how to classify a piece of email. To begin we must ask, what is spam? How do I know it "when I see it"? From previous experience of course! We will provide human labeled examples of spam to our model for it to understand the likelihood of spam or ham. This approach, using examples and data we already know to determine the most likely label for a new example, uses the Naive Bayes classifier. Our model will look at the words in the body of an email, finding the frequency of words in both spam and ham emails and the frequency of spam and ham. Once we know the prior likelihood of spam and what makes something spam, we can try applying a label to a new example. Through this exercise we will see at a basic level what types of questions machine learning asks, learn to model "learning" with Python, and understand how learning can be measured.